# Blended Learning in K-12/The many names of Blended Learning
Blended Learning has been around for many years, but the name has
changed as the uses and recognition have increased. Many people may be
using a form of blended learning in lessons and teaching, but may not
realize it or be able to give it an actual name. Blended learning is
something that is used in the world of education as well as the world of
business. Blended learning is not a new concept, but may be a new term
to many users. Below is a list and explanation of just a few of the more
common, but older, names of blended learning.
\"You may hear blended learning described as "integrated learning",
"hybrid learning", "multi-method learning" (Node, 2001). \"The term
\"blended learning\" is being used with increasing frequency in both
academic and corporate circles. In 2003, the American Society for
Training and Development identified blended learning as one of the top
ten trends to emerge in the knowledge delivery industry\" (cited in
Rooney, 2003) (Graham, 2004).
## **Blended Learning - Descriptions**
Blended Learning: Learning methods that combine e-learning with other
forms of flexible learning and more traditional forms of learning.
(Flexible Learning Advisory Group 2004)
Blended learning (also called hybrid learning) is the term used to describe learning or training events or activities where e-learning, in its various forms, is combined with more traditional forms of training such as "classroom" training (Stockley, 2005).
Blended learning is usually defined as the combination of multiple approaches to teaching. It can also be defined as an educational process that involves the deployment of a diversity of methods and resources, or as learning experiences derived from more than one kind of information source. Examples include combining technology-based materials and traditional print materials, group and individual study, structured-pace and self-paced study, or tutorial and coaching (Blended Learning, 2005).
Blended learning can be delivered in a variety of ways. A common model is delivery of "theory" content by e-learning prior to actual attendance at a training course or program to put the "theory" into practice. This can be a very efficient and effective method of delivery, particularly if travel and accommodation costs are involved. This mixture of methods reflects the hybrid nature of the training. (Stockley, 2005)
These explanations show how blended learning is viewed in different situations, different environments, and by different people. As suggested, blended learning involves the use of some technology as well as more traditional methods, allowing the student to work and learn at his/her own pace. Blended learning is a relatively new term; however, the ideas behind this style of teaching and learning are far more established. The following is a list of synonyms or previous terms that are linked to blended learning.
## **Hybrid Learning - Descriptions**
\"Hybrid instruction is the single greatest unrecognized trend in higher
education today\"---Graham Spanier, President of Penn State University
(TLC, 2002).
Hybrid courses (also known as blended or mixed mode courses) are courses
in which a significant portion of the learning activities have been
moved online (generally 30 - 75%), and time traditionally spent in the
classroom is reduced but not eliminated. The goal of hybrid courses is
to pair the best features of face-to-face teaching with the best options
of online learning to promote active and independent learning and reduce
class seat time. Using instructional technologies, the hybrid model
forces the redesign of some lecture or lab content into new online
learning activities, such as case studies, tutorials, self-testing
exercises, simulations, and online group collaborations (NJIT, 2005).
Using computer-based technologies, instructors use the hybrid model to
redesign some lecture or lab content into new online learning
activities, such as case studies, tutorials, self-testing exercises,
simulations, and online group collaborations (TLC, 2002).
Hybrid courses seem to be a precursor to blended learning. Hybrid courses involve a great amount of technology. They also greatly increase the independence of the student by allowing him/her to work at his/her own pace outside of the typical classroom. This is clearly a synonym for blended learning, as the explanations and ideas differ very little from those of blended learning.
## **Integrated Learning - Descriptions**
Teaching strategies that enhance brain-based learning include manipulatives, active learning, field trips, guest speakers, and real-life projects that allow students to use many learning styles and
multiple intelligences. An interdisciplinary curriculum or integrated
learning also reinforces brain-based learning, because the brain can
better make connections when material is presented in an integrated way,
rather than as isolated bits of information (ASCD, 2005).
ILS (integrated learning system): A complete software, hardware, and
network system used for instruction. In addition to providing curriculum
and lessons organized by level, an ILS usually includes a number of
tools such as assessments, record keeping, report writing, and user
information files that help to identify learning needs, monitor
progress, and maintain student records (ASTD, 2005).
As stated by Node in the introduction, integrated learning is also seen
as a stepping stone to blended learning. This term shows a need for more
methods of teaching than the traditional classroom can offer. This is of
course the basis for blended learning. Integrated learning allows a teacher to provide instruction to students in a way that will be meaningful and interesting to each learner. This again is the main
concept behind blended learning.
## **Multi-method Learning or Mixed Mode Learning**
As a capability, learning is often thought of as one of the necessary
conditions for intelligence in an agent. Some systems extend this
requirement by including a plethora of mechanisms for learning in order
to obtain as much as possible from the system, or to allow various
components of their system to learn in their own ways (depending on the
modularity, representation, etc., of each). On the other hand, multiple
methods are included in a system in order to gauge the performance of
one method against that of another (Cognitive Architectures, 1994).
Node also mentions multi-method learning and its connection to blended learning. Again, it is not a synonym of blended learning, but rather a stepping stone. It shows how teaching outside the traditional box can be more meaningful to learners.
## **e-learning**
\"e-learning is a broader concept \[than online learning\], encompassing
a wide set of applications and processes which use all available
electronic media to deliver vocational education and training more
flexibly. The term "e-learning" is now used in the Framework to capture
the general intent to support a broad range of electronic media
(Internet, intranets, extranets, satellite broadcast, audio/video tape,
interactive TV and CD-ROM) to make vocational learning more flexible for
clients\" (ANTA 2003b, p. 5).
\"E-learning (elearning, eLearning) is the newer, more encompassing term
for those activities previously described by the term \"computer based
training\". Computer based training has existed for many years now\"
(Stockley, 2005).
e-learning is not a synonym for blended learning, but it is a major component of a successful blended learning unit. Without e-learning there would be no real technology component, and without technology there would be little hope of blended learning. The concept of e-learning is therefore a major aspect of blended learning and one that needs to be recognized and understood by those interested in blended learning.
## **Flexible Learning**
The provision of a range of learning modes or methods, giving learners
greater choice of when, where and how they learn. See also Flexible
delivery.
www.trainandemploy.qld.gov.au/tools/glossary/glossary_f.htm
Describes an educational regime providing pathway choices and learner
control of the learning process.
www.lmuaut.demon.co.uk/trc/edissues/ptgloss.htm (Google Web, 2005)
\"The term flexible learning is referred broadly to mean increased
learner choice in content, sequence, method, time, and place of
learning. In addition it is also associated with increased flexibility
in administrative and course management processes. However it is
interesting to note that most literature refers to online learning as
only a form of flexible learning, but there is a clear emphasis on the
use of online technologies to achieve flexible learning goals\" (VU
TAFE, 2004).
Flexible learning aims to meet individual needs by providing choices
that allow students to meet their own educational requirements in ways
suiting their individual circumstances. Choices may be offered in:
- time and/or place of study - opportunities to study on- and off-campus or combinations of both;
- learning styles and preferences - the availability of a range of learning resources and tasks to suit individual needs;
- contextualized learning - the ability to tailor some or all of the learning content, process, outcomes or assessment to individual circumstances;
- access - flexible entry requirements, multiple annual starting points, recognition of prior learning, articulation between programs of study and cross-crediting arrangements;
- pace - unit completion on the basis of achievement of specified competencies rather than according to a pre-determined calendar;
- progression - flexible progression requirements and teaching periods allowing accelerated or delayed completion of study; and
- learning pathways - degree requirements allowing choice in programs of study.
The student-centered approach underpinning flexible learning requires different teaching methodologies and also a different relationship between teachers and students. In comparison to traditional educational models,
flexible learning is broadly characterized by:
- less reliance on face-to-face teaching and more emphasis on guided independent learning; teachers become facilitators of the learning process, directing students to appropriate resources, tasks and learning outcomes
- greater reliance on high quality learning resources using a range of technologies (e.g., print, CD-ROM, video, audio, the Internet)
- greater opportunities to communicate outside traditional teaching times
- an increasing use of information technology (IT). Flexible learning is not synonymous with the use of IT, but IT is often central to much of the implementation of flexible learning, for example in delivering learning resources, providing a communications facility, administering units and student assessment, and hosting student support systems
- the deployment of multi-skilled teams. Rather than the responsible academics undertaking all stages of unit planning, development, delivery, assessment and maintenance, other professionals are often required to provide specific skills, for example in instructional design, desktop publishing, web development, and administration and maintenance of programs (Centre for Flexible Learning, 2005).
Flexible learning is the main concept and reasoning behind blended
learning. The point of creating blended learning was to allow flexible
learning for the students. This term then is also a major aspect of
blended learning. It needs to be recognized and understood by users in
order to better create and deliver a blended learning unit to students.
As can be seen from the above examples and explanations, technology is not a given in blended learning, but an addition. Instead, we need to look at blended learning as using methods outside the traditional classroom and traditional teaching to interest and reach all learners. These methods could involve technology, or some other form of teaching, such as guest speakers or manipulatives, to support learning and reach every student in the class.
# Blended Learning in K-12/Why is Blended Learning Important?
Now that you know what blended learning is, you may be asking yourself,
\"Why do I need to know about blended learning? Why is it important?\"
Over the years, many groups of people have asked this same question.
These groups have included classroom teachers and others in the field of
education, but have also included people in the business field as well.
Blended learning makes up the "fastest growing use of technologies in learning---much faster than the development of online courses." (Alvarez, 2005) Because of the interest in blended learning, there have
been many research studies done to find the potential strengths and
weaknesses of blended learning as compared to just the traditional
classroom or e-learning.
## In Education
Educators seem to have the most interest in blended learning, for
obvious reasons. Because of this, much of the research on blended
learning has been based around classroom situations. All levels of
education have been researched with blended learning, from the
elementary school grades up to graduate school. Educators' interest in blended learning is best summarized by Flavin in his *E-Learning Advantages in a Tough Economy*. He states:
Ironically, the notion of blending is nothing new. Good classroom
teachers have always blended their methods---reading, writing, lecture,
discussion, practice and projects, to name just a few, are all part of
an effective blend. Blending is only a revelation for those who have
been trying to do everything with just one tool---usually the
computer---and ending up with less than ideal results. Understanding
that using the right tool, in the right situation, for the right purpose
should be a guiding design principle. (Flavin, 2001)
One clear advantage of blended learning in education is its connection
with differentiated instruction. Differentiated instruction involves
"custom-designing instruction based on student needs." (deGula, 2004) In
differentiated instruction, educators look at students' learning styles,
interests, and abilities. Once these factors have been determined,
educators decide which curriculum content, learning activities,
products, and learning environments will best serve those individual
students' needs. Blended learning can fit into a number of these areas.
By using blended learning, educators are definitely altering the
learning environment when students work collaboratively in learning
communities online, for example. Teachers could also add relevant
curriculum content that would be unavailable or difficult to comprehend
outside of the internet. Learning activities and products can also be
changed to use technologies in a classroom that uses blended learning.
So what does the research say? In a study by Dean and associates,
research showed that providing several online options in addition to
traditional classroom training actually increased what students learned.
(2001) Another study showed that student interaction and satisfaction
improved, along with students learning more, in courses that
incorporated blended learning. (DeLacey and Leonard, 2002)
Another advantage of blended learning is pacing and attendance. In most
blended learning classrooms, there is the ability to study whenever the
student chooses to do so. If a student is absent, she/he may view some
of the missed materials at the same time that the rest of the class
does, even though the student cannot be physically in the classroom.
This helps students stay on track and not fall behind, which is
especially helpful for students with prolonged sicknesses or injuries
that prevent them from attending school. These "self-study modules" also
allow learners to review certain content at any time for help in
understanding a concept or to work ahead for those students who learn at
a faster pace. (Alvarez, 2005)
Because of the ability of students to self-pace, there is a higher completion rate for students in blended learning classrooms than for those in strictly e-learning situations. (Flavin, 2001) This self-pacing
allows for the engagement of every learner in the classroom at any given
time. Students also see that the learning involved becomes a process,
not individual learning events. This revelation allows for an increased
application of the learning done in the classroom. (Flavin, 2001)
With the given research, it is clear that using blended learning in
education improves the teaching and learning done in a given course.
Educators want to teach in a way that best reaches all of their
students. If blended learning accomplishes this, then more teachers will
begin to use these methods. When teachers begin to explore blended
learning and the resources that can be found through the internet and
other technologies, they can structure their classroom in a way that
best suits their teaching style and their students' learning styles.
Blended learning allows "\[teachers\] and \[their\] students to have the
best of both worlds." (Alvarez, 2005) The traditional classroom and
e-learning both have advantages and disadvantages. As Alvarez states,
"the online environment is not the ideal setting for all types of
learning. Classrooms are not perfect either.... That's why so many
teachers and corporate trainers are concentrating their efforts on
integrating internet-based technologies and classrooms to create blended
learning environments. It just makes good sense." (2005)
As stated above, blended learning should provide students and teachers with the best of both worlds. I agree that this should happen, but I am not convinced that it does. Teachers who have been teaching for numerous years may be stuck in their traditional teaching methods. These same teachers may not be technology literate, which in turn limits what they can do with technology. Teachers must be trained and required to use some type of technology in the classroom if blended learning is going to take place successfully. The teachers may use a variety of teaching styles, but without technology the students are being cheated out of what they need to be successful in today's world. Proper teacher training is the only way teachers and students will "get the best of both worlds." Bret M. Helms
## In Business
Blended learning is also of interest to the corporate world. Through different studies, blended learning has been shown to be an effective tool in worker training and education. One of the best advantages of blended learning for the business world is its cost-effectiveness. When a business relies purely on instructor-led training, besides paying the cost of the trainer, there are also transportation, hotel, food, and other expenses. Blended learning helps reduce these expenditures by reducing the amount of time needed for face-to-face instruction.
(Alvarez, 2005) "Effective blending lets an organization spend the
dollars in the most beneficial, cost effective way." (Flavin, 2001) A
business can decide what mix of face-to-face and e-learning would best
fit their learning objectives.
One study which illustrates some of the benefits of blended learning in
today's business world is the Thomson Job Impact Study. (2003) This
study had 128 participants from a number of corporate and academic
organizations, including Lockheed-Martin, Utah State University,
National Cash Register, and the University of Limerick in Limerick,
Ireland. The researchers wanted to determine if blended learning
increased the overall learning in a number of areas. What they found
supports the use of blended learning in the corporate world. The blended learning group "significantly" outperformed the traditional and e-learning groups in spreadsheet application performance, and it took less time to complete the real-world tasks than did the e-learning
group. Overall, the blended learning classroom achieved a performance
improvement of 30 percent. (Thomson, 2003). Given these apparent
benefits, it is only natural that the business world is now also
incorporating blended learning techniques into their employee training
and education programs. By doing a quick internet search, a business
could find a number of manuals and examples of how to incorporate
blended learning strategies into their own programs. There are also
companies which specialize in bringing blended learning programs into
the business world.
As one can see, the benefits of using blended learning have been
carefully researched. Most people would agree that these benefits
support the use of blended learning in the classroom and in the business
realm as well. It is up to the individual educator or business how best to use the tools of blended learning to meet their own goals and those of their students/workers.
# Blended Learning in K-12/Types of Blended Learning
As stated previously, K-12 may be the last to utilize Blended Learning, but it has certainly gone beyond the 'trickle down effect.' There is a growing trend to use technology in K-12 classrooms, and more grants and financial opportunities are making Blended Learning a reality. Because a teacher has multiple tools available when incorporating *Blended Learning* in the K-12 classroom, it is important to highlight what is available to create a blended learning environment.
According to \"Building Effective Blended Learning Programs\" by Harvey
Singh, \"Blended Learning Programs may include several forms of learning
tools, such as real-time virtual/collaboration software, self-paced
Web-based courses, electronic performance support systems (EPSS)
embedded within the job-task environment, and knowledge management
systems. Blended learning mixes various event based activities,
including face-to-face classrooms, live e-learning, and self-paced
learning\" (Singh, 2003).
\"From a course design perspective, a blended course can lie anywhere
between the continuum anchored at opposite ends by fully face-to-face
and fully online learning environments.\" (Rovai and Jordan, 2004) This
chapter presents explanations of the various types of tools available to
create a blended learning environment in the classroom. The following
categories will be used to organize major types:
1. The use of Multimedia and Virtual Internet Resources in the classroom. Examples include the use of videos, virtual field trips, and interactive websites.
2. The use of Classroom Websites in the classroom. Included is a growing list of examples of useful blended learning websites.
3. The use of Course Management Systems. Examples include the use of Moodle, WebCT and Blackboard.
4. The use of Synchronous and Asynchronous Discussions in the classroom. Examples of resources available include Yahoo Groups, TappedIn, Blogs, and Elluminate.
# Blended Learning in K-12/Characteristics of Blended Learning
What are the main characteristics of Blended Learning? What makes
blended learning unique, different, special?
The term "Blended Learning", while popular, has been subjected to
criticism, mostly because the definition lacks specificity. (See section
1.1 \"What is Blended Learning?\")The term applies to diverse
situations, including professional development in the business world,
and technology integration in the K-12 and university settings, and
describes a range of instructional practices. Indeed, one critical
article has suggested that it is the very focus on instructional
practices that is problematical, and that blended learning should
instead focus on content from the learner's, rather than the
instructor's, perspective. (Oliver and Trigwell, 2005) Although
criticizing the terminology appears to be merely a discussion over
semantics, the lack of specificity raises an interesting question: What
are the characteristics of blended learning? A number of sources
summarize the following characteristics as being unique to the blended
learning environment.
- General Comparisons in Blended Learning
- Pedagogical Models - blending constructivism, behaviorism and cognitivism
- Synchronous and asynchronous communication methods

Taking each of these in turn, we may attempt to characterize blended learning by examining how each blend appears in practice. In other words, by describing these blends as they should appear in a "real" context, we can begin to understand blended learning not as a textbook definition, but as a concept for learning and instruction in the twenty-first century.
# Blended Learning in K-12/General Comparisons in Blended Learning
**Offline and Online Learning** One of the most distinguishing characteristics of blended learning is its ability to combine two different forms/settings of learning and instruction. (Singh, 2001) In blended learning, instruction takes place in an offline and an online setting. In the offline setting, the instruction takes place in a traditional, face-to-face classroom. The online setting usually takes place using the Internet. Although there are distance courses designed to have all of their instruction take place online through the use of the Web, blended learning utilizes the atmosphere of both offline and online settings.
The *dual settings* of online and offline learning are optimally
combined to administer the responsibilities of sharing content,
establishing and continuing communication, and stimulating interaction.
Ideally, the online and offline components of a blended learning class
are more or less symbiotic, where the interactions and successes of each
setting feed off each other.
In blended learning, the web enhancements of the online portion contribute not only to the pragmatic goals of the classroom, but also to the pedagogical goals. (Wingard, 2004) The
percentage of time and activity spent by students either on the online
or offline classroom is usually dependent on the nature of the course
and the preference of the instructor.
Early results of studies show an increase in student-instructor
interaction and student preparation in use of course material. (Wingard,
2005)
**Structured and Unstructured Learning** Structured or formal learning occurs when content is organized like chapters in a textbook.
Unstructured learning takes place informally online through synchronous
and asynchronous discussions as well as e-mail correspondence. In a
blended learning environment instructors can develop a program that
incorporates both types of learning together. (Singh, 2001) Although
traditionally used in higher education, blended learning is making its
way into elementary, middle, junior high, and high schools around the
country.
A structured learning program must encourage students to be actively
engaged. More importantly, it must allow the instructor to track student
use of the program, manage access to the next stage on the basis of
completion or assessment, and follow up with another form of
communication to students who are not completing work. (Hoyle, 2003)
Finally, it should have specific learning objectives and expected
outcomes. A blended learning environment with a structured program can
benefit those students who learn better on their own rather than in the
traditional classroom setting. (Zenger and Uehlein, 2001) This can be especially helpful in the K-12 classroom as individual student needs
must be met. One common pitfall of structured online learning is merely
the repackaging of current class curriculum and placing it in an online
environment. (Hoyle, 2003)
In an unstructured online environment, students actually have some
control of their learning experiences. Some students prefer the
discovery method of learning while others prefer more straightforward
content. (Zenger and Uehlein, 2001) The freedom to interact and
collaborate with peers without the teacher looming overhead can be
highly motivating for some students. This would be beneficial for
younger students with learning disabilities in that they may recognize
their individual strengths in this new environment. In an unstructured
learning environment, assessment is especially important to ensure
objectives have been met. (Hoyle, 2003)
Best practices in blended learning contain structured and unstructured
components.
- Create a structured core curriculum of learning activities that are
taught using a variety of instructional methods.
- Support an environment in which students can learn smaller parts and
work their way up to more complex ideas.
- Create a classroom in which students can learn informally.
- Provide technological support for students.
- Provide an easy to use environment. (Oakes and Casewit, 2003)
Personal Experiences -- Lisa Abate (2004) developed an unstructured
asynchronous component to her classroom while on maternity leave. An
online classroom was created using a common educational website where
Lisa communicated with her students. She integrated the concepts her
substitute teacher discussed in the classroom and included online
extensions for her students. Not only did these teachers "team teach",
but they did it from different places within the blended learning
environment.
**Off-the-Shelf Content vs. Custom Content** One of the greatest
advantages of Blended learning is that it gives students and instructors
the opportunity to utilize a wide range of resource materials. Although
there does not seem to be much research about the types of materials
used, students and instructors are not limited to those resources which
can be exploited in a traditional face-to-face setting. Rather, the
instructor can combine all of the traditional resources, which include
lectures and assigned readings, with interactive, self-paced materials.
Traditional textbooks increasingly offer multi-media components as part
of their ancillary materials. These might include opportunities for
extra skill practice, online assessments, and links to resources that
would not realistically fit into a traditional bound book. The
instructor may include these extra resources as part of the learning
modules in a blended course. Theoretically, the instructor could create
a blended course entirely from these ancillary materials for a
completely off-the-shelf learning experience.
Alternatively, the instructor could choose to create a course without
utilizing a traditional textbook at all. Using the wealth of material
that is available online, the instructor can synthesize materials to create a unique learning context, combining recorded lectures, online articles, and material uploaded by the instructor. While this has
always been possible in traditional face-to-face courses, particularly
at the post-High-School level, it has not always been feasible for the
K-12 instructor.
An additional possibility, which would not be possible in a traditional
setting, is the creation of the forum as a learning resource. (NSW,
2005) Although discussion has been a part of face-to-face instruction,
the learning forum provides a virtual space in which discussion can
occur, as well as a mechanism for recording the discussion that ensues.
Students not only construct their own knowledge in this space, but can
return to it at a later time for clarification. The instructor can also
use this permanent record to assess student participation, and include
new viewpoints as resources for subsequent learning modules or courses.
**Implications for K-12 Education**
**Restructuring the Class** One of the most difficult challenges in transforming a traditional K-12 class into a blended learning classroom is the shift it requires in an educator's teaching paradigm. Instead of
preparing the necessary materials for the off-line setting of a
traditional classroom, the blended learning K-12 teacher now must also
organize and maintain the on-line, virtual classroom. The time,
planning, and organization needed to restructure a K-12 blended
classroom requires the educator to, in essence, plan and organize a
class in two settings, which complement each other in terms of
scheduling, content, lesson flow, and organization.
A blended learning classroom requires the K-12 instructor to develop
teaching materials prior to the start of class. Other K-12 classroom
models have also used other characteristics of blended learning for
instructing students through the Web and other electronic tools.
Web-enhanced campus courses, Web-centric courses, Web-courses using
distance learning and campus settings, and traditional distance courses
have implemented web use to certain degrees. Each category of web
integration requires a varied percentage of packaged developed material,
ranging from fifty to one hundred percent. (Boettcher and Conrad 1999) However, the percentage of prepared web material in a blended K-12 course ultimately depends on teacher preference, student/school technological capabilities, and other lesson-dependent factors.
Like the other models of Web integration in the K-12 classroom, blended learning courses need to successfully address and incorporate the following four components:
- Administration -- organization of the syllabus, increased teacher productivity/efficiency, distribution/collection of material, and scheduling duties
- Assessment -- providing feedback, tracking student progress, and
testing opportunities
- Content Delivery -- communicating content through different learning
styles, using multimedia, incorporating learning activities, using
the Internet for the acquisition of knowledge
- Community -- building the classroom community through
synchronous/threaded chats, providing office/help hours to
communicate online (Schmidt 2002)
**Benefits of Blended Learning** One of the potentially most significant benefits of blended learning is in the area of student accessibility. According to Jeffrey, the ability to use the Web for the classroom has the potential to serve any student, at any time, in any place. (Jeffrey 2003) Likewise, the characteristics of blended learning give its K-12 students the same advantages in terms of accessibility. The online components of a blended learning course could minimize the accessibility concerns for the following K-12 students who cannot meet in the traditional classroom:
- students in rural or small school districts where the proximity of
the classroom is the main challenge to content/material
accessibility
- home-schooled students with instruction in subjects their parents
feel unable to teach
- handicapped or hospitalized students who cannot travel to the
traditional classroom
- expelled students who are required not to attend the traditional
classroom as a consequence but still can have access to material to
prevent falling behind academically (Jeffrey 2003)
In addition to addressing accessibility issues, blended learning's ability to collect and organize digital content can also eliminate the use of physical textbooks in the classroom. Electronic content and resources can substitute for the information found in textbooks, or electronic copies of textbooks can be downloaded onto computers and laptops, thus eliminating the high cost of purchasing textbooks and the physical concerns some educators have about students carrying heavy textbooks. The delivery of textbook
information in an electronic format seems ideal for blended learning
classrooms. According to one article, allowing teachers to use digital
media instead of prescribed textbooks can open up all kinds of
creativity and empowering tools of instruction. (Colin 2005)
# Blended Learning in K-12/Pedagogical Models- blending constructivism, behaviorism and cognitivism
One of the harshest criticisms of Blended Learning is that the focus
tends to be on the instructor, rather than on the learner. (Oliver and
Trigwell, 2005).
Alonzo *et al.* point out that the concept of e-learning is new enough that practitioners have not yet begun to apply pedagogical principles to the process of e-learning. (2005) Ideally, they continue, e-learning (and therefore Blended Learning) should be focused on the individual learner. The course designer should be able to utilize cognitive and constructivist theory to design an effective course. (Alonzo *et al.*, 2005)
This course would be carefully organized so that students can easily
insert new knowledge into their pre-existing schema. The organization
should then reinforce the acquisition of new knowledge and activities
should provide a scaffolded approach to help learners practice new
skills by applying their knowledge. All of this is consistent with
cognitive theories of learning, which tend to focus on the processes of
information acquisition, organization, retrieval, and application.
Constructivist theories of learning describe learning as a process
whereby the learner takes in new information, and inserts it into
existing schema. Each learner constructs meaning differently based upon
their own experiences. In other words, there is a disconnect between
knowledge that is taught, and knowledge that is learned, because the
learner will re-interpret what is being taught, and construct his or her
own meaning from that knowledge.
To support the constructivist approach, a learning community should be
created, and then guided through the process of collaboration so that
learning is constructed by the group, rather than just the individual.
(Alonzo *et al.*, 2005)
In a traditional face-to-face learning environment, one of the more
common methods of constructing group meaning is through discussion. The
instructor typically begins the discussion by posing a question. The
instructor then invites members of the class to make an impromptu
response. Other class members then respond to the first student, and a
discussion develops. In this way, students are exposed to several
perspectives, and the answer to the original question is constructed for
each learner based upon the individual\'s assessment of the group\'s
responses.
In a blended environment, this discussion format can be easily adapted
and enhanced. The discussion could be held synchronously, in group chat,
or could be held asynchronously, in a forum to which learners post
responses. In a blended environment, students have the capability of
responding to several points at once. Since an asynchronous discussion
can continue over a longer period of time, students can take time to
formulate responses, and can respond to a particular part of a comment,
even if the discussion has taken another route.
# Blended Learning in K-12/Synchronous and asynchronous communication methods
In blended learning, instructors use facets of self-paced instruction and live, collaborative learning to moderate the offline setting. These are known, respectively, as asynchronous and synchronous learning. These methods of teaching and learning are essential in encouraging active participation in the blended learning environment. (Im and Lee, 2003)
Online discussions have the potential to enhance students' learning and
may lead to cognitive development. (Fassinger, 1995) In addition, preconceived notions of race, gender, educational abilities or social status of the students are virtually erased. (Im and Lee, 2003) This can
be extremely beneficial with the number of social cliques in both junior
high and high school. The key to the learning process includes
interactions among the students themselves, the interactions between
instructor and students, and the collaborations in learning that result
from these interactions. (Jin, 2005)
A live, collaborative learning environment depends on dynamic
communication between learners that fosters knowledge sharing. (Singh,
2001) Synchronous discussions are extremely beneficial for students who
might not otherwise participate collaboratively within the traditional
classroom. Furthermore, they allow for fast and efficient exchanges of
ideas. (Bremer, 1998) In a traditional classroom setting, participation
of all students is often difficult due to time constraints or simple
shyness. In live, collaborative learning atmospheres the communication
process between learners is just as meaningful and vital as an
educational end product. Collaborative learning emphasizes the following
factors:
- active participation and interaction among learners
- knowledge viewed as a social construct
- environments that facilitate peer interaction, evaluation, and
cooperation
- learners who benefit from self-explanation when more experienced or knowledgeable learners contribute
- learners who benefit from internalization by verbalizing in a
conversation (Hiltz, 1999)
Asynchronous communication encourages time for reflection and reaction
to others. It allows students the ability to work at their own pace and
control the pace of instructional information. In addition, there are
fewer time restrictions with the possibility of flexible working hours.
(Bremer, 1998) The use of the Internet and the World Wide Web allows
learners to have access to information at all times. Students can also
submit questions to instructors at any time of day and expect reasonably
quick responses, rather than waiting until the next face-to-face
meeting. Self-paced instruction will often come in a variety of
asynchronous formats including but not limited to:
- Documents & Web Pages
- Web/Computer Based Training Modules
- Assessments
- Surveys
- Simulations
- Recorded lectures, discussions, or live events
- Online Learning Communities and Discussion Forums (Singh, 2001)
According to Hew and Cheung (2003), there are five phases in the active construction of knowledge, and these phases are traversed through asynchronous communication.

| Phases | Real World Examples | K-12 Examples |
|---|---|---|
| **Phase 1:** Sharing and comparing of information. | Statements of agreement or corroborating examples from one or more other participants. | Students can discuss an assignment with each other for clarification or share data to be analyzed as a group. |
| **Phase 2:** Discovery and exploration of dissonance or inconsistency among the ideas or statements advanced by different participants. | Identifying and stating areas of disagreement, or asking and answering questions to clarify the source and extent of the disagreements. | Multiple student participation ensures feedback with possible differing opinions. Differences can be examined and analyzed while using the Internet for further clarification. |
| **Phase 3:** Negotiation of meaning. | Negotiation of the meaning of terms or identification of areas of agreement or overlap among conflicting concepts. | Heterogeneous grouping would allow many students to share their "meaning" and define it for others. Concepts can be explained at many different levels. |
| **Phase 4:** Testing and modification of proposed synthesis or co-construction. | Testing the proposed synthesis against formal data collected or against contradictory information from the literature. | Students can peer edit each other's work with no face-to-face threat, and may be more honest. Students can collaborate on written assignments. |
| **Phase 5:** Statement or application of newly constructed knowledge. | Summarising of agreements, or students' self-reflective statements that illustrate that their knowledge or ways of thinking have changed as a result of the online interaction. | Students can analyze their group work/opinions/knowledge base and use this information to improve their own work. |
These phases integrate with student goals in a traditional classroom and
further extend student learning with the asynchronous component.
Asynchronous discussions actively involve students and therefore improve
communication between and amongst students and instructors. Inherent
complications would include lack of access to technology and lack of
motivation by the students.
# Blended Learning in K-12/Blended Learning's Lesson Design Process
### Designing a Lesson
The keystone standards of blended learning are akin to other forms of
learning. Identifying the objective, establishing the timescale, and
recognizing different learning styles are basic principles found in any
successful lesson plan. Streamlining the lesson plan comes with
experience, as does determining appropriate applications of the lesson
and its evaluation.
An online environment can foster close relationships between student and
teacher and between student and other students. When students interact
online, opportunities arise for the sharing of personal information and
personal responses. Therefore, students must first understand that their
classmates are to be treated with respect. Instructors should make clear
what information might be confidential and what can be shared. In
addition, students should understand that they are to use language
appropriate to an academic forum and free of slang or jargon (the latter
can be especially difficult when working in education academia).
Students should also know how to invite, accept, and offer feedback in
ways that promote learning.
With the preceding principles established, the focus turns to the design
of the lesson itself. A sample outline follows. Only the main topics are
mentioned here. Like a jazz score, this outline is a framework, not a
crystallized prescription. Practitioners are advised to start here and
then improvise as their experience and proficiency develop.
I. Purpose Statement (the overall intent of the lesson plainly stated)
II. Duration
III. Prerequisites (if any)
IV. Learning Objectives
V. Content/Learning Activities (For each item of content to be addressed, show how it would be communicated to the students and the estimated time needed. This is by far the longest section of the design document.)
VI. Application of Learning Strategy
VII. Evaluation Strategy
The Purpose Statement gives an overall description of the lesson to be
presented. It may include the concept to be taught and the means by
which it may be delivered. Perhaps this statement will state how much of
this lesson is to be performed online and how much is to be completed
during a classroom session. The statement is usually concise, but
divulges an overview of the entire lesson.
When making decisions about the design of a blended learning lesson
plan, there will always be a need for determining an appropriate
timetable. The lesson plan should be designed to include a balance of
online and offline activities, but those activities must remain within
reasonable time limits. Far too many teachers have made the mistake of
heaping an excessive amount of online work on their students. Teachers
are then accused of believing that theirs is the only class in which
their students are currently enrolled. Balance is key. Too much of
either component will cause the lesson or activity to either grow dull
or become overly burdensome, thereby diminishing the learning aspect. At
the same time, the lesson must include enough challenges to promote and
instill the concepts being taught by the lesson. The balance is
delicate, but the very nature of the flexibility of blended learning
helps to maintain that balance. The duration portion of the lesson
becomes streamlined after its first delivery. Ample time must be allowed
to complete activities, but overplanning seems to be the wiser choice.
Students and teachers alike find it awkward to have time remaining at
the end of a lesson with no other learning planned. It is best to have
too many activities planned for a session than to not have enough. You
can always reduce the work load or save it for another time.
Prerequisite skills must be addressed by students and teacher. Offering a traditional lesson to those without the previous skills needed to complete the new lesson leads to frustration for everyone concerned. Opportunities to learn using technology without previously learned technological skills will cause similar frustration. Technical skills learned along with the academic concepts make for a very efficient lesson, provided the academic goals and objectives are met. Technological skills are important, but they should not distract from the academic portion of the lesson; rather, they should enhance it.
The learning objective is the meat of a lesson plan. It is the compass
that guides the teacher throughout the lesson. When expressed to the students, it points the learner in the right direction as well. In a
blended learning class, if the learning objective is rooted in a math
concept, it is crucial that the teacher remain focused on that concept -
not the technological means by which it may be taught. If a technical
skill is being presented, that objective must be made clear in the
lesson plan.
Content and learning activities must be introduced into the lesson plan, with ample practice provided, if the student is to grasp the intent of the lesson. Looking back at the learning objectives and the "Content/Learning Activities Outline" can help answer evaluation questions. "Is my test content-valid, based upon the methods of lesson presentation?" "Should my test include a short review time via a traditional classroom setting, or would an online review better prepare my students for evaluation?" "Should the test be performed online or in the presence of the teacher?" Online tests make for easy and quick grading by the teacher. Security of the test, however, might be diminished depending on the software used by the teacher. Tests taken exclusively in the classroom setting, however, negate the natural lessons of technology. Teachers who evaluate their students' performances by using a mixture of tests - some online, some offline - have experienced more fruitful outcomes. Supplying examples to read as text online or offline proves to be helpful. Presenting video explanations or examples online, where students can view a snippet of the lesson repeatedly, gives enough exposure to solidify an idea or concept. Any tool that can be afforded the student should be considered to improve retention.
The most crucial step in each lesson plan is the preparation of a transfer-of-learning strategy. If learning is not transferred from the place of learning to practical application, there can be no positive return on investment of the time needed to create, implement, and evaluate the lesson plan. Students are smarter than we might think. If the lesson doesn't apply to something tangible or if it can't be used in real life, you can expect them to ask, "When are we ever going to use this stuff?" Make sure that your objectives are made clear to the students. The learning standards must be addressed, yes, but also find a real-life application to better your students' understanding of the materials covered. If this is not done, much of your time, and your students' time, has been wasted. A second look to ensure that students have indeed learned the objectives might trigger revisions, allowing for more (or better) class activities and teacher feedback. This should be done before any evaluation strategy. Technology is useful in simplifying the task of transferring learning. Many times a lesson taught with the use of online instruction, or with technology as its main tool, provides a built-in application. Students see more clearly how the concepts are used in real-life situations, and because the lesson was applied practically, the student retains the information and skills much longer.
A blended learning class is like any other - when lessons are presented,
it is imperative that assessment is given to check the depth of
learning. Caution must be practiced when using online assessment. If
this method was never practiced during the teaching of the lesson, the
student finds himself at a bit of a disadvantage when being tested.
Instead of devoting proper time to the non-technical concepts taught,
the student might be fighting his way through the technical tool he must
use to perform the task at hand. For example, students who have graduated from high school may need to take an assessment test at a nearby community college in order to be placed properly into a Math or English class. If the college issues an online test and the student has no past experience with such a method of testing, the scores can be expected to be lower, causing the student to be placed in a class at an inappropriate level. If online testing is to be used, pretests are advised to familiarize the students with the technical part of the test-taking task. Not doing so raises an equity issue.
Identifying opportunities for learning in blended learning is the same as identifying any learning opportunity. The focus should remain, however. The K-12 teacher must recognize the need to provide the right methods of teaching for his students. The overall intent of the lesson should also address the result of the lesson. Teachers may ask themselves, "What exactly do I want my students to know as a result of this lesson?" Be sure you understand the objectives of the lesson - many times they provide the basis for subsequent lessons. Outline the topics and subtopics that must be addressed by the lesson.
Blended learning is advantageous to the learner. Research has shown the
limitations of applying a generalized style of teaching, rather than
modifying lesson plans to fit the needs of the student. "Increasingly, organizations are recognizing the importance of tailoring learning to the individual rather than applying a 'one-size-fits-all' approach."
(Thorne, 2003) Of course, common needs exist, but blended learning
allows the teacher to look for creative ways and use a variety of media
to address the specific needs of his students.
When a teacher designs his lesson plan, it is important to note the type
of learning activity (e.g. lecture, case study, role play, simulation,
game, etc.) that best conveys the objectives of the lesson. There are
two reasons for listing traditional teaching methods only at this point,
instead of both classroom and online activities:
1. We as teachers usually establish on paper the "ideal" learning experience when working under a more familiar, traditional style of teaching. It is live, face-to-face, instructor-facilitated and student-collaborative.
2. Once you have established the lesson plan for the "ideal" learning experience, you can systematically analyze the elements that can be delivered online without compromising learning effectiveness. You will discover here what might be best left in a classroom setting.
Blended learning is not simply adding an online component to a lesson
plan. Technology in a lesson plan should be used wisely - to enhance the
lesson. Technology should not be used just to show off technology.
Excellent opportunities exist for teachers to make learning interactive, dynamic, and fun when technology is used properly. The technology aspect of a lesson should be like a good baseball umpire - it (like the umpire) is good if it (he) goes unnoticed.
\"Since the intent of blended learning is to enhance learning by
combining the best of both worlds\...elements of the outline that appear
to lend themselves to self-study online should be highlighted. Such
elements tend to include easy-to-interpret, straightforward information
that is relatively easy for the (student) to accurately grasp on his/her
own.\" (Troha, 2003) Students should be able to perform required tasks
online with little or no prompting by the instructor. Of course,
teachers should guide their students along, but when a student can
accomplish a task online with limited assistance, that student
encounters a learning experience that is deeper and more rewarding.
Blended learning courses are dynamic by their very nature. Revisions
will need to be made to adapt to the learning needs of the students.
Knowing what works and what does not comes with experience. The best
resource for K-12 teachers to create and implement a blended learning
course is another teacher or a network of teachers who have had
experience with launching such courses.
The next sections of this chapter address Guiding Principles and Success Tips.
# Blended Learning in K-12/Guiding Principles of Blended Learning
|previous=Blended Learning's Lesson Design Process|Designing a Lesson
|next=Success Tips}}
```
### Guiding Principles
A definitive statement of what constitutes the best combination of
Information and Communication Technology (ICT) and face-to-face learning
experiences is impossible. No such statement exists for the best
combination of traditional practices much less for the newer world of
blended learning. Singh & Reed (2001) state \"Little formal research
exists on how to construct the most effective blended program designs\"
(p. 6). However, observers have begun to collate principles that, at
least anecdotally, lead to greater success.
*One note before continuing. Most of the literature about blended
learning design comes from work in business training and post-secondary
education. The author makes the assumption that those principles are
generally applicable to K-12 education as well.*
A theme that emerges is that the instruction methods, whether on-line or
face-to-face, are the means, not the end. \"Students never learn from
technology per se. They learn from the strategies teachers use to
communicate effectively through the technologies\" (Cyrs, Cyrs, and
Conway, 2003, General Guideline #1). It follows that the single most
important consideration when designing a blended learning environment is
the learning objective or purpose. It is tempting to assert that it is
the _only_ consideration. However, not only would
that lead to this exposition being overly brief, as desirable as that
might be for writer and reader alike, it would also mean ignoring the
essential truth that all learning occurs in and is shaped by a context.
Important dimensions of this context include (Singh & Reed, 2001, p. 5):
- **Audience.** What do the learners know and how varied is their level
of knowledge? Are the learners geographically centralized or
geographically dispersed? Are the learners here because they wish to be
or because they have to be?
- **Content.** Some content lends itself well to on-line situations.
Other content, a complex and detailed procedure for assembling a valve
train, for example, may work best in a face-to-face setting.
- **Infrastructure.** If physical space is limited, more of the
instruction could be placed on-line. If students do not have access to
high bandwidth connections, on-line video streaming would be a poor
choice.
With purpose and context in mind, the designer can select, combine, and
organize different elements of on-line and traditional instruction.
Carman (2002) identifies five such elements calling them key
\"ingredients\" (p. 2):
1. **Live events.** These are synchronous, instructor-led events.
Traditional lectures, video conferences, and synchronous chat
sessions such as Elluminate are examples.
2. **Self-Paced Learning.** Experiences the learner completes
individually on her own time such as an internet or CD-ROM based
tutorial.
3. **Collaboration.** Learners communicate and create with others;
e-mail, threaded discussions, and, come to think of it, this wiki
are all examples.
4. **Assessment.** Measurements of learners\' mastery of the
objectives. Assessment is not limited to conventional tests,
quizzes, and grades. Narrative feedback, portfolio evaluations and,
importantly, a designer\'s reflection about a blended learning
environment\'s effectiveness or usefulness are all forms of
assessment.
5. **Support Materials.** These include reference material, both
physical and virtual, FAQ forums, and summaries. Anything that aids
learning retention and transfer.
It is useful, though ultimately reductive, to think of the interaction
between context and ingredients for a given learning objective as a
rectangular matrix. The intersections suggest the method the designer
should use. The danger of this metaphor is the suggestion that each
purpose, context, and ingredient combination deterministically leads to a
matching method. Such is not the case. The point for the designer is to
think in terms of those conditions, and others unique to her particular
circumstance, as she orchestrates learning activities and creates her
blended environment.
|                    | Live Events | Self-Paced Learning | Collaboration | Assessment  | Support Materials |
|--------------------|-------------|---------------------|---------------|-------------|-------------------|
| **Audience**       | *method~1~* | *method~2~*         | *method~3~*   | *method~4~* | *method~5~*       |
| **Content**        | *method~6~* | *method~7~*         | *method~etc~* |             |                   |
| **Infrastructure** |             |                     |               |             |                   |

*For Learning Objective α*
McCracken and Dobson (2004) provide an example of how learning purpose,
context, and blended learning ingredients lead to particular learning
methods. They propose a process with \"five main design activities\"
(p.491) as a framework for designing blended learning courses. The
process is illustrated with a case study of the redesign of a class at
The University of Alberta called Philosophy 101 (pp. 494 - 495):
- **Identifying learning and teaching principles.** The teaching and
learning goals were described as requiring active participation,
sustained discussion, and, most importantly, inquiry and critical
analysis.
- **Describing organizational contexts** Team teaching with three
professors and up to eleven graduate teaching assistants to engage a
class of 250 students in dialogue around ethical and political
philosophy.
- **Describing discipline-specific factors** The designers are
described as being concerned about stereotypes of philosophy as
\"bearded men professing absolute truths\" (p.495). The desire was
to represent philosophy as an activity, not a set of truths to be
absorbed.
- **Selecting and situating appropriate learning technologies**
Learning activities focused on the process of engagement: presenting
and defending a thesis and responding to opposing views. For
example, a face-to-face lecture would feature contemporary ethical
dilemmas with newspaper headlines or a video clip. Or, the
instructors would stage a debate in which they would assume the role
of a philosopher under study and then argue from the philosopher\'s
point of view. Online threaded discussion supplemented small group
seminar sections.
- **Articulating the complementary interaction between classroom and
online learning activities** In the Philosophy 101 example, it was
noted how the face-to-face engagement was complemented by more
deliberative, asynchronous discourse.
Even this simplified description illustrates the multilayered,
multifaceted nature of blended learning environments. With such a large
canvas, the most important design principle might be to start small.
\"Creating a blended learning strategy is an evolutionary process.\"
(Singh and Reed, 2001). A good place to begin is to supplement an
existing conventional environment with one or two on-line activities, a
resource website or an asynchronous discussion, for example. As
experience and confidence are gained, new tools can be introduced and a
greater effort put into redesigning the program. It is hoped that this
chapter will help teachers reach that goal.
This chapter\'s previous section is about Designing a Lesson. The next section describes Success Tips.
## Additional Resources
See also the developing Wikitext Contemporary Educational Psychology/Chapter 9: Instructional Planning for additional suggestions and ideas about designing and planning lessons.
# Blended Learning in K-12/Success Tips
## Success Tips
Designing a blended learning environment can be a complicated and
involved process. Several experienced authors have offered tips for
success in such an endeavor. One such author, whose name appears in nearly
every web search for blended learning information, is Frank J. Troha. This
section of the chapter on the Design of Blended Learning in a K-12
environment attempts to outline his six tips for success and comment on
their relevance to a K-12 learning environment.
In his article entitled "Ensuring E-learning Success: Six Simple Tips
for Initiative Leaders", Troha offers the following six tips for
success:
1. *From design, to development to deployment, consider everyone your
learning initiative will impact, identify the key players within
each constituency and involve them from the very start.*
2. *Precisely define - and get agreement on - roles and
responsibilities from the get-go.*
3. *Do not bring in e-learning providers until you have a thorough
understanding of your target audience's needs, management's
expectations, the scope of the initiative, likely constraints (e.g.,
limited resources), learning objectives, content to be covered,
evaluation strategy and a host of other basic design matters.*
4. *Carefully select the right provider for the job.*
- *Develop and confirm precise, comprehensive selection criteria
(e.g., past experience addressing similar topics for similar
organizations, fee structure, service standards, references,
etc.) before meeting with any prospective providers.*
- *Use the preliminary design document and selection criteria
to interview prospective providers.*
- *If you are new to e-learning or blended learning, start small.*
5. *From start to finish, keep all key individuals informed and
appropriately involved.*
6. *Strive for self-sufficiency and control.*
Let us examine each tip and discuss the implications for K-12 educators.
**1. From design, to development to deployment, consider everyone your
learning initiative will impact, identify the key players within each
constituency and involve them from the very start.** The focus is on the
involvement of and input from the key players in an organization. In an
educational setting, this may include but is not limited to teachers,
administrators, content chairs, parents, and other staff members. This
success tip may be the most important: without the support of the
people directly impacted, the program's chances of success are
severely diminished. In an educational environment,
specifically K-12, it is crucial that you have the support of all key
players. Without the involvement and participation of everyone affected,
the program is sure to encounter difficulties that could be prevented
with this consideration in mind.
**2. Precisely define - and get agreement on - roles and
responsibilities from the get-go.** Without clearly defined roles and
responsibilities, the key players within an organization may not fulfill
their obligations to the program. For example, if it is the primary
responsibility of the school district curriculum designer to outline the
curricular content of the blended learning initiative, the resulting
content may not fit the exact needs of the teachers who are asked to
implement the blended learning model. Conversely, if teachers are
involved in a serious way in the development of the blended learning
course, they personally will feel ownership of the program and will
subsequently become the promoters and even defenders of the program.
**3. Do not bring in e-learning providers until you have a thorough
understanding of your target audience's needs, management's
expectations, the scope of the initiative, likely constraints (e.g.,
limited resources), learning objectives, content to be covered,
evaluation strategy and a host of other basic design matters.** In an
educational setting, often an outside provider is not used, but the
success tip is no less valid. The point is still that before a blended
learning model is constructed and implemented, it is crucial that an
understanding of all aspects of the program is established and
communicated to all key players. As in most aspects of education at the
K-12 level, communication is key to the success of any endeavor. The
collaboration of different players is what makes the program being
developed more successful, more useful, and ultimately may dictate
whether the course is adopted on a permanent basis.
**4. Carefully select the right provider for the job.** This tip is more
relevant to a business audience where hiring outside vendors is more
common. A school district is more apt to choose an internal employee both
to help with the design of a blended learning model and to serve as a
pilot to test the effectiveness of the resulting program. The point is
still an important one, however, and can be rephrased in an
educational setting to "Carefully select the right instructor for the
job." It is important to choose someone who has an interest in utilizing
the inherent benefits of a blended learning course, as well as someone
who has the technical expertise to effectively help in the design and
implementation of such a course. In schools where there is a dedicated
Technology professional, this person may be the obvious choice for
playing a key role in both the design and implementation of the blended
learning course. This should not be a limitation, however, for there are
many capable people within each curricular department of a school who would
make competent contributors to the design and implementation of
a blended learning program.
**5. From start to finish, keep all key individuals informed and
appropriately involved.** Not only is it important for the key players
to feel like they are a part of the process in order to gain support
from them, it also reduces the amount of time that is needed to answer
questions and provide training for said individuals. As was mentioned
earlier, a sense of ownership by key players needs to be developed and
nourished throughout the process in order to facilitate the positive
development and future success of the blended learning model.
**6. Strive for self-sufficiency and control.** This tip is probably the
most applicable to educators, as they are likely to embark on such an
endeavor without any outside help from professional providers. Teachers
have the advantage of experience in curriculum design and in the
implementation of a course based on their specific curriculum. They also
have an idea of what the end result should look like, and the experience
needed to successfully design the blended learning curriculum for the
specific needs of their students. Teachers have been educated about
different learning styles. This knowledge can help the blended learning
curriculum to best fit the needs of a diverse audience of students. They
know from experience what is fair and reasonable to expect from their
students. Teachers also know about their students' socioeconomic
backgrounds, which may play a key role in the design of the
instructional blended learning model. For example, in some communities
technology limitations may have an effect on choices of content
delivery.
These tips for success are a good reference when designing a blended
learning course. Of course, they should not be relied upon in isolation.
The best advice for a school instituting blended learning is simple:
look for successful examples. A growing number of schools and other
institutions have realized the benefits blended learning adds to
instruction. Time needn\'t be wasted trying to "reinvent the wheel,"
when so many excellent programs already exist as models for others to
follow.
Previous sections in this chapter address Designing a Lesson and Guiding Principles.
# Blended Learning in K-12/Application of Blended Learning in K-12
|previous=Success Tips
|next=Blended_Learning_In_Grades_K-2}}
```
While e-learning with no face-to-face contact may be a practical method of instructional delivery for college students, it is often not suitable for younger students. When implemented correctly, blended learning can provide the best of both traditional classroom learning and the use of web-based resources more appropriate for students in grades K-12. It should be noted, however, that in some cases fully online learning is both available and appropriate for high school students. In a traditional classroom, blended learning provides elementary and high school students with direct contact with a teacher, while also utilizing resources that extend the learning experience beyond what is available in the classroom. These resources include live synchronous experiences (video-conferencing, instant messaging, chat rooms, virtual classrooms), asynchronous collaboration (e-mail, threaded discussions, online bulletin boards, listservs), and self-paced learning experiences (web learning modules, online resource links, simulations, and online assessment).

School districts that encourage blended learning may realize benefits that include cost savings and the opportunity to provide unique learning experiences to their students. The following pages describe blended learning techniques which vary in their application based on the grade level for which they are utilized.
# Blended Learning in K-12/Blended Learning In Grades K-2
|previous=Application of Blended Learning in K-12
|next=Blended Learning in Grades 3-6}}
```
Blended learning within early childhood education is an interesting
concept; *how can a student who lacks the ability to read and write be
part of a virtual community?*
While many of the children at the primary level lack the ability to
read, incorporating technology-enhanced learning is still a reality.
Teachers can enhance an existing curriculum, improve communication with
the school community and devise forums which reinforce and enrich
early childhood education. Even though adaptations must occur in order
for e-learning to be successful with young children, primary students
\"should not be excluded from the virtual learning world simply because
of their age and developmental levels\" (Scott, 2003).
When the topic of blended learning arises, people often think of students
meeting within a classroom setting and then continuing the learning
experience online in the comfort of their home. However, within primary
classrooms, blended learning is more comparable to technology
integration, serving the class environment as a teaching aid. Since
many primary classrooms now have a technology center, which can include
anywhere from one to half a dozen computers, e-learning is becoming a
reality. While this is not the true definition of blended learning, this
type of face-to-face instruction followed by independent activities
based on individual student needs provides the building blocks for
higher-level blended learning.
**Blended Education: Application Examples**
*Curriculum*
An overview of the primary classroom sees children learning to read,
beginning to add, and exploring numerous topics for the first time. Most
classrooms are brimming with children, lacking an aide and overloaded
with information. By investigating each subject within a primary
classroom, teachers can envision how blended learning can be a real part
of early childhood education.
Incorporation of technology into the primary classroom can be as
simple as bringing the students to a website which better illustrates a
story explored in class. For example, if a class reads \"Brown Bear,
Brown Bear, What Do You See?\" by Bill Martin, Jr., a primary teacher may
set a website such as Animal Vocabulary on a computer in the technology
center to further extend the story.
Language arts within a primary classroom can be enhanced with a learning
community such as The Monster Exchange. This website has the children
collaborating with children all over the United States and beyond to
work on descriptive writing. Children draw a picture of a monster and
write a description of the creature in class; the teacher then inputs the
picture and description into the website. Classrooms connect with
another class, read one another\'s descriptions, and then try to
recreate the original monster. Pictures are set up side by side in the
Monster Gallery for comparison. Children can access the monster website
at home and work collaboratively with their families.
A primary teacher knows all too well that purchasing enough math
manipulatives for the whole class can be quite expensive and often not a
reality for every school. Technology and blended learning offer a
solution. If an educator works with his/her class on a lesson about
patterns, he/she can direct students to practice the lesson on Virtual
Library of Math Manipulatives. To begin this lesson the teacher would
explore patterns in numerous ways, using varying sets of manipulatives to
illustrate patterns; however, when it comes to having a set of coins or
buttons for each student, this might not be feasible. The solution: if a
teacher uses this website within a lab setting in which all the students
are using a computer, children can play with patterns independently,
make mistakes, ask questions and, best of all, there are no buttons,
coins, or colored bears to clean up.
History is often neglected because the basic skills take top
priority in the primary classroom. Using e-learning is a great way to
further explore topics such as history. One very interactive website
explores George Washington. During the month of February a 3rd grade
teacher, for example, can set this website up as a favorite within her
computer center. Each child can be required to view the site and make a
comment in the notebook next to the computer. Those comments can later
be shared as a class.
Since science resources are so abundant online, we can look at blended
learning from a different angle. A second grade class designed this
Space Website. After learning about space in class, the children worked
within groups to develop a virtual learning area made for children in
primary grades. The students and teacher took what they knew, blended it
into technology, and now other students can benefit.
*Communication*
Blended communication could be the most successful form for the new
generation of parents. Quite often, information relayed to a primary
student does not make it to the ears of a parent. Besides
traditional classroom visits, parent/teacher conferences and telephone
calls, many teachers are realizing that reaching parents
through emails, websites and discussion boards is more fruitful.
Designing an online community where teachers can
post and explain information about their teaching methods can help
clarify classroom procedures. In the same regard, parents can ask
questions, review announcements, and become an active part of the
classroom through a virtual environment. A search within Yahoo Groups
reveals numerous groups that bring parents and educators together.
"Some schools are exploring the use of video conferencing and
\'streamed\' (stored for viewing at home) videos to promote parent
understanding and involvement in student learning\" (Starr, 2005).
This blended communication is even opening up a place for parent input
into class learning. Teachers can design questions through online
questionnaires from places like SurveyKey. Educators can ask parents
about issues within the class, specific needs and concerns. As parents
respond, a teacher can make adjustments and improvements. Once again this
is extremely important with younger students, as they often have a
difficult time expressing the experiences they have in class.
*Reinforcement & Enrichment*
Teachers at every level grapple with the difficulty of addressing the
needs of each child within a classroom; however, this challenge is
especially prevalent within the early childhood classroom, as students are
exploring the building blocks of education. Blended learning can help
meet this challenge.
Studies have been surfacing for years suggesting that foreign language
instruction should begin at the elementary level instead of being postponed
until high school; however, due to budgetary concerns, foreign language
classes often seem like a frill (Walker, 2004). By teaching another language
to young children, we give them the greatest chance to fully absorb a
second language. If an elementary school does not offer foreign
language classes, teachers and parents can still expose primary students
to another language through technology. Options range from simple websites
which vocalize the French alphabet to websites which allow students to
progress through activities to learn Spanish.
Within the primary grades a child often needs extra practice. The web
has the amazing ability to give kids extra help in a way different from
group classroom instruction, maybe in a form in which a child learns
better. For example, if a teacher has introduced new letter sounds and
she/he notices a student is struggling, the student can either use the
computer center to practice or a Phonics website address can be sent
home for parents to use as practice.
**Adaptations for Blended Learning in Early Childhood Education**
One major concern for young children on the Internet is safety. While
students within upper grades understand the seriousness of broadcasting
their personal information, younger students are often ignorant of that
fact. In order to protect the identity of students, it is recommended
that students work as a group. Since group work is quite prevalent
within a primary classroom, it is very realistic that young children
work as a group within a virtual learning environment, collaborating on
answers to contribute. Students can work as a face-to-face group in the
classroom, develop an answer and post the response within their virtual
environment. Working as a group also alleviates the need to post
responses using full names, pictures and other personal information,
instead the children post as, for example, *The Green Group*.
The design of a virtual community needs to be adapted for the younger
set. Since literacy development varies greatly within grades K-2,
sites should use pictures and common shapes to navigate through the
information. As streaming video and digital voice technologies improve
and become more common, site participation becomes more user-friendly
for those with limited reading skills. Responses within an
online learning environment need to be configured with developmental
needs in mind. Answers may need to be multiple choice or give the
contributor the ability to \"draw\" an answer.
Sites which are made specifically for online classes such as Moodle or
other course management systems are not appropriate for the early
childhood environment because \"younger students may not have the study
skills, reading abilities and self-discipline to fare well without a
class to go to\" (Russo, 2001). That does not mean they need to be
excluded from the virtual community; we just need to think of these
years as their preparation for becoming part of a **Brave New World** of
teaching.
# Blended Learning in K-12/Blended Learning in Grades 3-6
|previous=Blended Learning In Grades K-2
|next=Blended_Learning_In_Grades_7-8}}
```
**Starting Out With Blended Learning**
As students become more confident of their technology skills in grades
3-6, and access to technology at home increases, the opportunities for
blended learning experiences broaden. Web-based resources can provide
more in-depth information on academic topics, support slow learners, and
enrich high achievers. Communication between home and school can be
vastly improved by utilizing the web to enhance the learning experience.
One simple way to begin using technology with students in 3rd through
6th grade is through asynchronous communication such as
e-mail. There are numerous ways in which e-mail can be helpful in the
classroom setting. Using e-mail as part of a blended learning experience
can enhance a face-to-face discussion and allow students to further
explore their learning. Many students already have a home e-mail account
which they use to communicate with their friends or family, and by the
age of 10, students are mature enough to learn how to use email.
Teachers can make themselves available to students and parents through
email, to answer questions on specific topics and to discuss classroom
topics. Students can stay in touch with the teacher if they are absent.
Another popular use of e-mail is keypals, in which students are matched
up with students from other schools and participate in an exchange of
information and ideas. Keypals help the students see themselves as part
of worldwide learning community, and learn about other cultures and ways
of life. Another use of email is to adopt a grandparent. More and more
senior citizens are becoming technology savvy, and would love to
exchange information with students. There are some safety measures that
teachers need to set forth if adopting a grandparent. Full student
names, addresses, and phone numbers should not be given out at any time.
This is to ensure the safety of the students in today\'s world.
Instant Message (IM) is a synchronous form of communication that can be
started in the middle grades. IM allows instantaneous feedback from
teachers or students. Students can ask questions of other students or
the teacher about an assignment or participate in a discussion with a
teacher or classmate about concepts or topics being covered in class. IM
does have some drawbacks. Students can send inappropriate messages or
pictures to other students or teachers, so it is important that students
are instructed on acceptable use. Restricted accounts, which many
parents use for their children, might block the use of instant
messaging.
**Improving Home-School Communication**
Getting parents involved in their child\'s education is key to academic
success. Teachers can publish web pages linked to the school website to
provide a multitude of information for parents. Teachers can provide a
weekly agenda of what\'s going on in class, details of
homework assignments, permission slips for field trips, and much more,
making this information available to parents trying to stay on top of their
child\'s education. Many teachers use blogs for this purpose, which might be a
simpler way to get information online immediately. Teachers can also
link to websites that enhance or expand on topics covered in class.
There is no doubt that technology can improve parent-teacher
communication. Through the use of Edline or classroom websites, parents
can stay more involved in what their student is doing in the class and
also how they are doing in the class. If a parent can quickly view what
their child has to do or see an area where they need assistance, it can
make for easy communication with the teacher about what needs to be
done. For success to be evident, there must be good communication
between the parents and the teachers.
**Curriculum Connections**
Many online activities are available for 3rd through 6th grade students
that provide extra practice on classroom topics, or expand and enrich
learning. Teachers can link to these sites so they are available to
students outside of school. Across the curriculum the web offers
resources that engage students in the learning process, and will
actually make them want to spend additional time outside of school on
learning.
It is important when planning a blended learning lesson within the
primary grades to focus on a unit of study, then intertwine it with
technology. The educators at San Diego State University have designed a
tool to aid teachers in their preparation.
For teachers looking to integrate science experimentation into their
middle grades\' curriculum, a wonderful interactive site is Zoom
Kitchen
Chemistry.
Here, students can conduct virtual experiments to learn about real-life
chemical reactions, or find out about real science experiments they can
do at home with items found in their own kitchens. This site is
wonderful for extending classroom learning using technology. If the
class is studying space and the solar system, an excellent resource for
young astronomers is Star
Child. Here students
can find information about space topics, utilize simulations and a
glossary of space terms.
One method of blended learning in math is to have students practice
their math facts online. This provides the opportunity for students to
spend extra time practicing if needed. At Math Magician,
students can have fun working interactively on math facts in all
operations; two levels are available, so more advanced students can
progress at their own pace through more challenging material. If the
teacher seeks ways of using manipulatives to teach math, a wonderful
site that utilizes Java applets so students can have a hands-on
experience is The National Library of Virtual Manipulatives.
A great source for students to work on their reading skills is The
Reading Matrix. There
are numerous reading activities ranging from vocabulary and comprehension
to proofreading and short stories. Many of these sites provide an
online quiz for the students to take. Teachers can find good sites on
this page by looking at the ratings each has received.
Students can have a blast at National Geographic for
Kids. Students will spend
hours going through the website, which contains quizzes, games,
cartoons, and excellent information. This site is great for any social
studies buff, or anyone that wants to have fun while doing research on
the web. This site will have students talking about social studies for
the entire quarter.
Everyone wants to create their own music that they can listen to. The
website Creating Music (which requires
Java) allows students to create their own musical sketch pad and then
listen to what they have designed. Students can learn about beat, tempo,
and rhythm while enjoying being the composer of music. This is a great
site for elementary students to learn about music and to get them
interested without having to pick up a single instrument.
**Virtual Field Trips** Field trips are a large part of any classroom.
Quite often a teacher would love to take her students places for which a
bus trip is not an option. Technology offers the solution with virtual
field trips. Students can look at museum artifacts, visit an aquarium,
or admire beautiful art while sitting with their class. Sharing a field
trip virtually is also a great way to reflect on a trip and share
experiences with future classes. For example, each year students from
Bennet School in North Carolina design a website about their trip to the
State Capital; each year the website gets remodeled, but old versions are
kept online to serve as a scrapbook.
**Video Conferencing**
How about a field trip without even leaving the room? With the creation
of video conferencing, this is possible for all students and teachers,
further enhancing student learning and enthusiasm. Students love
to take field trips and they love to go on the computer; now teachers
can have the best of both worlds. Here are some sites that offer a
virtual field trip.
Science
Center is a
great site for science educators who want to have their students learn
firsthand about the human body, space, dinosaurs, and eyes. For this
field trip a fee of \$150.00 is required for a 45-minute tour. This
might not be feasible for schools that are on a very small budget. They
do provide a 25-minute project for about \$100.00.
Ever want to receive video feeds from underwater? The Aquatic Research
Interactive Site does just that.
Science teachers can have students watch video streams from underwater
for numerous topics. This site has been designed for teachers and
students to better understand concepts below the Earth\'s surface. Not a
science teacher? Math, history, and physics teachers can benefit
from this site as well. There is one major drawback to this site: the
fee. A whopping \$195.00 fee for the use of the video clips is
required.
Teacher\'s Pet? That might be what students will be talking about after
a trip to The Bronx Zoo. This two-way
interactive site is designed for elementary students to learn
more about an animal\'s behavior. How about having a lion in your class?
This site allows a class to have one, and you don\'t have to worry about
students with allergies or a student being attacked. This site is sure
to have your students talking for a long while. This field trip is
\$125.00 for a maximum of 35 students.
**Conclusion:** Can blended learning work in grades 3-6? That depends on
the teacher, tech support, students, and administration. Lisa Abate
understands that a blended learning classroom will require more work.
She mentioned that she spent a lot of her time \"troubleshooting student
problems (such as lost passwords)\" (Abate, 2004). Essentially, doesn\'t
education come down to \"are students learning?\" If students log on to
a website long after they have finished their assignment to further
enrich their studies, have teachers accomplished their goals? Is the time
worth the satisfaction a student gets by learning more than he has to? In
Lisa\'s beta test of her math classroom she found that students were
spending more time than needed on specific activities (Abate, 2004).
This is a teacher\'s dream come true: students spending more time than
is required on assignments.
# Blended Learning in K-12/Blended Learning In Grades 7-8
|previous=Blended Learning in Grades 3-6
|next=Blended Learning in Grades 9-12}}
```
## Blended Learning Grades 7--8
As students mature and can handle learning without constant teacher
attention, online applications may become more effective for teaching
some curriculum. After teachers and students feel comfortable with
e-mail and webpage design, they can dig further into the realm of
blended learning by accessing some excellent websites. All of these
sites are designed to help the learner better understand a topic. These
are just a few good examples of websites; there are numerous other sites
available.
**Triple A Math** This site is great
for K-8 math teachers because of its content. Students can read the
explanation of each measurement, play some challenge games, and try some
interactive practice exercises. Students will have endless hours of fun checking
out this site.
**Seattle Art
Museum**
(Science, Social Studies) This site is for the \"explorer\" learner.
Students can learn about navigation techniques used in the Age of
Exploration. The site includes a video clip of how to use an octant.
This was very cool to see in addition to learning about the navigation
used today.
**Hands On the Land** Imagine
collecting water samples, monitoring the ground ozone levels, and more
on this environmental website. The site provides interaction with other
schools, students, and the forest preserve as students are engaged while
learning about the environment. There are even lesson plans for teachers
to further enhance the students\' learning.
**BioPoint** This
is an online site for teacher-created webquests. They are listed by
grade level and have everything you need to deliver a unit.
**Barking Spiders Poetry**
Barking Spiders is a site devoted to poems for children. Students can
design their own poems online by filling in the blanks. There are mazes
that can be completed and poems to be read.
**Smithsonian** The
Smithsonian is known world-wide for its magnificent collections. Since
not every class can take a trip to these free museums, the Smithsonian
has made it easy for every class to access the collections and
activities to go with them online. Teachers can search for lessons by
state standard quickly and easily. Teachers can also check out resources
that can be sent to their classrooms.
**Social Science** Some teachers are more comfortable with enhancing
their class through projects. A sample of one would be a project
developed by Kate Purl at Urbana Middle School. The year was ended with
a seventh grade project on Africa. Each student had to research an area
of Africa, learning specifics about the area such as flora and fauna.
The information they found was then added to a website. Once all the
areas of Africa were added, the adventure began. As teams, the students
had to traverse the African continent. The information that the students
provided was combined into a webquest that was a cross between \"Where
in the World is Carmen San Diego?\" and \"The Oregon Trail.\" Decisions
made while traveling the trail could lead to success or failure as an
African explorer. The site is still available at: Africa: Choose Your
Own Adventure.
Other successful online adventures for Middle School students might be
Virtual Ancient
Civilizations, which is
a work in progress.
**Numerous Subjects** Quiz Hub.
What can\'t you do at this site? The site is loaded with news quizzes,
online maps, chess games, concentration games and much more. Your
students will never want to leave the site. This will have them engaged
for hours upon hours. There is even an art area where you can design
whatever you want. This was my favorite link!!
**Benefits for Students** Research shows that students who are involved
in online learning during the middle school years are more likely to
keep their academic grades higher than those who are not exposed to
online learning (Belanger, 2005). Additionally, the attendance rate of
students using computers is higher, as is their ability to
do well in group situations and within project-based instruction.
Blended learning is also research-based in that it pulls from research
done by Piaget, Vygotsky, Bloom, Keller and Gery. Using online education
for middle school students is a viable way of enhancing the curriculum
by providing live events such as online homework help. This could come
from the teacher, or it could be from other students in a cooperative
learning environment. Blended learning also gives the students a chance
to move at their own pace. What one student may be able to learn quickly
another may need more time to digest and understand. In blended
learning, students move at their own pace and have a
teacher available when questions arise. There is also a
collaborative element to blended learning. If a student does not
understand part of an assignment, there are other students available to
ask questions. Students can use a chat room, IM or e-mail to work
together to complete a project. Along with the curriculum, testing can
also take place online. This would allow students to take assessments
when they feel they are ready. It would also allow for quick feedback
from those assessments so that students know where they stand with
respect to their grade. Lastly, when students use the web for learning
there is a wealth of materials available to them at the click of a
mouse. Dictionaries, encyclopedias and research are just some of the
information that students can access during their learning experience.
These five ingredients of blended learning are important for
students to have the best possible educational experience. Blended learning
allows them to remain anonymous so that their mistakes are not broadcast
throughout the classroom, giving them self-confidence. It is a way to
allow students to take control of their own learning experience.
**The Down Side** As with every advance in technology, blended learning
can have its faults also. While online courses have been available for
several years to the post secondary student, use of online technology is
generally thought of as an enhancement for the secondary schools.
Schools are using computers more than ever before, with an increase from
60% in 1993 to 84% in 2001, and home use of computers growing during the
same time from 25% of students to 66% (NCES, p. 1).
computers more if computers were available in each classroom. Many
schools don\'t have that luxury and, instead, must move students to a
central computer lab to make use of the technology available. This
creates more work for teachers and takes away from instructional time.
Most will also admit that it is the teacher, and her comfort with
technology, that affects how well that technology is presented in the
classroom. In his book, "Oversold and Underused," Larry Cuban states
that computers are not successful because teachers who use computers for
instruction do so infrequently and unimaginatively (Harvard University
Press, 2001).
**Taking Technology Further** Some students, for various reasons, need
to work at their own pace. While distance learning got its start at the
post-secondary level, it is slowly gaining momentum at the middle and
high school level. Students who are gifted, challenged, or who have health
issues and do not do well in a traditional learning environment now
have the opportunity to complete their education online.
are now in operation that offer aid to these students. Some worth
looking at are:
- Advanced Academics
- Colorado Exel High School
- High School
- James Madison High School Online
**Teacher Controlled Blended Learning** As with any level of education,
there are some teachers who prefer to have control over what their
students are doing online. Instead of accessing one of the above
websites or a WebQuest, these teachers choose to develop their own
curriculum for their students. This curriculum needs to be well thought
out before anything is ever put online. New South Wales Department of
Education and Training has some guiding questions that teachers should
ask themselves before they begin to design a web-enhanced course. They
first suggest that you know WHY you are trying to place information
online for your students. Once you are sure that what you are looking
for is not already online, then it is time to understand the goals that
you have for your students regarding the online content: Do you want them
to learn to work independently? Or do you want to free up your time to
work with those students who need the extra help? The other suggestions
include teaching your students slowly, showing them step by step what
you expect from them. The ultimate suggestion, however, was that a
teacher interested in providing web-enhanced learning for
their class should first have experienced online learning
themselves (NSWDE, 2005).
# Blended Learning in K-12/Blended Learning in Grades 9-12
|previous=Blended_Learning_In_Grades_7-8
|next=References}}
```
Today\'s high school student often has the maturity and technical
expertise necessary to participate in e-learning experiences. However,
students of this age frequently require the support of a teacher in a
classroom. Blended learning combines the best of both worlds for high
school students: the fluidity of using Internet resources and the
reassurance of face-to-face experiences. It extends learning beyond the
classroom, and expands the breadth of courses offerings, while providing
the personal support and encouragement from a teacher still necessary
for many students. The following paragraphs describe the effectiveness
of blended learning, how to successfully achieve blended learning in a
high school environment, and provide specific examples for teachers.
## Research on Effectiveness of Blended Learning
High school students are often motivated by online learning, and often
have the maturity and self-discipline to work independently and succeed
in online coursework. \"Evidence overwhelmingly shows that ALN
\[Asynchronous Learning Networks\] are at least as effective as
classroom learning\" (Hilz et al., 2004). Much of the evidence of online
learning success, however, relates to college and graduate level
students who demonstrate a better completion rate than younger learners.
Some high schools have found that hybrid courses, or blended
learning experiences that provide more face-to-face support to students,
have better completion rates. One example is the Mannheim Township
Virtual High School in Pennsylvania. While the program received many
accolades for its success with individual students participating in
online learning courses, the dropout rate hovered around 25%. The program was
uniquely revamped to address this issue. The solution was a hybrid
online and traditional course model that has resulted in a 99%
completion rate (Oblender, 2002).
The weaknesses of online courses are addressed by hybrid, or blended
learning courses. Blended learning provides more structured time for
student work while still allowing students the opportunity to proceed at
their own pace. Teachers are available to monitor progress and provide
encouragement and support to students who may lag behind. Blended
learning courses provide physical resources that are not available in
courses that are presented completely online, including language,
technology and science labs (Oblender, 2002).
Blended learning also provides many benefits not available in
traditional classrooms. The need for textbooks is diminished. Material
presented is timely and relevant, and student progress is self-paced.
The students\' learning environment is extended to organizations, people
and facilities not available in the classroom. Students who participate
in blended learning gain advanced technological competencies (Oblender,
2002).
Distinct advantages exist for at-risk students when exposed to blended
learning, particularly synchronous activities. Joining a \"cyber-study
group\" results in higher performance for these students compared to
students who study alone. \"Peer-to-peer interactions needed for
collaboration promote a collective sense of responsibility . . .
students who have low self-efficacy or an external locus of control
receive feedback and encouragement from their study partners.\" Also the
presence of the instructor is more frequent, and results in more
meaningful dialogue between teachers and students (Newlin, *et al.*,
2002).
## Getting Started with Blended Learning
For teachers just getting started in blended learning, the simplest
approach may be to choose websites that expand on what is being taught
in class, and/or provide extra practice for specific skills. In this
scenario, the greatest proportion of the learning experience involves
face-to-face learning, with a less significant web component. High
school teachers who are looking to add a web component should start
simple. The purpose is to expand on and clarify topics covered in class,
and provide opportunities for students to extend the learning experience
beyond the confines of the classroom. Some examples:
Foreign language students can utilize My Language
Exchange. At this website, a student
can locate a native speaker of the language he is learning who, in turn,
is learning his language. Together, the two students can participate
in practice sessions using lesson plans, text and voice chat rooms, a
dictionary, a private notepad, etc.
Math students can utilize online simulations of math concepts at The
National Library of Virtual
Manipulatives. Simulations
are available in Numbers and Operations, Algebra, Geometry, Measurement,
Data Analysis, and Probability. Using these manipulatives can provide a
better understanding of math concepts, as well as practice in various
areas.
English students can extend their learning of the writing process beyond
the walls of the classroom by using Principles of
Composition.
This site contains a full high school composition course with
interactive lessons and practice.
Students of American history can expand and develop their knowledge and
understanding of major events in the history of our country by using
The American Memory Collection
published and maintained by the Library of Congress. Another good source
is the set of interactive sites provided by National Geographic.
The Underground Railroad trip is especially good for upper elementary
and junior high students. It forces them to think and make decisions while
explaining what the Underground Railroad was.
Enriching science students through technology is made possible by the
wide array of interactive resources available on the web. Students can
conduct an in-depth and interactive study of Biology at Interactive
Biology. Chemistry
students can expand on textbook material with information and
animations on periodic table elements by using The Visual Elements
Periodic
Table.
Students of physics can enrich the learning experience by spending time
at The Physics Classroom.
## Moving Forward: Incorporating Synchronous Web Components Into the High School Class
As teachers and students gain confidence in the incorporation of blended
learning, they can discover numerous web components available to enhance
the learning experience for students. Live synchronous experiences are
good examples. These include video conferencing, instant messaging, chat
rooms, and virtual classroom modules.
Video conferencing serves a variety of purposes. Most video conferencing
options are not free, but available for an hourly fee, or through a
subscription service. Students can benefit from the expertise of
specialists in various areas and can participate in virtual field trips.
Students living in remote areas can benefit from resources and people
only available in more populated areas. The Albany Institute of History
and Art
offers numerous \"virtual field trips\" for high school students.
Students are active participants as they join in real time with the
Institute\'s historians, examining artifacts and collaborating with
experts. History students can interact with holocaust survivors via
video-conferencing through The Holocaust Memorial and Educational
Center in New
York. These are just a few of numerous video conferencing options
available across the curriculum.
Video conferencing has purposeful academic applications in language
courses, but some controversy exists over whether these needs can be
addressed via asynchronous video. CUSeeMe was the grandfather of two-way
video conferencing. Successors include NetMeeting, PalTalk and iVisit.
Live video conferencing has its drawbacks, however. Many schools lack
the bandwidth necessary for effective and problem-free two-way
conferencing (Godwin-Jones, 2003).
Instant messaging, discussion boards and chat rooms provide a means for
remediation and consultation for students outside the school day.
\"Communication tools like discussion boards and chat rooms can be
effective in inter-team collaboration as well as in faculty-student
communication\" (Eastman, *et al.*, 2002). Students can become motivated
in directing their own learning. Through these synchronous activities,
students become empowered, can develop better communications skills, and
develop their ability to work cooperatively. Students who are more timid
in a face-to-face environment often gain confidence in online
discussions. Teachers frequently become more accessible to students
through these types of synchronous activities (Eastman, *et al.*, 2002).
Traditional text chats can now be enhanced with voice and/or video.
\"Apple recently announced multimedia enhancements of its iChat
application, along with introducing the new iSight camera\"
(Godwin-Jones).
## Use of Asynchronous Web Components in Secondary Education
Blogging, which began as an online journal several years ago, has
escalated to \"several hundred thousand diarists . . . actively posting
blogs about almost every conceivable topic.\" Blogs provide an instant,
online writing space with the potential for an audience of thousands;
free and instant publishing. Teachers will find that the \"presence of
an audience can increase engagement and a depth of writing\" (Bull,
2003). \"Blogs also help students exchange ideas much like a group of
students waxing poetic at a . . . coffeehouse. Blog sites often
prominently display the e-mail addresses of \[their\] creators, letting
readers instantly provide feedback to the site\" (Toto, 2004). Blogging
has numerous instructional applications in high school. Literary
activities using blogs include character journals, character roundtable,
think-aloud postings, and literature circle group responses. Revision
and grammar activities include nutshelling (extracting a line from a
paragraph that holds the most meaning), devil\'s advocate writing
(online debate), and exploding sentences (slowing down a student from an
earlier post and adding rich, descriptive detail). At Hunterdon Central
Regional High School in Flemington, New Jersey, students use a blog to
discuss \"The Secret Life of Bees\" by Sue Monk Kidd in an American
Literature Class (Bull, 2003).
Blogs can also help improve student writing. While students often begin
their blog experience with sloppy grammar and spelling, the presence of
an audience generally changes that. Since students are often the most
critical audience, the blog writer begins to strive to improve writing
to avoid criticism. Also blogging \"forces students to become more savvy
about the world around them.\" The need to feed the interest of the
audience inspires students to be clever and interesting (Toto, 2004).
Blogging is a tool that inspires collaboration, and encourages students
to extend learning well beyond the traditional school day. Appropriate
use of blogs \"can empower students to become more analytical and
critical; through actively responding to Internet materials, students
can define their positions in the context of others\' writings as well
as outline their own perspectives on particular issues\" (Oravec, 2002).
Course management systems provide a mode of presentation and
organization for blended learning. Moodle is a good example. Moodle was
designed to \"provide a set of tools that support an inquiry- and
discovery-based approach to online learning\" (Brandl, 2005). Because it
is available free of charge, it also offers financial benefits for school
districts compared with commercial course management systems. Teachers
using Moodle are able to provide a virtual classroom around the clock.
\"Moodle has great potential for supporting conventional classroom
instruction, for example, to do additional work outside of class, to
become the delivery system for blended course formats\" (Brandl, 2005). It
is based on socio-constructivist theory, promoting cooperation among and
between students and teachers. It provides for both synchronous and
asynchronous discussion through a chat option and threaded discussion.
\"At the core of the concept of an asynchronous learning network is the
student as an active---and socially interactive---learner\" (Hilz et
al., 2004).
## Pulling It All Together: The Teacher\'s Perspective
The goal of blended learning is to use technology as a tool for learning
and to promote a discovery-based approach to online learning. It is also
intended to help students become \"anytime, anywhere\" lifelong
learners. Keeping abreast of the technology is a challenge for teachers
as well. The teacher needs to participate in ongoing professional
development, read the latest research, and share ideas with other
teachers. Teachers must actively seek out and utilize individuals that
can act as mentors and technical support providers in their quest for
effective blended learning techniques. Teachers must be lifelong
learners themselves in order to promote lifelong learning among their
students -- they must lead by example.
## The Role of Blended Learning in the Future of High School Education
While teachers grapple with the challenges of staying up-to-date on new
technologies, students being educated in this technological era are
confident in their use of technology. New teachers enter the workplace
well-equipped to face the technological challenges that await. As access
to technological resources improves for all students and the digital
divide narrows, high schools find themselves better able to implement
blended learning. In the near future, high school learning will no
longer be limited to the length of the school day or the confines of the
school building. Resources will be available for all students to learn
anywhere, anytime.
Across the country, high schools are already making this vision a
reality. One example is the Urban School in San Francisco, which has
incorporated a 1:1 student laptop program. Foreign language courses have
been improved with the use of voice files to improve listening and
speaking skills. History students contribute meaningfully beyond the
confines of the classroom and school by providing web-based election
materials to the local community and
producing an award-winning website that contains oral histories of area
Holocaust survivors. Math and science simulations are available for
extra practice and enrichment to students when and where they need it,
and language arts students participate in online literature circles,
generating "more thoughtful and meaningful responses" than would
typically be expected in a traditional classroom discussion (Levin).
## Summary
The benefits for students abound in blended learning. Teachers have the
opportunity to individualize instruction at all levels and for all
students. Extra help is available to students who need it, and
enrichment opportunities can be provided for students who move at a
faster pace than the rest of the class. Teacher availability extends
beyond the confines of the school day and the school building. Learning
opportunities are expanded for all students. Students become actively
engaged, and are well-prepared for the technological workplace that will
be theirs. Collaboration is encouraged, and students have the
opportunity to collaborate with a diverse and worldwide student body.
Curriculum connections can be made which encourage higher level thinking
(Morehead et al., 2004). Great potential exists for both students and
teachers, and the future of blended learning is significant. One of the
questions that needs to be addressed is the availability of
the technology to allow for blended learning. It would seem that
wealthier districts would have a significant advantage in the resources
available to them. Many poorer districts have poor student to computer
ratios or do not have the ability to make computers available to
classroom students on a regular basis.
# FHSST Physics/Info
**Free High School Science Texts (FHSST)** is an initiative to develop
and distribute free science textbooks to grade 11 - 12 learners in South
Africa.
The primary objectives are:
- To provide a *free* resource that can be used alone or in
conjunction with other education initiatives in South Africa, to all
learners and teachers
- To provide a quality, accurate and interesting text that adheres to
the South African school curriculum and the outcomes-based education
system
- To make all developed content available internationally to support
Education on the largest possible scale
- To provide a text that is easy to read and understand even for
second-language English speakers
- To make a difference in South Africa through helping to educate
young South Africans
FHSST Website - FHSST Physics on
Wikibooks
Other FHSST books on Wikibooks:
- FHSST Biology
- FHSST Computer Literacy
- FHSST Chemistry
# FHSST Physics/Introduction
## Introduction
Physics is the study of the laws which govern space, structure and time.
In a sense we are more qualified to do physics than any other science.
From the day we are born we study the things around us in an effort to
understand how they work and relate to each other. For example, learning
how to catch or throw a ball is a physics undertaking.
In the field of study we refer to as physics we just try to make the
things everyone has been studying more clear. We attempt to describe
them through simple rules and mathematics. Mathematics is merely the
language we use. The best approach to physics is to relate everything
you learn to things you have already noticed in your everyday life.
Sometimes when you look at things closely, you discover things you had
initially overlooked.
It is the continued scrutiny of everything we know about the world
around us that leads people to the lifelong study of physics. You can
start with asking a simple question like \"Why is the sky blue?\", which
could lead you to electromagnetic
waves,
which in turn could lead you to wave particle duality and energy levels
of atoms. Before long you are studying quantum mechanics or the
structure of the universe.
In the sections that follow, we describe how we will communicate about
the things we are dealing with. This is our language.
Once this is done we can begin the adventure of looking more closely at
the world we live in.
# FHSST Physics/Waves
## Waves and Wavelike Motion
Waves occur frequently in nature. The most obvious examples are waves in
water on a dam, in the ocean, or in a bucket, but sound waves and
electromagnetic waves are other, less visible examples. We are most
interested in the properties that waves have. All waves have the same
basic properties, so by studying waves in water we can transfer our
knowledge and predict how other types of waves will behave.
Waves are associated with energy. As the waves move, they carry energy
from one point to another in space. This is true for water waves as well.
You can see the wave energy working while a ship drifts along the waves
in a rough sea. The most spectacular example is the enormous amount of
energy we receive from the sun in the form of light and heat, which are
transmitted as electromagnetic waves - not even requiring a medium to
propagate.
## Simple Harmonic Motion
Simple Harmonic motion is a wavelike motion. It is considered wavelike
because the graph of time vs. displacement from the equilibrium position
is a sine curve.
An example of simple harmonic motion is a mass oscillating on a spring.
It will be hard to understand, this early in the course, the forces that
cause the motion to be simple harmonic, but it is still possible to look
at a mass oscillating on a spring and understand that it is indeed simple
harmonic. When a mass is oscillating on a spring, the further the spring
stretches, the slower the mass will be moving. Then the mass reaches a
point where the spring won\'t stretch any further, so it quits moving and
then it reverses direction. As it moves closer to the equilibrium
position it moves faster.
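As a minimal mathematical sketch (the standard textbook result for an ideal mass on a spring, stated here for orientation rather than derived), the displacement from equilibrium follows a sine curve whose angular frequency depends on the spring stiffness $k$ and the mass $m$:

$$x(t) = A\sin(\omega t + \phi), \qquad \omega = \sqrt{\frac{k}{m}}, \qquad T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{m}{k}}$$

Here $A$ is the amplitude (the maximum stretch) and $T$ is the period of one complete oscillation.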
# FHSST Physics/Vectors
# Vectors
NOTE TO SELF: SW: initially this chapter had a very mathematical
approach. I have toned this down and tried to present in a logical way
the techniques of vector manipulation after first exploring the
mathematical properties of vectors. Most of the PGCE comments revolved
around the omission of the graphical techniques of vector addition (i.e.
scale diagrams), incline questions and equilibrium of forces. I have
addressed the first two; equilibrium of forces and the triangle law of
three forces in equilibrium would, I think, be better off in the Forces
Chapter. Also, the Forces chapter should include some examples of inclined
planes. Inclines are introduced here merely as an example of components
in action!
# FHSST Physics/Momentum
**Momentum** is the product of the mass and velocity of an object. In
general the momentum of an object can be conceptually thought of as the
tendency for an object to continue in its current state of motion, speed
and direction.
As such, it is a natural consequence of Newton\'s first law.
Momentum is a conserved quantity, meaning that the total momentum of any
closed system cannot be changed.
## Momentum in classical mechanics
If an object is moving in any reference frame, then it has momentum *in
that frame*. It is important to note that momentum is frame dependent.
That is, the same object may have a certain momentum in one frame of
reference, but a different amount in another frame.
The amount of momentum that an object has depends on two physical
quantities: the mass and the velocity of the moving object in the frame
of reference. In physics, the symbol for momentum is usually denoted by
$\vec p$ , so this can be written:
$$\vec p = m\vec v$$

where $\vec p$ is the momentum, $m$ is the mass, and $\vec v$ is the velocity.
The velocity of an object is given by its speed and its direction.
Because momentum depends on velocity, it too has a magnitude and a
direction and is a vector quantity. For example the momentum of a $5kg$
bowling ball would have to be described by the statement that it was
moving westward at $2\frac{m}{s}$ . It is insufficient to say that the
ball has $10kg\cdot\frac{m}{s}$ of momentum because momentum is not
fully described unless its direction is given.
## Conservation of momentum
As far as we know, momentum is a conserved quantity. **Conservation of
momentum** (sometimes also **conservation of impulse**) states that the
total amount of momentum of all the things in the universe will never
change. One of the consequences of this is that the center of mass of
any system of objects will always continue with the same velocity unless
acted on by a force outside the system.
Conservation of momentum is a consequence of the homogeneity of space.
In an isolated system (one where external forces are absent) the total
momentum will be constant: this is implied by Newton\'s first law of
motion. Newton\'s third law of motion, the law of reciprocal actions,
which dictates that the forces acting between systems are equal in
magnitude, but opposite in sign, is due to the conservation of momentum.
Since momentum is a vector quantity it has direction. Thus when a gun is
fired, although overall movement has increased compared to before the
shot was fired, the momentum of the bullet in one direction is equal in
magnitude, but opposite in sign, to the momentum of the gun in the other
direction. These then sum to zero which is equal to the zero momentum
that was present before either the gun or the bullet was moving.
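As an illustration (with made-up numbers, not figures from the text): suppose a bullet of mass $0.01kg$ leaves a gun of mass $3kg$ at $300\frac{m}{s}$. Since the total momentum before firing was zero, conservation of momentum gives the recoil velocity of the gun:

$$m_b \vec v_b + m_g \vec v_g = 0 \quad\Rightarrow\quad \vec v_g = -\frac{m_b}{m_g}\vec v_b = -\frac{0.01}{3}\left(300\frac{m}{s}\right) = -1\frac{m}{s}$$

The minus sign shows that the gun moves in the direction opposite to the bullet.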
# FHSST Physics/Newtonian Gravitation
# Newtonian Gravitation
*Figure: Sir Isaac Newton.*

All objects on Earth are pulled downward, towards the ground. This
phenomenon is called **gravity**. Every object falls just as fast as any
other object (unless the air slows it down like a feather, or pushes it
up like a balloon), as first shown by
Galileo. In 1687 Isaac
Newton stated that gravity
is not restricted to the Earth, but instead, there is gravity everywhere
in the universe. Newton explained that planets, moons, and comets move
in orbits because of the effect of gravity.
# FHSST Physics/Pressure
**Essay 3: Pressure and Forces**
Author: Asogan Moodaly
Asogan Moodaly received his Bachelor of Science degree (with honours) in
Mechanical Engineering from the University of Natal, Durban in South
Africa. For his final year design project he worked on a 3-axis filament
winding machine for composite (glass-reinforced plastic in this case)
piping. He worked in Vereeniging, Gauteng at Mine Support Products (a
subsidiary of Dorbyl Heavy Engineering) as the design engineer once he
graduated. He currently lives in the Vaal Triangle area and is working
for Sasol Technology Engineering as a mechanical engineer, ensuring the
safety and integrity of equipment installed during projects.
# Pressure and Forces
In the mining industry, the roof (hangingwall) tends to drop as the face
of the tunnel (stope) is excavated for rock containing gold.
As one can imagine, a roof falling on one\'s head is not a nice
prospect! Therefore the roof needs to be supported.
![](Fhsst_press1.png "Fhsst_press1.png"){width="363"}
The roof is not one big uniform chunk of rock. Rather it is broken up
into smaller chunks. It is assumed that the biggest chunk of rock in the
roof has a mass of less than 20 000 kg therefore each support has to be
designed to resist a force related to that mass. The strength of the
material (either wood or steel) making up the support is taken into
account when working out the minimum required size and thickness of the
parts to withstand the force of the roof.
![](Fhsst_press2.png "Fhsst_press2.png"){width="363"}
Sometimes the design of the support is such that the support needs to
withstand the rock mass without the force breaking the roof.
Therefore hydraulic supports (hydro = water) use the principles of force
and pressure such that as a force is exerted on the support, the water
pressure increases. A pressure relief valve then squirts out water when
the pressure (and thus the force) gets too large. Imagine a very large,
modified doctor\'s syringe.
![](Fhsst_press3.png "Fhsst_press3.png"){width="363"}
In the petrochemical industry, there are many vessels and pipes that are
under high pressures. A vessel is a containment unit (imagine a pot
without handles, with the lid welded to the pot; that would be a small
vessel) where chemicals mix and react to form other chemicals,
amongst other uses.
![](Fhsst_press4.png "Fhsst_press4.png"){width="335"}
The end product chemicals are sold to companies that use these chemicals
to make shampoo, dishwashing liquid, plastic containers, fertilizer,
etc. Anyway, some of these chemical reactions require high temperatures
and pressures in order to work. These pressures result in forces being
applied to the insides of the vessels and pipes. Therefore the minimum
thickness of the pipe and vessels walls must be determined using
calculations, to withstand these forces. These calculations take into
account the strength of the material (typically steel, plastic or
composite), the diameter and of course the pressure inside the
equipment.
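As a rough sketch of the kind of calculation involved (using the common thin-walled cylinder approximation and illustrative numbers, not values from this essay): the wall stress grows with the internal pressure $P$ and diameter $d$ and shrinks with the wall thickness $t$, so the minimum thickness follows from the allowable stress of the material:

$$\sigma = \frac{P\,d}{2t} \quad\Rightarrow\quad t_{min} = \frac{P\,d}{2\,\sigma_{allow}} = \frac{(2\times 10^{6}\ \mathrm{Pa})(0.5\ \mathrm{m})}{2\,(100\times 10^{6}\ \mathrm{Pa})} = 5\ \mathrm{mm}$$

In practice a safety factor and a corrosion allowance would be added on top of this minimum.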
# FHSST Physics/Electrostatics
# Electrostatics
Electrostatics is the study of the effects of static charges. These
charges are produced by too many or too few electrons. Electrons, in
turn, are the most mobile charge carriers, and are found spread around
the positive charge in
the nucleus of the atom. If there are too few electrons, a positive
charge will appear. If there are too many, a negative charge will
appear. If the charges are in balance, then no charge is detected. The
electron is a fundamental particle, and has interesting characteristics,
besides the charge, it spins, and this spin gives rise to a magnetic
field, much like an electric motor.
The decision to call some charges positive and others negative was
*arbitrary*. This gives rise to a potential misunderstanding when
charges are moving in a circuit. This will be deferred to a later
section, on electricity.
The fundamental study of these charges in isolation can best be observed
by experiment. If we generate a charge by friction, say fur on hard
rubber, or silk on glass, we can transfer the charge to a small ball
coated with aluminum foil. This ball is held up by a thin thread. Nylon
or silk work well.
If however the air is damp, the charge will leak off the metal balls.
Two such balls, charged by the same static source will repel each other.
If we use two different sources, they will attract each other. Maybe.
We can measure the amount of attraction or repulsion by a simple means.
If we measure the weight of these balls, then we know that they are
attracted by the earth. Other forces on these balls will cause them to
swing away from hanging straight down.
By measuring the movement, or the angle, we can figure out what tiny
force is pushing the ball away from the vertical. The tangent function
is the perfect candidate to resolve this problem.
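As a minimal sketch of that calculation (the numbers are illustrative assumptions, not measurements from the text): for a ball of mass $m$ hanging from a thread and pushed sideways by a horizontal electrostatic force $F$, balancing the thread tension against the weight and $F$ gives

$$\tan\theta = \frac{F}{mg} \quad\Rightarrow\quad F = mg\tan\theta$$

For example, a $0.5\ \mathrm{g}$ ball deflected by $10^\circ$ from the vertical would be feeling a force of roughly $(5\times10^{-4}\ \mathrm{kg})(9.8\ \mathrm{m/s^2})\tan 10^\circ \approx 8.6\times10^{-4}\ \mathrm{N}$.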
It would be nice to find out what happens when some charge is removed.
It turns out, that if we use a third ball, of the same size as the other
two, we can exactly halve the charge.
From this we can see that we can measure the effects of increasing
distance, and decreasing charge.
# FHSST Physics/Electricity
# Electricity
> *Warning*: We believe in experimenting and learning about physics at every opportunity, BUT playing with electricity can be **EXTREMELY DANGEROUS!** Do not try to build home made circuits without someone who knows if what you are doing is safe. Normal electrical outlets are dangerous. Treat electricity with respect in your everyday life.
> You will encounter electricity every day for the rest of your life and to make sure you are able to make wise decisions we have included an entire chapter on electrical safety. Please read it - not only will it make you safer but it will show the applications of many of the ideas you will learn in this chapter.
# FHSST Physics/Atomic Nucleus
# Inside atomic nucleus
Amazingly enough, the human mind, contained inside a couple of liters of
brain, is able to deal with extremely large as well as extremely small
objects such as the whole universe and its smallest
building blocks. So, what are these building blocks? As we already know,
the universe consists of galaxies, which consist of stars with planets
moving around. The planets are made of molecules, which are bound groups
(chemical compounds) of atoms.
There are more than stars in the universe. Currently, scientists know
over 12 million chemical compounds i.e. 12 million different molecules.
All this variety of molecules is made of only a hundred of different
atoms. For those who believe in beauty and harmony of nature, this
number is still too large. They would expect to have just few different
things from which all other substances are made. In this chapter, we are
going to find out what these elementary things are.
# Physics Study Guide/Purpose
## Purpose of Physics
The aim of the study of physics is to understand the natural world, in
its broadest and most fundamental sense. By understanding it, we hope to
be able to explain and predict (typically through the mathematics that
are developed) and ultimately modify (oftentimes through resulting
technology) events that occur. Physics deals with everything that
happens in this Universe, tries to understand it, and provides the
underlying causes in a logical manner.
# Physics Study Guide/Scientific Method
## Scientific Method
In order to uncover these \'laws of nature\', physics (like all science)
relies on a *deliberately-structured process* of
- observing a natural phenomenon (for example, through experimentation),
- creating a theory or model,
- testing the theory or model,
- adjusting the theory or model based on the results of the test,
- and repeating the above process with the adjusted theory or model.
This process is known as the \'scientific method\'. Most curriculums
thus incorporate elements of both theory (principally, what laws others
have found in the past) and practical (how to undertake experiments and
observations).
# Physics Study Guide/Normal Force and Friction
## The Normal Force
Why is it that we stay steady in our chairs when we sit down? According
to the first law of motion, if an object is translationally in
equilibrium (velocity is constant), the sum of all the forces acting on
the object must be equal to zero. For a person sitting on a chair, it
can thus be postulated that a **normal force** is present balancing the
**gravitational force** that pulls the sitting person down. However, it
should be noted that the normal force cancels the other forces exactly
only in some cases, such as that of a sitting person. In Physics, the
term **normal** as a modifier of the **force** implies that this force
is acting perpendicular to the surface at the point of contact of the
two objects in question. Imagine a person leaning on a vertical wall.
Since the person does not stumble or fall, he/she must be in
equilibrium. Thus, the horizontal force he/she exerts on the wall is
balanced or countered (opposite direction) by an equal amount of force
from the wall -- this force is the *normal force* exerted by the wall.
So, on a slope,
the normal force would not point upwards as on a horizontal surface but
rather perpendicular to the slope surface.
The normal force can be provided by any one of the four fundamental
forces, but is typically provided by electromagnetism since
microscopically, it is the repulsion of electrons that enables
interaction between surfaces of matter. There is no easy way to
calculate the normal force, other than by assuming first that there is a
normal force acting on a body in contact with a surface (direction
perpendicular to the surface). If the object is not accelerating (for
the case of uniform circular motion, the object is accelerating) then
somehow, the magnitude of the normal force can be solved. In most cases,
the magnitude of the normal force can be solved together with other
unknowns in a given problem.
Sometimes, the problem does not warrant the knowledge of the normal
force(s). It is in this regard that other formalisms (e.g. Lagrange
method of undetermined coefficients) can be used to eventually *solve*
the physical problem.
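As a minimal worked sketch (with illustrative numbers): for a block of mass $m$ resting on a frictionless incline of angle $\theta$, with no acceleration perpendicular to the surface, the normal force balances only the perpendicular component of the weight:

$$N = mg\cos\theta, \qquad \text{e.g. } N = (5\ \mathrm{kg})(9.8\ \mathrm{m/s^2})\cos 30^\circ \approx 42\ \mathrm{N}$$

This is less than the full weight of about $49\ \mathrm{N}$, because the remaining component of the weight acts along the slope.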
## Friction
When there is relative motion between two surfaces, there is a
resistance to the motion. This force is called friction. Friction is the
reason why people may have trouble accepting Newton\'s first law of
Motion, that an object tends to keep its state of motion. Friction acts
opposite to the direction of the original force.

$$F_f = \mu N$$

**The frictional force** is equal to the **frictional coefficient** times the **normal force**.
In order to set a body into a state of motion, the forward force or the
thrust force exerted upon the body must be greater in magnitude than the
maximum frictional value encountered upon the surface with which the
body is in contact with. If the thrust force does not exceed in
magnitude over the maximum frictional value or limiting value of motion
then the body shall not be set into motion.
Friction is caused by attractive forces between the molecules near the
surfaces of the objects. If two steel plates are made really flat,
polished, cleaned, and made to touch in a vacuum, they bond together. It
would look as if the steel was just one piece. The bonds are formed as in
a normal steel piece. This is called cold welding, and this is the main
cause of friction.
The above equation is an empirical one --- in general, the frictional
coefficient is not constant. However, for a large variety of contact
surfaces, there is a well characterized value. This kind of friction is
called Coulomb friction. There is a separate coefficient for both static
and kinetic friction. This is because once an object is pushed on, it
will suddenly jerk once you apply enough force and it begins to move.
Also, the frictional coefficient varies greatly depending on what two
substances are in contact, and the temperature and smoothness of the two
substances. For example, the frictional coefficients of glass on glass
are very high. When you have similar materials, in most cases you don\'t
have Coulomb friction.
For **static friction**, the force of friction actually increases
proportionally to the force applied, keeping the body immobile. Once,
however, the force exceeds the maximum frictional force, the body will
begin to move. The maximum frictional force is calculated as follows:

$$F_s \le \mu_s N$$

**The static frictional force** is less than or equal to the **coefficient of static friction** times the **normal force**. Once the
frictional force equals the coefficient of static friction times the
normal force, the object will break away and begin to move.
Once it is moving, the frictional force then obeys:

$$F_k = \mu_k N$$

**The kinetic frictional force** is equal to the **coefficient of kinetic friction** times the **normal force**. As stated before, this always opposes the direction of motion.
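As a minimal worked sketch (numbers chosen for illustration): to start sliding a $10\ \mathrm{kg}$ box across a floor with $\mu_s = 0.4$, the applied horizontal force must exceed

$$F > \mu_s N = \mu_s mg = (0.4)(10\ \mathrm{kg})(9.8\ \mathrm{m/s^2}) \approx 39\ \mathrm{N}$$

Once the box is moving, the (usually smaller) coefficient of kinetic friction takes over.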
## Variables
  Symbol        Units           Definition
  ------------- --------------- -------------------------
  $\vec{F}_f$   $\mathrm{N}$    Force of friction
  $\mu$         none            Coefficient of friction

## Definition of Terms
**Normal force (N):** The force on an object perpendicular to the
surface it rests on utilized in order to account for the body\'s lack of
movement. Units: newtons (N)\
\
**Force of friction (F~f~):** The force placed on a moving object
opposite its direction of motion due to the inherent roughness of all
surfaces. Units: newtons (N)\
\
**Coefficient of friction (**μ**):** The coefficient that determines the
amount of friction. This varies tremendously based on the surfaces in
contact. There are no units for the coefficient of either static or
kinetic friction\
It\'s important to note that in real life we often have to deal with
viscous and turbulent friction -- they appear when you move a body
through a fluid medium.
Viscous friction is proportional to velocity and dominates at relatively
low speeds. Turbulent friction is proportional to $V^2$ and takes place
at higher velocities.
# Physics Study Guide/Circular Motion
## Uniform Circular Motion
### Speed and frequency
*Figure: A two-dimensional polar coordinate system. The point $M$ can be located in the 2D plane as $(a,b)$ in the Cartesian coordinate system or as $(r,\theta)$ in the polar coordinate system.*
Uniform circular motion assumes that an object is moving (1) in circular
motion, and (2) at constant speed $v$; then

$$v = \frac{2\pi r}{T}$$

where $r$ is the radius of the circular path, and $T$ is the time period
for one revolution.

Any object travelling on a circle will return to its original starting
point in the period of one revolution, $T$. At this point the object has
travelled a distance $2\pi r$. If $T$ is the time that it takes to
travel distance $2\pi r$ then the object\'s speed is

$$v = \frac{2\pi r}{T} = 2\pi r f$$

where $f=\frac1T$ is the frequency of revolution.
### Angular frequency
Uniform circular motion can be explicitly described in terms of polar
coordinates through angular frequency, $\omega$ :

$$\omega = \frac{\Delta\theta}{\Delta t}$$

where $\theta$ is the angular coordinate of the object (see the diagram
on the right-hand side for reference).

Since the speed in uniform circular motion is constant, it follows that
$\omega$ is constant. From that fact, a number of useful relations follow:

$$\omega = \frac{2\pi}{T} = 2\pi f, \qquad v = \omega r$$

The equations that relate how $\theta$ changes with time are analogous
to those of linear motion at constant speed. In particular,

$$\theta = \theta_0 + \omega t$$

The angle at $t=0$, $\theta_0$, is commonly referred to as *phase*.
### Velocity, centripetal acceleration and force
The position of an object in a plane can be converted from polar to
cartesian coordinates through the equations

$$x = r\cos\theta, \qquad y = r\sin\theta$$

Expressing $\theta$ as a function of time gives equations for the
cartesian coordinates as a function of time in uniform circular motion:

$$x = r\cos(\theta_0 + \omega t), \qquad y = r\sin(\theta_0 + \omega t)$$

Differentiation with respect to time gives the components of the
velocity vector:

$$v_x = -\omega r\sin(\theta_0 + \omega t), \qquad v_y = \omega r\cos(\theta_0 + \omega t)$$

Velocity in circular motion is a vector tangential to the trajectory of
the object. Furthermore, even though the speed is constant the velocity
vector changes direction over time. Further differentiation leads to the
components of the acceleration (which are just the rate of change of the
velocity components):

$$a_x = -\omega^2 r\cos(\theta_0 + \omega t), \qquad a_y = -\omega^2 r\sin(\theta_0 + \omega t)$$

The acceleration vector is perpendicular to the velocity and oriented
towards the centre of the circular trajectory. For that reason,
acceleration in circular motion is referred to as *centripetal
acceleration*.

The absolute value of centripetal acceleration may be readily obtained
by

$$a = \omega^2 r = \frac{v^2}{r}$$

For centripetal acceleration, and therefore circular motion, to be
maintained a *centripetal force* must act on the object. From Newton\'s
Second Law it follows directly that the force will be given by

$$\vec F = m\vec a$$

the components being

$$F_x = -m\omega^2 r\cos(\theta_0 + \omega t), \qquad F_y = -m\omega^2 r\sin(\theta_0 + \omega t)$$

and the absolute value

$$F = m\omega^2 r = \frac{mv^2}{r}$$
*Video: Example of finding the centripetal acceleration of the Moon in orbit.*
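As a minimal worked sketch with made-up numbers: an object moving on a circle of radius $r = 2\ \mathrm{m}$ once every $T = 4\ \mathrm{s}$ has

$$v = \frac{2\pi r}{T} = \frac{2\pi(2\ \mathrm{m})}{4\ \mathrm{s}} \approx 3.14\ \mathrm{m/s}, \qquad a = \frac{v^2}{r} \approx 4.9\ \mathrm{m/s^2}$$

with the acceleration directed towards the centre of the circle.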
# Physics Study Guide/Torque
## Torque and Circular Motion
Circular motion is the motion of a particle at a set distance (called
radius) from a point. For circular motion, there needs to be a force
that makes the particle turn. This force is called the \'centripetal
force.\' Please note that the centripetal force is *not* a new type of
force-it is just a force causing rotational motion. To make this
clearer, let us study the following examples:
1. If a person ties a piece of thread to a small pebble and rotates it in
a horizontal circle above his head, the circular motion of the
pebble is caused by the tension force in the thread.
2. In the case of the motion of the planets around the sun (which is
roughly circular), the force is provided by the gravitational force
exerted by the sun on the planets.
Thus, we see that the centripetal force acting on a body is always
provided by some other type of force \-- centripetal force, thus, is
simply a name to indicate the force that provides this circular motion.
This centripetal force is *always* acting inward toward the center. You
will know this if you swing an object in a circular motion. If you
notice carefully, you will see that you have to continuously pull
inward. We know that an opposite force should exist for this centripetal
force(by Newton\'s 3rd Law of Motion). This is the centrifugal force,
which exists only if we study the body from a non-inertial frame of
reference(an accelerating frame of reference, such as in circular
motion). This is a so-called \'pseudo-force\', which is used to make the
Newton\'s law applicable to the person who is inside a non-inertial
frame. e.g. If a driver suddenly turns the car to the left, you go
towards the right side of the car because of centrifugal force. The
centrifugal force is equal and opposite to the centripetal force. It is
caused due to inertia of a body.
$$\omega_{\text{avg}}=\frac{\omega+\omega_f}{2}=\frac{\theta}{t}$$
**Average angular velocity** is equal to one-half of the sum of
**initial** and **final angular velocities** assuming constant
acceleration, and is also equal to the **angle gone through** divided by
the **time taken**.
------------------------------------------------------------------------
$$\alpha=\frac{\Delta\omega}{t}$$
**Angular acceleration** is equal to **change in angular velocity**
divided by **time taken**.
### Angular momentum
**Angular momentum** of an object revolving around an external axis $O$
is equal to the cross-product of the **position vector** with respect to
$O$ and its **linear momentum**.

$$\vec L = \vec r \times \vec p$$
**Angular momentum** of a rotating object is equal to the **moment of
inertia** times **angular velocity**.
$$L=I\omega$$
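As a minimal worked sketch (illustrative numbers): a flywheel with moment of inertia $I = 2\ kg\!\cdot\!m^2$ spinning at $\omega = 10\ \frac{\text{rad}}{s}$ has

$$L = I\omega = (2\ kg\!\cdot\!m^2)\left(10\ \tfrac{\text{rad}}{s}\right) = 20\ kg\!\cdot\!\tfrac{m^2}{s}$$

and a constant torque of $\tau = 4\ N\!\cdot\!m$ applied to it would produce an angular acceleration of $\alpha = \tau/I = 2\ \frac{\text{rad}}{s^2}$.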
------------------------------------------------------------------------
$$\tau=I\alpha=\frac{\Delta L}{t}$$

$$K_r = \frac{1}{2}I\omega^2$$

**Rotational Kinetic Energy** is equal to one-half of the product of
**moment of inertia** and the **angular velocity** squared.
It is useful to note that the equations for rotational motion are
analogous to those for linear motion -- just look at those listed above.
When studying rotational dynamics, remember:
- the place of force is taken by torque
- the place of mass is taken by moment of inertia
- the place of displacement is taken by angle
- the place of linear velocity, momentum, acceleration, etc. is taken by their angular counterparts.
### Definition of terms
+----------------------------------------------------------------------+
| **Torque** ($\vec\tau$): Force times distance. A vector. |
| $N\!\cdot\!m$ |
| |
| **Moment of inertia** ($I$): Describes the object\'s resistance to |
| torque --- the rotational analog to inertial mass. $kg\!\cdot\!m^2$ |
| |
| **Angular momentum** ($\vec L$): $kg\!\cdot\!\frac{m^2}{s}$ |
| |
| **Angular velocity** ($\vec\omega$): $\frac{\text{rad}}{s}$ |
| |
| **Angular acceleration** ($\vec\alpha$): $\frac{\text{rad}}{s^2}$ |
| |
| **Rotational kinetic energy** ($K_r$): |
| $J=kg\!\cdot\!\left(\frac{m}{s}\right)^2$ |
| |
| **Time** ($t$): $s$ |
+----------------------------------------------------------------------+
# Physics Study Guide/Waves
# Waves
A wave is defined as the movement of any periodic motion, such as a
spring, a pendulum, a water wave, an electric wave, a sound wave, a
light wave, etc.
*Figure: A wave with constant amplitude.*
Any periodic wave whose amplitude varies sinusoidally with time and phase can be expressed mathematically as

$$R(t, \theta) = R\sin(\omega t + \theta)$$

- Minimum wave height (trough) at angle 0, π, 2π, \...: $F(R,t,\theta) = 0$ at $\theta = n\pi$
- Maximum wave height (peak or crest) at π/2, 3π/2, \...: $F(R,t,\theta) = R$ at $\theta = (2n+1)\pi/2$
- Wavelength (distance between two crests): $\lambda = 2\pi$ corresponds to one circle or one wave, $2\lambda = 2(2\pi)$ to two circles or two waves, and $k\lambda = k(2\pi)$ to $k$ waves
- Wave number: $k$
- Angular velocity: $\omega = 2\pi f$
- Frequency: $f = 1/t$
- Period: $t = 1/f$
$$v = f\lambda$$

**Wave speed** is equal to the **frequency** times the **wavelength**. It can be understood as how frequently a certain distance (the wavelength in this case) is traversed.

$$f = \frac{v}{\lambda}$$

**Frequency** is equal to **speed** divided by **wavelength**.

$$T = \frac{1}{f}$$

**Period** is equal to the inverse of **frequency**.
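As a quick worked sketch (illustrative numbers): a sound wave travelling at about $340\ \mathrm{m/s}$ with a frequency of $170\ \mathrm{Hz}$ has

$$\lambda = \frac{v}{f} = \frac{340\ \mathrm{m/s}}{170\ \mathrm{Hz}} = 2\ \mathrm{m}, \qquad T = \frac{1}{f} \approx 5.9\ \mathrm{ms}$$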
## Variables
+-------------------------------+
| **λ:** wavelength (m)\ |
| **v:** wave speed (m/s)\ |
| **f:** frequency (1/s), (Hz)\ |
| **T:** period (s) |
+-------------------------------+
## Definition of terms
+----------------------------------------------------------------------+
| **Wavelength (*λ*)**: The length of one wave, or the distance from a |
| point on one wave to the same point on the next wave. Units: meters |
| (m). In light, *λ* tells us the color.\ |
| \ |
| **Wave speed (v)**: the speed at which the wave pattern moves. |
| Units: meters per second, (m/s)\ |
| \ |
| **Frequency of oscillation (*f*)** (or just **frequency**): the |
| number of times the wave pattern repeats itself in one second. |
| Units: seconds^-1^ = (1/s) = hertz (Hz) In sound, *f* tells us the |
| pitch. The inverse of frequency is the period of oscillation.\ |
| \ |
| **Period of oscillation (*T*)** (or just **period**): duration of |
| time between one wave and the next one passing the same spot. Units: |
| seconds (s). The inverse of the period is frequency. Use a capital, |
| italic *T* and not a lowercase one, which is used for time.\ |
| \ |
| **Amplitude (*A*)**: the maximum height of the wave measured from |
| the average height of the wave (the wave's center). Unit: meters (m) |
+----------------------------------------------------------------------+
The wave's extremes, its peaks and valleys, are called **antinodes**. At
the middle of the wave are points that do not move, called **nodes**.
*Examples of waves:* Water waves, sound waves, light waves, seismic
waves, shock waves, electromagnetic waves ...
## Oscillation
A wave is said to oscillate, which means to move back and forth in a
regular, repeating way. This fluctuation can be between extremes of
position, force, or quantity. Different types of waves have different
types of oscillations.
**Longitudinal waves:** Oscillation is parallel to the direction of the
wave. Examples: sound waves, waves in a spring.
**Transverse waves:** Oscillation is perpendicular to direction of the
wave. Example: light
## Interference
When waves overlap each other it is called **interference**. This is
divided into **constructive** and **destructive** interference.
**Constructive interference:** the waves line up perfectly and add to
each others' strength.
**Destructive interference:** the two waves cancel each other out,
resulting in no wave. This happens when the phase angle between them is
180 degrees.
## Resonance
In real life, waves usually give a mishmash of constructive and
destructive interference and quickly die out. However, at certain
wavelengths standing waves form, resulting in **resonance**. These are
waves that bounce back into themselves in a strengthening way, reaching
maximum amplitude.
*Resonance is a special case of forced vibration when the frequency of
the impressed periodic force is equal to the natural frequency of the
body so that it vibrates with increased amplitude, spontaneously.*
# Physics Study Guide/Standing waves
# Standing waves
----------------------------------------------
$\|\vec{v}\|=\sqrt{\frac{F}{\mu}}$
----------------------------------------------
**Wave speed** is equal to the square root of **tension** divided by the
**linear density** of the string.
+-------------------------------------------------+
| ```{=html} |
| <div style="text-align: center;"> |
| ``` |
| `<big>`{=html}***μ* = *m*/*L*** `</big>`{=html} |
| |
| ```{=html} |
| </div> |
| ``` |
+-------------------------------------------------+
**Linear density** of the string is equal to the **mass** divided by the
**length** of the string.
+---------------------------------------------------+
| ```{=html} |
| <div style="text-align: center;"> |
| ``` |
| `<big>`{=html}***λ*~max~ = 2*L*** `</big>`{=html} |
| |
| ```{=html} |
| </div> |
| ``` |
+---------------------------------------------------+
The **fundamental wavelength** is equal to two times the **length** of
the string.
## Variables
+---------------------------------------+
| *λ:* wavelength (m)\ |
| *λ~max~:* fundamental wavelength (m)\ |
| *μ:* linear density (g/m)\ |
| *v:* wave speed (m/s)\ |
| *F:* force (N)\ |
| *m:* mass (kg)\ |
| *L:* length of the string (m)\ |
| *l:* meters (m) |
+---------------------------------------+
## Definition of terms
+----------------------------------------------------------------------+
| **Tension (*F*):** (not frequency) in the string (*t* is used for |
| time in these equations). Units: newtons (N)\ |
| \ |
| **Linear density (*μ*):** of the string, Greek mu. Units: grams per |
| meter (g/m)\ |
| \ |
| **Velocity (*v*)** of the wave (m/s)\ |
| \ |
| **Mass (*m*)**: Units: grams (g). (We would use kilograms but they |
| are too big for most strings).\ |
| \ |
| **Length of the string (*L*):** Units: meters (m) |
+----------------------------------------------------------------------+
Fundamental frequency: the frequency when the wavelength is the longest
allowed; this gives us the lowest sound that we can get from the system.
In a string, the length of the string is half of the largest wavelength
that can create a standing wave, called its fundamental wavelength.
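As a minimal worked sketch (illustrative numbers): a string of length $L = 0.65\ \mathrm{m}$ has fundamental wavelength $\lambda_{max} = 2L = 1.3\ \mathrm{m}$; if the wave speed on the string is $v = 260\ \mathrm{m/s}$, the fundamental frequency is

$$f_1 = \frac{v}{\lambda_{max}} = \frac{260\ \mathrm{m/s}}{1.3\ \mathrm{m}} = 200\ \mathrm{Hz}$$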
# Physics Study Guide/Sound
Sound is defined as a mechanical, sinusoidal, longitudinal pressure wave:
it oscillates the pressure of a transmitting medium by means of adiabatic
compression and decompression of the particles within that medium,
producing frequencies audible within the hearing range, that is, between
the threshold of audibility and the threshold of pain on a
Fletcher-Munson equal-loudness contour diagram.
## Intro
When two glasses collide, we hear a sound. When we pluck a guitar
string, we hear a sound.
Different sounds are generated from different sources. Generally
speaking, the collision of two objects results in a sound.
Sound does not exist in a vacuum; it travels through the materials of a
medium. Sound is a longitudinal wave in which the mechanical vibration
constituting the wave occurs along the direction of the wave\'s
propagation.
The velocity of sound waves depends on the temperature and the pressure
of the medium. For example, sound travels at different speeds in air and
water. We can therefore define sound as a mechanical disturbance,
produced by the collision of two or more physical bodies displaced from a
state of equilibrium, that propagates through an elastic material medium.
# Sound
$$decibel(\mathrm{dB}) = 10\cdot \log\left(\frac{I_1}{I_0}\right)$$
*Fig. 1: The Fletcher-Munson equal-loudness contours. Phons are labelled in blue.*

The amplitude is the magnitude of sound pressure change within a sound
wave. Sound amplitude can be measured in pascals (Pa), though it is more
common to refer to the *sound (pressure) level* as sound intensity (dB,
dB SPL, dB(SPL)), and the *perceived sound level* as loudness (dBA,
dB(A)). **Sound intensity** is the flow of sound energy per unit time
through a fixed area. It has units of watts per square meter. The
reference intensity is defined as the minimum intensity that is audible
to the human ear; it is equal to 10^-12^ W/m^2^, or one picowatt per
square meter. When the intensity is quoted in decibels this reference
value is used. **Loudness** is sound intensity altered according to the
frequency response of the human ear and is measured in a unit called the
A-weighted decibel (dB(A), which also used to be called the phon).
## The Decibel
The decibel is not, as is commonly believed, the unit of sound. Sound is
measured in terms of pressure. However, the decibel is used to express
the pressure as very large variations of pressure are commonly
encountered. The decibel is a dimensionless quantity and is used to
express the ratio of one power quantity to another. The definition of
the decibel is $10\cdot \log_{10}\left(\frac{x}{x_0}\right)$, where x is
a squared quantity, i.e. pressure squared, volts squared, etc. The
decibel is useful for defining relative changes. For instance, the
required sound decrease for new cars might be 3 dB; this means that,
compared to the old car, the new car must be 3 dB quieter. The absolute
level of the car, in this case, does not matter.
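As a quick check of what such a relative figure means (a standard property of the logarithm, not a value taken from the text), a 3 dB change corresponds almost exactly to a factor of two in the underlying power-like quantity:

$$10\cdot \log_{10}(2) \approx 3.01\ \mathrm{dB}, \qquad 10\cdot \log_{10}\left(\tfrac{1}{2}\right) \approx -3.01\ \mathrm{dB}$$

So a car that is \"3 dB quieter\" emits roughly half the sound power of the old one.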
+-------------------------------------------------------------------------+
| ```{=html} |
| <center> |
| ``` |
| `<big>`{=html}**$I_0 = 10^{-12} \mbox{ W}/\mbox{m}^2$** `</big>`{=html} |
| |
| ```{=html} |
| </center> |
| ``` |
+-------------------------------------------------------------------------+
## Definition of terms
+----------------------------------------------------------------------+
| **Intensity (I):** the amount of energy transferred through 1 m^2^ |
| each second. Units: watts per square meter\ |
| \ |
+----------------------------------------------------------------------+
+----------------------------------------------------------------------+
| **Lowest audible sound:** I = 0 dB = 10^-12^ W/m^2^ (A sound with dB |
| \< 0 is inaudible to a human.)\ |
| \ |
| **Threshold of pain:** I = 120 dB = 10 W/m^2^ |
+----------------------------------------------------------------------+
*Sample equation:* **Change in sound intensity**\
Δβ = β~2~ - β~1~\
= 10 log(*I*~2~/*I*~0~) - 10 log(*I*~1~/*I*~0~)\
= 10 \[log(*I*~2~/*I*~0~) - log(*I*~1~/*I*~0~)\]\
= 10 log\[(*I*~2~/*I*~0~)/(*I*~1~/*I*~0~)\]\
= 10 log(*I*~2~/*I*~1~)\
where log is the base-10 logarithm.
## Doppler effect
+------------------------------------------------------------------------+
| ```{=html} |
| <center> |
| ``` |
| `<big>`{=html} $f' = f \, \frac{v \pm v_0}{v \mp v_s}$ `</big>`{=html} |
| |
| ```{=html} |
| </center> |
| ``` |
+------------------------------------------------------------------------+
\
f\' is the observed frequency, f is the actual frequency, v is the speed
of sound ($v=336+0.6T$, where T is temperature in degrees Celsius),
$v_0$ is the speed of the observer, and $v_s$ is the speed of the source. If the
observer is approaching the source, use the top operator (the +) in the
numerator, and if the source is approaching the observer, use the top
operator (the -) in the denominator. If the observer is moving away from
the source, use the bottom operator (the -) in the numerator, and if the
source is moving away from the observer, use the bottom operator (the +)
in the denominator.
### Example problems
A. An ambulance, which is emitting a 400 Hz siren, is moving at a speed
of 30 m/s towards a stationary observer. The speed of sound in this case
is 339 m/s.
$f' = 400\,\mathrm{Hz} \left( \frac{339 + 0}{339 - 30} \right)$
B. An M551 Sheridan, moving at 10 m/s is following a Renault FT-17 which
is moving in the same direction at 5 m/s and emitting a 30 Hz tone. The
speed of sound in this case is 342 m/s.
$f' = 30\,\mathrm{Hz} \left( \frac{342 + 10}{342 + 5} \right)$
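Evaluating these two expressions (a straightforward arithmetic step added here for completeness):

$$f'_A = 400\,\mathrm{Hz}\left(\frac{339}{309}\right) \approx 439\ \mathrm{Hz}, \qquad f'_B = 30\,\mathrm{Hz}\left(\frac{352}{347}\right) \approx 30.4\ \mathrm{Hz}$$

As expected, the approaching ambulance is heard at a higher pitch than it emits, and the tank being followed is heard slightly above 30 Hz because the pursuing observer is closing the gap.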
# Physics Study Guide/Fluids
## Buoyancy
**Buoyancy** is the force due to pressure differences on the top and
bottom of an object under a fluid (gas or liquid).
Net force = buoyant force - force due to gravity on the object
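The buoyant force itself is given by Archimedes\' principle; as a minimal statement, for an object of volume $V$ fully submerged in a fluid of density $\rho_{fluid}$:

$$F_b = \rho_{fluid}\, g\, V, \qquad F_{net} = \rho_{fluid}\, g\, V - mg$$

If the net force is positive the object rises; if it is negative the object sinks.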
## Bernoulli\'s Principle
Fluid flow is a complex phenomenon. An ideal fluid may be described as:
- The fluid flow is **steady** i.e its velocity at each point is
constant with time.
- The fluid is **incompressible**. This condition applies well to
liquids and in certain circumstances to gases.
- The fluid flow is **non-viscous**. Internal friction is neglected.
An object moving through this fluid does not experience a retarding
force. We relax this condition in the discussion of **Stokes\'
Law**.
- The fluid flow is **irrotational**. There is no angular momentum of
the fluid about any point. A very small wheel placed at an arbitrary
point in the fluid does not rotate about its center. Note that if
turbulence is present, the wheel would most likely rotate and its
flow is then not irrotational.
As the fluid moves through a pipe of varying cross-section and
elevation, the pressure will change along the pipe. The Swiss physicist
Daniel Bernoulli
(1700-1782) first derived an
expression relating the pressure to fluid speed and height. This result
is a consequence of conservation of energy and applies to ideal fluids
as described above.
Consider an ideal fluid flowing in a pipe of varying cross-section.
A fluid in a section of length $\Delta x_1$ moves to the section of
length $\Delta x_2$ in time $\Delta t$. The relation given by Bernoulli
is:

$$P_1 + \frac{1}{2}\rho v_1^2 + \rho g h_1 = P_2 + \frac{1}{2}\rho v_2^2 + \rho g h_2$$

where:

$P$ is the pressure at a cross-section,
$h$ is the height of the cross-section,
$\rho$ is the density, and
$v$ is the velocity of the fluid at the cross-section.

In words, the Bernoulli relation may be stated as: *As we move along a
streamline the sum of the pressure ($P$), the kinetic energy per unit
volume and the potential energy per unit volume remains a constant.*
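As a minimal worked sketch (illustrative numbers): for water ($\rho \approx 1000\ \mathrm{kg/m^3}$) flowing through a horizontal pipe ($h_1 = h_2$) whose narrowing raises the speed from $v_1 = 1\ \mathrm{m/s}$ to $v_2 = 3\ \mathrm{m/s}$, the pressure drops by

$$P_1 - P_2 = \frac{1}{2}\rho\left(v_2^2 - v_1^2\right) = \frac{1}{2}(1000)(9 - 1)\ \mathrm{Pa} = 4000\ \mathrm{Pa}$$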
------------------------------------------------------------------------
*(To be concluded)*
# Physics Study Guide/Fields
# Fields
A field is one of the more difficult concepts to grasp in physics. A
field is an area or region in which an influence or force is effective
regardless of the presence or absence of a material medium. Simply put,
a **field** is a collection of vectors often representing the force an
object *would* feel if it were placed at any particular point in space.
With gravity, the field is measured in newtons per kilogram, as the force
depends solely on the mass of an object, but with electricity, it is
measured in newtons per coulomb, as the force on an electrical charge
depends on the amount of that charge. Typically these fields are
calculated by dividing out the effect of the body placed at the point in
space where the field is desired. As a result, a field is a vector, and
as such, it can (and should) be added vectorially when calculating the
field created by TWO objects at one point in space.
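As a minimal sketch of the two standard point-source cases (standard textbook formulas, stated here for illustration rather than taken from this text): at a distance $r$ from a point mass $M$ or a point charge $Q$,

$$g = \frac{GM}{r^2}\ \left(\mathrm{N/kg}\right), \qquad E = \frac{1}{4\pi\varepsilon_0}\,\frac{Q}{r^2}\ \left(\mathrm{N/C}\right)$$

The total field from two sources at a given point is the vector sum of the two individual fields.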
Fields are typically illustrated through the use of what are called
**field lines** or **lines of force**. Given a source that exerts a
force on points around it, sample lines are drawn representing the
direction of the field at points in space around the force-exerting
source.
There are three major categories of fields:
1. **Uniform fields** are fields that have the same value at any point
in space. As a result, the lines of force are parallel.
2. **Spherical fields** are fields that have an origin at a particular
point in space and vary at varying distances from that point.
3. **Complex fields** are fields that are difficult to work with mathematically (except in simple cases, such as fields created by two point objects), but field lines can still typically be drawn.
**Dipoles** are a specific kind of complex field.
Magnetism also has a field, measured in Tesla, and it also has **field
lines**, but its use is more complicated than simple \"force\" fields.
Secondly, it also only appears in a two-pole form, and as such, is
difficult to calculate easily.
The particles that form these magnetic fields and lines of force are
called electrons and not magnetons. A magneton is a quantity in
magnetism.
## Definition of terms
+----------------------------------------------------------------------+
| **Field:** A collection of vectors that often represents the force |
| that an object *would* feel if it were placed in any point in |
| space.\ |
| \ |
| **Field Lines:** A method of diagramming fields by drawing several |
| sample lines showing direction of the field through several points |
| in space.\ |
| \ |
+----------------------------------------------------------------------+
# Physics Study Guide/Thermodynamics
# Introduction
Thermodynamics deals with the movement of heat and its conversion to
mechanical and electrical energy among others.
# Laws of Thermodynamics
### First Law
The **First Law** is a statement of the conservation of energy law:
+----------------------------------+
| ```{=html} |
| <center> |
| ``` |
| `<big>`{=html}$\Delta U = Q - W$ |
+----------------------------------+
The **First Law** can be expressed as the change in
internal energy of a system ($\Delta U$) equals the amount of energy
added to a system (Q), such as heat, minus the work expended by the
system on its surroundings (W).
If Q is positive, the system has *gained* energy (by heating).
If W is positive, the system has *lost* energy from doing work on its
surroundings.
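As a minimal worked sketch (numbers chosen for illustration): if $500\ \mathrm{J}$ of heat is added to a gas while the gas does $200\ \mathrm{J}$ of work on its surroundings, then

$$\Delta U = Q - W = 500\ \mathrm{J} - 200\ \mathrm{J} = 300\ \mathrm{J}$$

so the internal energy of the gas rises by $300\ \mathrm{J}$.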
As written, the equations have a problem in that neither Q nor W are
**state functions**, i.e. quantities which can be known by direct
measurement without knowing the history of the system.
In a gas, the first law can be written in terms of state functions as
+----------------------------------+
| ```{=html} |
| <center> |
| ``` |
| `<big>`{=html}$dU = T dS - p dV$
+----------------------------------+
### Zero-th Law
After the first law of Thermodynamics had been named, physicists
realised that there was another more fundamental law, which they termed
the \'zero-th\'.
This is that:
--------------------------------------------------------------------------------------------
*If two bodies are at the same temperature, there is no resultant heat flow between them.*
--------------------------------------------------------------------------------------------
An alternate form of the \'zero-th\' law can be described:
----------------------------------------------------------------------------------------------------------
*If two bodies are in thermal equilibrium with a third, all are in thermal equilibrium with each other.*
----------------------------------------------------------------------------------------------------------
This second statement, in turn, gives rise to a definition of
Temperature (T):
---------------------------------------------------------------------------------------------------------------------------------------
*Temperature is the only thing that is the same between two otherwise unlike bodies that are in thermal equilibrium with each other.*
---------------------------------------------------------------------------------------------------------------------------------------
### Second Law
This law states that heat will never flow spontaneously from a cold
object to a hot object. Equivalently, the entropy of an isolated system
never decreases, where entropy is defined as

$S = k_B \cdot \ln(\Omega)$
where $k_B$ is the Boltzmann constant
($k_B = 1.380658 \cdot 10^{-23} \mbox{ kg m}^2 \mbox{ s}^{-2} \mbox{ K}^{-1}$)
and $\Omega$ is the multiplicity, i.e. the number of all possible
microstates of the system.
This was the statistical definition of entropy, there is also a
\"macroscopic\" definition:
$S = \int \frac{\mathrm{d}Q}{T}$
where *T* is the temperature and d*Q* is the increment of heat added
(reversibly) to the system.
### Third Law
The third law states that a temperature of absolute zero cannot be
reached.
# Temperature Scales
There are several different scales used to measure temperature. Those
you will most often come across in physics are degrees Celsius and
kelvins.
Celsius temperatures use the symbol **Θ**. The symbol for degrees
Celsius is **°C**. Kelvin temperatures use the symbol **T**. The symbol
for kelvins is **K**.
### The Celsius Scale
The Celsius scale is based on the melting and boiling points of water.
The temperature of freezing water is 0 °C. This is called the *freezing
point*.
The temperature of boiling water is 100 °C. This is called the *steam
point*.
The Celsius scale is sometimes known as \'Centigrade\', but the CGPM
chose *degrees Celsius* from among the three names then in use way back
in 1948, and centesimal and centigrade should no longer be used. See
Wikipedia for more details.
### The Kelvin Scale
The Kelvin scale is based on a more fundamental temperature than the
melting point of ice: absolute zero (equivalent to −273.15 °C), the lowest
possible temperature anything could be cooled to, where the kinetic energy
of *any* system is at its minimum. The Kelvin scale grew out of the
observation that, for a fixed sample of gas, PV/T is a constant: if the
temperature (T) is reduced, the pressure (P) exerted by the gas at a fixed
volume (V) falls in direct proportion. This is a simple experiment and can
be carried out in most school labs. Extrapolating, gases would be expected
to exert no pressure at about −273 °C. (In fact all gases condense into
liquids or solids at a somewhat higher temperature.)
Although the Kelvin scale starts at a different point to Celsius, its
units are of exactly the same size.
Therefore:
---------------------------------------------------------------------------------
*Temperature in kelvins (*K*) = Temperature in degrees Celsius (*°C*) + 273.15*
---------------------------------------------------------------------------------
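The conversion is easy to automate; the short Python snippet below simply applies the rule above:

```python
def celsius_to_kelvin(theta):
    """T (K) = theta (degrees Celsius) + 273.15"""
    return theta + 273.15

def kelvin_to_celsius(T):
    """theta (degrees Celsius) = T (K) - 273.15"""
    return T - 273.15

print(celsius_to_kelvin(25.0))  # 298.15 (roughly room temperature)
print(kelvin_to_celsius(0.0))   # -273.15 (absolute zero)
```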
# Specific Latent Heat
Energy is needed to break bonds when a substance changes state. This
energy is sometimes called the *latent heat*. Temperature remains
constant during changes of state.
To calculate the energy needed for a change of state, the following
equation is used:
---------------------------------------------------------------------------------------------
*Heat transferred, ΔQ (*J*) = mass, m (*kg*) × specific latent heat, L (*J*/*kg*)*
---------------------------------------------------------------------------------------------
The specific latent heat, *L*, is the energy needed to change the state
of 1 kg of the substance without changing the temperature.
The latent heat of *fusion* refers to melting. The latent heat of
*vapourisation* refers to boiling.
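A short worked example; the 0.5 kg of ice and the value of the latent heat of fusion of water (roughly 3.34 × 10^5^ J/kg) are assumptions used only for illustration:

```python
def heat_for_phase_change(mass, latent_heat):
    """Q = m * L; the temperature stays constant while the state changes."""
    return mass * latent_heat

# Melting 0.5 kg of ice, taking the latent heat of fusion of water as about 3.34e5 J/kg:
print(heat_for_phase_change(0.5, 3.34e5))  # 167000.0 J
```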
# Specific Heat Capacity
The specific heat capacity is the energy needed to raise the temperature
of 1 kg of a substance by 1 K (or 1 °C).
The change in temperature of a substance being heated or cooled depends
on the mass of the substance and on how much energy is put in. However,
it also depends on the properties of that given substance. How this
affects temperature variation is expressed by the substance\'s *specific
heat capacity* (*c*). This is measured in J/(kg·K) in SI units.
------------------------------------------------------------------------------------------------------------------------------------
*Change in internal energy,*Δ*U (*J*) = mass, m (*kg*) x specific heat capacity, c (*J*/*(kg·K)*) x temperature change,*Δ*T (*K*)*
------------------------------------------------------------------------------------------------------------------------------------
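A short worked example; the mass, the temperature rise and the value of c for water (roughly 4200 J/(kg·K)) are assumed for illustration:

```python
def heat_for_temperature_change(mass, c, delta_T):
    """Delta U = m * c * Delta T (SI units: kg, J/(kg K), K)."""
    return mass * c * delta_T

# Warming 2 kg of water (c of water is roughly 4200 J/(kg K)) by 30 K:
print(heat_for_temperature_change(2.0, 4200.0, 30.0))  # 252000.0 J
```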
|
# Physics Study Guide/Theories of Electricity
## Intro
All atoms are made of smaller particles: electrons, neutrons and protons.
At the center of each atom is a nucleus of neutrons and protons, which is
surrounded by electrons that can be pictured as orbiting it on circular
paths.
## Charged Particles
The three main subatomic particles have very different properties, the
most important of which are:
Particle Charge Mass (kg)
---------- ---------------- ----------------
Electron negative ( - ) 9.11 x 10^-31^
Proton positive ( + ) 1.67 x 10^-27^
Neutron zero ( 0 ) 1.67 x 10^-27^
## Charge
Most objects are electrically neutral, i.e. the sum of their electric
charges equals zero. However, when an object loses or gains electrons
it becomes positively or negatively charged, respectively:
:;object + electron → negatively charged object
:;object - electron → positively charged object
A positively charged object has a quantity of charge *+Q* and electric
field lines radiate outward. A negatively charged object has a quantity
of charge *-Q* and electric field lines radiate inward.
**Like charges will repel each other and opposite charges will
attract**, i.e. negatively charged objects attract positively charged
objects and vice versa.
## Electrostatic/Coulomb Force
The force between 2 stationary charges is called the electrostatic force
or Coulomb force.
If two charges, *Q~1~* and *Q~2~*, are at a distance *r* from each
other, they will interact with a force:
:; $F = k_e \frac{Q_1 Q_2}{r^2}$ ,
where *k~e~* is Coulomb\'s constant (*k~e~* = 8.99 × 10^9^ N m^2^
C^−2^). The force of interaction between the charges is attractive if
the charges are opposite signed and repulsive if like signed.
The electrostatic force from a charge will be experienced by any other
charge around it, and the strength and direction of this force at all
positions around the charge is known as the electric field E. The
electric field is directly proportional to the force:
: $E = \frac {F}{Q}$
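The two formulas can be combined in a short worked example in Python; the charge values and the separation below are invented for illustration:

```python
k_e = 8.99e9  # Coulomb's constant, N m^2 / C^2

def coulomb_force(Q1, Q2, r):
    """Magnitude of the electrostatic force between two point charges (SI units)."""
    return k_e * Q1 * Q2 / r**2

def electric_field(F, Q):
    """Field strength felt by a charge Q experiencing a force F: E = F / Q."""
    return F / Q

# Two 1 microcoulomb charges 10 cm apart (illustrative values):
F = coulomb_force(1e-6, 1e-6, 0.10)
print(F)                        # ~0.899 N (repulsive, since both charges are positive)
print(electric_field(F, 1e-6))  # ~8.99e5 N/C
```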
## Electromotive Force
When a charge moves through a magnetic field (say, one directed from left
to right) perpendicular to the field lines, the field exerts a force on
the charge that pushes it up or down: a positive charge is pushed one way
and a negative charge the other.
: $\mathbf{F_B} = Q(\mathbf{v} \times \mathbf{B})$
## Electromagnetic Force
For a moving charge the sum of Electrostatic Force and the Electromotive
Force gives Electromagnetic Force acting on the charge
: $\mathbf{F}_{EB} = Q\mathbf{E} + Q\,\mathbf{v} \times \mathbf{B} = Q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$
The electrostatic force drives current along the direction of the electric
field, while the electromotive (magnetic) force acts perpendicular to that
current.
The electromagnetic force is therefore associated with an electric field
along one direction and a magnetic field perpendicular to it.
An electromagnetic force may also be produced by electromagnetic
induction, in which a changing magnetic field induces a current.
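A sketch of the full vector calculation in Python, assuming a proton moving along x through a field along z; all numerical values are illustrative:

```python
def lorentz_force(q, E, v, B):
    """F = q(E + v x B), with E, v and B given as 3-component lists (SI units)."""
    v_cross_B = [v[1] * B[2] - v[2] * B[1],
                 v[2] * B[0] - v[0] * B[2],
                 v[0] * B[1] - v[1] * B[0]]
    return [q * (E[i] + v_cross_B[i]) for i in range(3)]

# A proton (q = 1.602e-19 C) moving along +x at 1e5 m/s through a 0.5 T field
# along +z, with no electric field present:
print(lorentz_force(1.602e-19, [0.0, 0.0, 0.0], [1e5, 0.0, 0.0], [0.0, 0.0, 0.5]))
# [0.0, -8.01e-15, 0.0] N -> the force points along -y
```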
## Electricity and Conductors
In all conductors, charges are free to move in any direction. If there is
an electric force
:; F~E~ = Q E
it acts like a pressure (F~E~ / A over the conductor's cross-section) that
pushes the charges so they drift along the conductor in one direction,
producing a flow of charge.
The \"push\" supplied by the electric force is called the voltage, and the
resulting ordered flow of charge is called the current.
If Voltage is V and Current is I, then the ratio of Current over Voltage
gives the Conductance of the Conductor and the ratio of Voltage over
Current gives the Resistance of the conductor.
:; $G = \frac{I}{V}$\
:; $R = \frac{V}{I}$
Therefore, *every conductor has both a resistance and a conductance*.
For a straight conductor of length l and cross-sectional area A, made of a
material with conductivity σ, the conductance is
:;G = σ $\frac{A}{l}$
From above,
$$G = \frac{I}{V}$$ = σ $\frac{A}{l}$
so the conductivity of the material can be calculated as
:;σ = $\frac{I}{V} \frac{l}{A}$
## Resistor
As shown above, every straight-line conductor has a resistance R equal to
the ratio of voltage over current
$$R = \frac {V}{I}$$
:;$I_R = \frac{V}{R}$
A conductor therefore limits the current that flows for a given voltage,
and this can be exploited in an electric circuit to reduce current. In an
electric circuit such a component has the symbol \--\^\^\^\--, a
resistance R measured in ohms (Ω), and is called a resistor.
Resistors can be connected in series or in parallel to increase or
decrease the total resistance.
If there are n resistors connected in a series, the total resistance is
:; $R_t = R_1 + R_2 + ... + R_n$
If there are n resistors connected in parallel, then the total
resistance is
:; $\frac{1}{R_t} = \frac{1}{R_1} + \frac{1}{R_2} + ... + \frac{1}{R_n}$
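These two rules are easy to apply numerically; the resistor values below are invented for illustration:

```python
def series_resistance(resistors):
    """Total resistance of resistors connected in series: R_t = R1 + R2 + ..."""
    return sum(resistors)

def parallel_resistance(resistors):
    """Total resistance of resistors in parallel: 1/R_t = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / R for R in resistors)

print(series_resistance([100.0, 220.0, 330.0]))  # 650.0 ohms
print(parallel_resistance([100.0, 100.0]))       # 50.0 ohms
```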
|
# Physics Study Guide/Physics constants
# Commonly Used Physical Constants
Name Symbol Value Units Relative Uncertainty
------------------------------------------------ ---------------------------------------------------------------------- ------------------------------------------------------------- --------------------------------------------------- ----------------------
Speed of light (in vacuum) $c$ $299\ 792\ 458$ $\mathrm{m}\ \mathrm{s}^{-1}$ (exact)
Magnetic Constant $\mu_0$ $4\pi\times10^{-7}\approx12.566\ 370\ 6\times10^{-7}$ $\mathrm{N}\ \mathrm{A}^{-2}$ (exact)
Electric Constant $\varepsilon_0 = 1/\left( \mu_0c^2 \right)$ $\approx8.854\ 187\ 817\times10^{-12}$ $\mathrm{F}\ \mathrm{m}^{-1}$ (exact)
Newtonian Gravitational Constant                 $G$                                                                      $6.674\ 2(10)\times10^{-11}$                                    $\mathrm{m}^3\ \mathrm{kg}^{-1}\ \mathrm{s}^{-2}$   $1.5\times10^{-4}$
Planck's Constant                                $h$                                                                      $6.626\ 069\ 3(11)\times10^{-34}$                               $\mathrm{J}\ \mathrm{s}$                            $1.7\times10^{-7}$
Elementary charge $e$ $1.602\ 176\ 53(14)\times10^{-19}$ $\mathrm{C}$ $8.5\times10^{-8}$
Mass of the electron $m_e$ $9.109\ 382\ 6(16)\times10^{-31}$ $\mathrm{kg}$ $1.7\times10^{-7}$
Mass of the proton $m_p$ $1.672\ 621\ 71(29)\times10^{-27}$ $\mathrm{kg}$ $1.7\times10^{-7}$
Fine structure constant $\alpha=\frac{e^2}{4 \pi \varepsilon_0 \hbar c}$ $7.297\ 352\ 568(24)\times10^{-3}$ dimensionless $3.3\times10^{-9}$
Molar gas constant                               $R$                                                                      $8.314\ 472(15)$                                                $\mathrm{J}\ \mathrm{mol}^{-1}\ \mathrm{K}^{-1}$    $1.7\times10^{-6}$
Boltzmann\'s constant $k$ $1.380\ 650\ 5(24)\times10^{-23}$ $\mathrm{J}\ \mathrm{K}^{-1}$ $1.8\times10^{-6}$
Avogadro\'s Number $N_{\text{A}}, L$ $6.022\ 141\ 5(10)\times10^{23}$ $\mathrm{mol}^{-1}$ $1.7\times10^{-7}$
Rydberg constant $R_\infty$ $10\ 973\ 731.568\ 525(73)$ $\mathrm{m}^{-1}$ $6.6\times10^{-12}$
Standard acceleration of gravity $g$ $9.806\ 65$ $\mathrm{m}\ \mathrm{s}^{-2}$ defined
Atmospheric pressure $\mathrm{atm}$ $101\ 325$ $\mathrm{Pa}$ defined
Bohr Radius                                      $a_0$                                                                    $0.529\ 177\ 208\ 59(36)\times10^{-10}$                         $\mathrm{m}$                                        $6.8\times10^{-10}$
Electron Volt $eV$ $1.602\ 176\ 53(14)\times10^{-19}$ $\mathrm{J}$ $8.7\times10^{-8}$
Luminous efficacy of monochromatic radiation $K_{cd}$ $683$ $\mathrm{lm/W}$ (exact)
hyperfine transition frequency of Cs-133 $\Delta\nu_\text{Cs}$ $9\ 192\ 631\ 770$ $\mathrm{Hz}$ (exact)
Reduced Planck constant $\hbar = h/2\pi$ $1.054\ 571\ 817\times{10}^{-34}$ $\mathrm{J}\cdot \mathrm{s}$ (exact)
atomic mass of Carbon 12 $m({}^{12}\text{C})$ $1.992\ 646\ 879\ 92(60)\times {10}^{-26}$ $\mathrm{kg}$
molar mass of Carbon-12 $M({}^{12}\text{C}) = N_{\text{A}} m({}^{12}\text{C})$ $11.999\ 999\ 9958(36)\times{10}^{-3}$ $\mathrm{kg\cdot {mol}^{-1}}$
atomic mass constant $m_{\text{u}} = m({}^{12}\text{C}) / 12 = 1\,\text{Da}$ $1.660\ 539\ 066\ 60(50)\times{10}^{-27}$ $\mathrm{kg}$
molar mass constant $M_{\text{u}} = M({}^{12}\text{C}) / 12 = N_{\text{A}} m_{\text{u}}$ $0.999\ 999\ 999\ 65(30)\times{10}^{-3}$ $\mathrm{kg\cdot {mol}^{-1}}$
molar volume of silicon $V_{m}(\mathrm{Si})$ $1.205\ 883\ 199(60)\times{10}^{-5}$ $\mathrm{m^3\cdot {mol}^{-1}}$
molar Planck constant                            $N_{\text{A}} h$                                                         $3.990\ 312\ 712\ldots\times{10}^{-10}$                         $\mathrm{J\cdot {Hz}^{-1}\cdot{mol}^{-1}}$
Stefan-Boltzmann constant                        $\sigma = \pi^2 k_B^4 / (60 \hbar^3 c^2)$                                $5.670\ 374\ 419\ldots \times {10}^{-8}$                        $\mathrm{W\cdot m^{-2}\cdot K^{-4}}$
first radiation constant                         $c_1 = 2 \pi h c^2$                                                      $3.741\ 771\ 852\ldots\times{10}^{-16}$                         $\mathrm{W\cdot m^2}$
first radiation constant for spectral radiance $c_{\text{1L}} = 2 h c^2 / sr$[^1] $1.191\ 042\ 972\ 397\ 188\ 414\ 079\ 4892\times{10}^{-16}$ $\mathrm{W\cdot m^2 {sr}^{-1}}$
second radiation constant $c_2 = h c / k_B$ $1.438\ 776\ 877\ldots\times{10}^{-2}$ $\mathrm{m\cdot K}$
Wien wavelength displacement constant $b$ $2.897\ 771\ 955\ldots \times {10}^{-3}$ $\mathrm{m\cdot K}$
Wien frequency displacement constant             $b'$                                                                     $5.878\ 925\ 757\ \times {10}^{10}$                             $\mathrm{Hz \cdot K^{-1}}$
Wien entropy displacement constant $b_\text{entropy}$ $3.002\ 916\ 077\ldots \times {10}^{-3}$ $\mathrm{m\cdot K}$
Faraday constant $F = N_{\text{A}} e$[^2] $96\ 485.332\ 123\ 310\ 0184$ $\mathrm{C \cdot {mol}^{-1}}$
: Uncertainty should be read as 1.234(56) = 1.234 ± 0.056
## To Be Merged Into Table
This list is prepared in the format
- Constant (symbol) : value
------------------------------------------------------------------------
- Coulomb\'s Law Constant (**k**) : 1/(4 π ε~0~) *=* 9.0 × 10^9^
N·m^2^/C^2^
- Faraday constant (**F**) : 96,485 C·mol^−1^
- Mass of a neutron (**m~n~**) : 1.67495 × 10^−27^ kg
- Mass of Earth : 5.98 × 10^24^ kg
- Mass of the Moon : 7.35 × 10^22^ kg
- Mean radius of Earth : 6.37 × 10^6^ m
- Mean radius of the Moon : 1.74 × 10^6^ m
- Dirac\'s Constant (**$\hbar$**) : $h/(2\pi)$ = 1.05457148 × 10^−34^
J·s
- Speed of sound in air at STP : 3.31 × 10^2^ m/s
- Unified Atomic Mass Unit (**u**) : 1.66 × 10^−27^ kg
  Item     Proton   Neutron   Electron
  -------- -------- --------- ------------
  Mass     1        1         Negligible
  Charge   +1       0         -1
# See Also
## Wiki-links
- Wikipedia Article
## External Links
- NIST Physics
Lab
[^1]:
[^2]:
|
# Physics Study Guide/Frictional Coefficients
## Approximate Coefficients of Friction
  Material                   Kinetic   Static
  -------------------------- --------- --------
  Rubber on concrete (dry)   0.68      0.90
  Rubber on concrete (wet)   0.58      --
  Rubber on asphalt (dry)    0.72      0.68
  Rubber on asphalt (wet)    0.53      --
  Rubber on ice              0.15      0.15
  Waxed ski on snow          0.05      0.14
  Wood on wood               0.30      0.42
  Steel on steel             0.57      0.74
  Copper on steel            0.36      0.53
  Teflon on teflon           0.04      --
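As an illustration of how the table is used, the friction force on a horizontal surface is the coefficient multiplied by the normal force m·g; the 10 kg wooden block below is an invented example that uses the wood-on-wood coefficients above:

```python
g = 9.81  # standard acceleration of gravity, m/s^2

def friction_force(mu, mass):
    """F_friction = mu * N, with N = m * g on a horizontal surface."""
    return mu * mass * g

# A 10 kg wooden block resting on a wooden surface (coefficients from the table):
print(friction_force(0.42, 10.0))  # ~41.2 N must be exceeded to start it moving (static)
print(friction_force(0.30, 10.0))  # ~29.4 N keeps it sliding at constant speed (kinetic)
```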
|
# Physics Study Guide/Greek alphabet
# About the *Common uses in Physics*
While these are indeed common usages, it should be pointed out that
there are many other usages and that other letters are used for the same
purpose. The reason is quite simple: there are only so many symbols in
the Greek and Latin alphabets, and scientists and mathematicians
generally do not use symbols from other languages. It is a common trap
to associate a symbol exclusively with some particular meaning, rather
than learning and understanding the physics and relations behind it.
  Capital   Lower case   Name      Common use in Physics
  --------- ------------ --------- -----------------------------------------------------------------------------------------------------------------
  Α         α            alpha     Angular acceleration; linear expansion coefficient; alpha particle (helium nucleus); fine structure constant
  Β         β            beta      Beta particle (high-energy electron); sound intensity
  Γ         γ            gamma     Gamma ray (high-energy EM wave); ratio of heat capacities (in an ideal gas); relativistic correction factor; shear strain
  Δ         δ            delta     Δ = "change in"; δ = "infinitesimal change in"; also used to denote the Dirac delta function
  Ε         ε            epsilon   Emissivity; strain (direct, e.g. tensile or compression); permittivity; EMF
  Ζ         ζ            zeta      (no common use)
  Η         η            eta       Viscosity; energy efficiency
  Θ         θ            theta     Angle (°, rad); temperature
  Ι         ι            iota      The lower case ι is rarely used; Ι is sometimes used for the identity matrix or the moment of inertia (not to be confused with the Roman character i)
  Κ         κ            kappa     Spring constant; dielectric constant
  Λ         λ            lambda    Wavelength; thermal conductivity; constant; eigenvalue of a matrix; linear density
  Μ         μ            mu        Coefficient of friction; electrical mobility; reduced mass; permeability
  Ν         ν            nu        Frequency
  Ξ         ξ            xi        Damping coefficient
  Ο         ο            omicron   (no common use)
  Π         π            pi        Product symbol Π; circle number π = 3.14159...
  Ρ         ρ            rho       Volume density; resistivity
  Σ         σ            sigma     Sum symbol Σ; Stefan-Boltzmann constant; electrical conductivity; uncertainty; stress (direct, e.g. tensile or compression); surface density
  Τ         τ            tau       Torque; tau particle (a lepton); time constant; shear stress
  Υ         υ            upsilon   Mass-to-light ratio
  Φ         φ            phi       Magnetic/electric flux; angle (°, rad)
  Χ         χ            chi       Rabi frequency (lasers); susceptibility
  Ψ         ψ            psi       Wave function
  Ω         ω            omega     Ω ohms (unit of electrical resistance); ω angular velocity

: **Greek Alphabet**
# See Also
Greek alphabet on
Wikipedia
Greek letters used in mathematics, science, and
engineering
|
# Physics Study Guide/Vectors and scalars
**Vectors** are quantities that are characterized by having both a
numerical **quantity** (called the \"magnitude\" and denoted as \|*v*\|)
and a **direction**. Velocity is an example of a vector; it describes
the time rate of change of position with a numerical quantity (meters per
second) as well as indicating the direction of movement.
The definition of a vector is any quantity that adds according to the
parallelogram law (there are some physical quantities that have
magnitude and direction that are not vectors).
**Scalars** are quantities in physics that have **no direction**. Mass
is a scalar; it can describe the quantity of matter with units
(kilograms) but does not describe any direction.
## Multiplying vectors and scalars
- A **scalar** times a **scalar** gives a **scalar** result.
- A **vector** scalar-multiplied by a **vector** gives a **scalar**
result (called the dot-product).
- A **vector** cross-multiplied by a **vector** gives a **vector**
result (called the cross-product).
- A **vector** times a **scalar** gives a **vector** result.
## Frequently Asked Questions about Vectors
##### When are scalar and vector compositions essentially the same?
**Answer:** When multiple vectors point in the same direction, their
magnitudes can simply be added; since no directions need to be combined,
the scalar and vector compositions give the same result.
##### What is a \"dot-product\"? (work when force not parallel to displacement)
!A Man walking up a
hill **Answer:**
Let\'s take gravity as our force. If you jump out of an airplane and
fall you will pick up speed. (for simplicity\'s sake, let\'s ignore air
drag). To work out the kinetic energy at any point you simply multiply
the *value* of the force caused by gravity by the *distance* moved in
the direction of the force. For example, a 180 N boy falling a distance
of 10 m will have 1800 J of extra kinetic energy. We say that the boy
has had 1800 J of work done on him by the force of gravity.
Notice that energy is *not* a vector. It has a value but no direction.
Gravity and displacement are vectors. They have a value plus a
direction. (In this case, their directions are down and down
respectively) The reason we can get a scalar energy from vectors gravity
and displacement is because, in this case, they happen to point in the
same direction. Gravity acts downwards and displacement is also
downwards.
When two vectors point in the same direction, you can get the scalar
product by just multiplying the *value* of the two vectors together and
ignoring the direction.
But what happens if they don\'t point in the same direction?
Consider a man walking up a hill. Obviously it takes energy to do this
because you are going against the force of gravity. The steeper the
hill, the more energy it takes every step to climb it. This is something
we all know unless we live on a salt lake.
In a situation like this we can still work out the work done. In the
diagram, the green lines represent the displacement. To find out how
much work *against* gravity the man does, we work out the *projection*
of the displacement along the line of action of the force of gravity. In
this case it\'s just the y component of the man\'s displacement. This is
where the cos θ comes in. θ is merely the angle between the displacement
vector and the force vector.
If the two vectors do not point in the same direction, you can still get
the scalar product by multiplying the magnitude of one vector by the
projection of the other onto its direction. Thus:
`{{PSG/eq|<math>\vec{a}\cdot\vec{b}\equiv \|\vec{a}\|\ \|\vec{b}\|\ \cos\theta</math>}}`{=mediawiki}
There is another method of defining the dot product which relies on
components.
--------------------------------------------------
$\vec{a}\cdot\vec{b}\equiv a_xb_x+a_yb_y+a_zb_z$
--------------------------------------------------
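Both definitions give the same number, as the short Python check below shows; the 180 N weight comes from the example above, while the displacement (8, 6, 0) m up the hill is invented for illustration:

```python
import math

def dot(a, b):
    """Component form of the dot product: a.b = ax*bx + ay*by + az*bz."""
    return sum(ai * bi for ai, bi in zip(a, b))

def magnitude(v):
    """Length of a vector: |v| = sqrt(v.v)."""
    return math.sqrt(dot(v, v))

# Gravity on a 180 N weight (acting straight down, -y) during a walk up a hill
# with an assumed displacement of (8, 6, 0) m:
force = (0.0, -180.0, 0.0)
displacement = (8.0, 6.0, 0.0)

work = dot(force, displacement)
print(work)  # -1080.0 J: gravity does negative work on the climber

# The same number from |a| |b| cos(theta):
cos_theta = work / (magnitude(force) * magnitude(displacement))
print(magnitude(force) * magnitude(displacement) * cos_theta)  # -1080.0 J again
```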
##### What is a \"cross-product\"? (Force on a charged particle in a magnetic field)
**Answer:** Suppose there is a charged particle moving in a constant
magnetic field. According to the laws of electromagnetism, the particle
is acted upon by a force called the Lorentz force. If this particle is
moving from left to right at 30 m/s and the field is 30 tesla pointing
straight down, perpendicular to the particle\'s motion, the particle
experiences a force of 900 newtons per coulomb of its charge, directed at
right angles to both the velocity and the field, so it curves around in a
circle. This is because the magnetic part of the Lorentz force is
calculated with a cross-product. A cross product can be calculated simply
using the angle between the two vectors and your right hand. If the
vectors point parallel or 180° from each other, it\'s simple: the
cross-product is zero. If they are exactly perpendicular, the
cross-product has a magnitude equal to the product of the two magnitudes.
For all angles in between, the following formula is used:
--------------------------------------------------------------------------------------------------
$\left\|\vec{a}\times\vec{b}\right\| = \left\|\vec{a}\right\|\ \left\|\vec{b}\right\|\sin\theta$
--------------------------------------------------------------------------------------------------
!The right-hand rule: point your index finger along the first vector
and your middle finger across the second; your thumb will point in the
direction of the resulting
vector
But if the result is a vector, then what is the direction? That too is
fairly simple, utilizing a method called the \"right-hand rule\".
The right-hand rule works as follows: Place your right-hand flat along
the first of the two vectors with the palm facing the second vector and
your thumb sticking out perpendicular to your hand. Then proceed to curl
your hand towards the second vector. The direction that your thumb
points is the direction that cross-product vector points! Though this
definition is easy to explain visually it is slightly more complicated
to calculate than the dot product.
---------------------------------------------------------------------------------------------
$(a_x,\ a_y,\ a_z)\times(b_x,\ b_y,\ b_z) =(a_yb_z-a_zb_y\ ,\ a_zb_x-a_xb_z\ ,\ a_xb_y-a_yb_x)$
---------------------------------------------------------------------------------------------
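The component formula can be checked against the charged-particle example above; taking "left to right" as +x and "straight down" as −z is an assumption made for this illustration:

```python
def cross(a, b):
    """Component form of the cross product (right-handed coordinate axes)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Charged-particle example from above: v = 30 m/s along +x, B = 30 T along -z.
v = (30.0, 0.0, 0.0)
B = (0.0, 0.0, -30.0)
print(cross(v, B))  # (0.0, 900.0, 0.0): 900 N per coulomb of charge, along +y
```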
##### How to draw vectors that are in or out of the plane of the page (or board)
!How to draw vectors in the plane of the
paper
!Standard symbols of a vector going into or out of a
page
**Answer:** Vectors in the plane of the page are drawn as arrows on the
page. A vector that goes into the plane of the screen is typically drawn
as circles with an inscribed X. A vector that comes out of the plane of
the screen is typically drawn as circles with dots at their centers. The
X is meant to represent the fletching on the back of an arrow or dart
while the dot is meant to represent the tip of the arrow.
|
# Physics Study Guide/Topics
1. Displacement, velocity, and
acceleration
- Vectors and
scalars
2. Force
3. Ropes and
tension
4. Gravity
5. Momentum and collision
force
6. Energy
- Kinetic energy
7. Friction
8. Periodic Motion
9. Torque
10. Circular Motion
- Center of Mass
- Inertia and moment of
inertia
11. Strengths of materials: Stress and
strain
12. Thermodynamics
- Power
13. Gases
- Ideal gases
- Pressure and partial
pressure
14. Fluids
15. Density
16. Laminar flow in ideal
fluids
17. Energy density
18. Real liquids
19. Shear thickening and thinning
fluids
|
# Physics Study Guide/Style Guide
#### Equations
For uniformity in the book a template should be used for embedding
equations. View the template at
Template:PSG/eq and use it as follows
`{{PSG/eq|<math>ax^2+bx+c=0</math>}}`
Which gives
#### Units
Units should be given using the \\mathrm{} command and small spaces \\,
between the units. Vectors should be bold and italicized if they are
variables with the \\boldsymbol{} command.
`{{PSG/eq|<math>\sum\Delta\boldsymbol{p}=0~\mathrm{kg\,m/s}</math>}}`
- Barry N. Taylor (2004), Guide for the Use of the International
  System of Units (SI) (version 2.2). National Institute of
  Standards and Technology, Gaithersburg, MD.
## Global Style Guidelines
- Wikibooks Manual of Style
|
# Anatomy and Physiology of Animals/Chemicals
!original image by
jurvetson
CC-BY{width="400"}
## Objectives
After completing this section, you should know the:
- symbols used to represent elements;
- names of molecules commonly found in animal cells;
- characteristics of ions and electrolytes;
- basic structure of carbohydrates with examples;
- carbohydrates can be divided into mono- di- and poly-saccharides;
- basic structure of fats or lipids with examples;
- basic structure of proteins with examples;
- function of carbohydrates, lipids and proteins in the cell and
animals\' bodies;
- foods which supply carbohydrates, lipids and proteins in animal
diets.
## Elements And Atoms
The elements (simplest chemical substances) found in an animal's body
are all made of basic building blocks or atoms. The most common elements
found in cells are given in the table below with the symbol that is used
to represent them.
Element Symbol
------------- --------
Calcium Ca
Carbon C
Chlorine Cl
Copper Cu
Iodine I
Hydrogen H
Iron Fe
Magnesium Mg
Nitrogen N
Oxygen O
Phosphorous P
Potassium K
Sodium Na
Sulphur S
Zinc Zn
## Compounds And Molecules
A **molecule** is formed when two or more **atoms** join together. A
compound is formed when two or more different elements combine in a
fixed ratio by mass. Note that some atoms are never found alone. For
example **oxygen** is always found as molecules of 2 oxygen atoms
(represented as O~2~).
The table below gives some common compounds.
Compound Symbol
---------------------------------- ---------------
Calcium carbonate CaCO~3~
Carbon dioxide CO~2~
Copper sulphate CuSO~4~
Glucose C~6~H~12~O~6~
Hydrochloric acid HCl
Sodium bicarbonate (baking soda) NaHCO~3~
Sodium chloride (table salt) NaCl
Sodium hydroxide NaOH
Water H~2~O
## Chemical Reactions
Reactions occur when atoms combine or separate from other atoms. In the
process new products with different chemical properties are formed.
Chemical reactions can be represented by **chemical equations**. The
starting atoms or compounds are usually put on the left-hand side of the
equation and the products on the right-hand side.
For example
- H~2~O + CO~2~ gives H~2~CO~3~
- or H~2~O + CO~2~ = H~2~CO~3~
- Water + Carbon dioxide gives Carbonic acid
## Ionization
When some substances dissolve in water, their atoms become charged
particles called **ions**. Some become positively charged ions and others
negatively charged. Ions may carry one, two or sometimes three charges.
The table below shows examples of positively and negatively charged ions
with the number of their charges.
  Positive Ions            Negative Ions
  ------------------------ ----------------
  H^+^ Hydrogen            
  Ca^2+^ Calcium           
  Na^+^ Sodium             
  K^+^ Potassium           
  Mg^2+^ Magnesium         
  Fe^2+^ Iron (ferrous)    
  Fe^3+^ Iron (ferric)     
Positive and negative ions attract one another to hold compounds
together.
Ions are important in cells because they conduct electricity when
dissolved in water. Substances that ionise in this way are known as
**electrolytes**.
The molecules in an animal's body fall into two groups: **inorganic
compounds** and **organic compounds**. The difference between these is
that the first type does not contain **carbon** and the second type
does.
## Organic And Inorganic Compounds
Inorganic compounds include water, sodium chloride, potassium hydroxide
and calcium phosphate.
**Water** is the most abundant inorganic compound, making up over 60% of
the volume of cells and over 90% of body fluids like blood. Many
substances dissolve in water and all the chemical reactions that take
place in the body do so when dissolved in water. Other inorganic
molecules help keep the **acid/base balance (pH)** and concentration of
the blood and other body fluids stable (see Chapter 8).
Organic compounds include **carbohydrates, proteins** and **fats** or
**lipids**. All organic molecules contain carbon atoms and they tend to
be larger and more complex molecules than inorganic ones. This is
largely because each carbon atom can link with four other atoms. Organic
compounds can therefore consist of from one to many thousands of carbon
atoms joined to form chains, branched chains and rings (see diagram
below). All organic compounds also contain hydrogen and they may also
contain other elements.
![](Anatomy_and_physiology_of_animals-organic_compounds.jpg "Anatomy_and_physiology_of_animals-organic_compounds.jpg")
## Carbohydrates
The name "carbohydrate" tells you something about the composition of
these "hydrated carbon" compounds. They contain carbon, hydrogen and
oxygen and like water (H~2~O), there are always twice as many hydrogen
atoms as oxygen atoms in each molecule. Carbohydrates are a large and
diverse group that includes sugars, starches, glycogen and cellulose.
Carbohydrates in the diet supply an animal with much of its energy and
in the animal's body, they transport and store energy.
Carbohydrates are divided into three major groups based on size:
**monosaccharides** (single sugars), **disaccharides** (double sugars)
and **polysaccharides** (multi sugars).
**Monosaccharides** are the smallest carbohydrate molecules. The most
important monosaccharide is glucose which supplies much of the energy in
the cell. It consists of a ring of 6 carbon atoms with oxygen and
hydrogen atoms attached.
**Disaccharides** are formed when 2 monosaccharides join together.
Sucrose (table sugar), maltose, and lactose (milk sugar), are three
important disaccharides. They are broken down to monosaccharides by
digestive enzymes in the gut.
**Polysaccharides** like starch, glycogen and cellulose are formed by
tens or hundreds of monosaccharides linking together. Unlike mono- and
di-saccharides, polysaccharides are not sweet to taste and most do not
dissolve in water.
:\* **Starch** is the main molecule in which plants store the energy
gained from the sun. It is found in grains like barley and roots like
potatoes.
:\* **Glycogen**, the polysaccharide used by animals to store energy, is
found in the liver and the muscles that move the skeleton.
:\* **Cellulose** forms the rigid cell walls of plants. Its structure is
similar to glycogen, but it can't be digested by mammals. Cows and
horses can eat cellulose with the help of bacteria which live in
specialised parts of their gut.
![](Anatomy_and_physiology_of_animals-Polysaccharides.jpg "Anatomy_and_physiology_of_animals-Polysaccharides.jpg")
## Fats
**Fats** or **lipids** are important in the plasma membrane around cells
and form the insulating fat layer under the skin. They are also a highly
concentrated source of energy, and when eaten in the diet provide more
than twice as much energy per gram as either carbohydrates or proteins.
Like carbohydrates, fats contain carbon, hydrogen and oxygen, but unlike
them, there is no particular relationship between the number of hydrogen
and oxygen atoms.
The fats and oils animals eat in their diets are called
**triglycerides** or **neutral fats**. The building blocks of
triglycerides are 3 **fatty acids** attached to a backbone of
**glycerol** (**glycerine**). When fats are eaten the digestive enzymes
break down the molecules into separate fatty acids and glycerol again.
**Fatty acids** are divided into two kinds: **saturated** and
**unsaturated** fatty acids. Saturated fatty acids carry as much hydrogen
as possible and have no double bonds between their carbon atoms, while
unsaturated fatty acids have at least one carbon-carbon double bond and
correspondingly less hydrogen. The fat found in animals' bodies and in
dairy products contains mainly saturated fatty acids and tends to be
solid at room temperature. Fish and poultry fats and plant oils contain
mostly unsaturated fatty acids and are more liquid at room temperature.
**Phospholipids** are lipids that contain a phosphate group. They are
important in the plasma membrane of the cell.
![](Triglyceride.JPG "Triglyceride.JPG")
![](Chain_of_amino_acids.jpg "Chain_of_amino_acids.jpg")
## Proteins
**Proteins** are the third main group of organic compounds in the cell -
in fact, if you dried out a cell, you would find that about 2/3 of the
dry dust you were left with would consist of protein. Like carbohydrates
and fats, proteins contain C, H and O, but they all also contain
**nitrogen**. Many also contain **sulphur** and **phosphorus**.
In the cell, proteins are an important part of the plasma membrane of
the cell, but their most essential role is as **enzymes**. These are
molecules that act as biological catalysts and are necessary for
biochemical reactions to proceed. Protein is also found as **keratin**
in the skin, feathers and hair, in muscles, as well as in antibodies and
some hormones.
Proteins are built up of long chains of smaller molecules called **amino
acids**. There are 20 common types of amino acid and different numbers
of these arranged in different orders create the multitude of individual
proteins that exist in an animal's body.
Long chains of amino acids often link with other amino acid chains and
the resulting protein molecule may twist, spiral and fold up to make a
complex 3-dimensional shape. As an example, see the diagram of the
protein lysozyme below. Believe it or not, this is a small and
relatively simple protein.
![](Protein_conformation.jpg "Protein_conformation.jpg")
It is this shape that determines how proteins behave in cells,
particularly when they are acting as enzymes. If for any reason this
shape is altered, the protein stops working. This is what happens if
proteins are heated or put in a solution that is too acidic or alkaline.
Think what happens to the "white" of an egg when it is cooked. The
"white" contains the protein albumin, which is changed or
"**denatured**" permanently by cooking. The catastrophic effect that
heat has on enzymes is one of the reasons animals die if exposed to high
temperatures.
In the animal's diet, proteins are found in meat (muscle), dairy
products, seeds, nuts and legumes like soya. When the enzymes in the gut
digest proteins they break them down into the separate amino acids,
which are small enough to be absorbed into the blood.
## Summary
- **Ions** are charged particles, and **electrolytes** are solutions
of ions in water.
- **Carbohydrates** are made of carbon with hydrogen and oxygen (in
the same ratio as water) linked together. The cell mainly uses
carbohydrates for energy.
- **Fats** are also made of carbon, hydrogen and oxygen. They are a
powerful energy source, and are also used for insulation.
- **Proteins** are the building materials of the body, and as
**enzymes** make cell reactions happen. They contain nitrogen as
well as carbon, hydrogen and oxygen.
- Many also contain sulphur and phosphorous.
## Worksheet
Worksheet on Chemicals in the
Cell
## Test Yourself
1\. What is the difference between an atom and a molecule?
2\. What is the chemical name for baking soda?
: And its formula?
3\. Write the equation for carbonic acid splitting into water and carbon
dioxide.
4\. A solution of table salt in water is an example of an electrolyte.
: What ions are present in this solution?
5\. What element is always present in proteins but not usually in fats
or carbohydrates?
6\. List three differences between glucose and glycogen.
: 1\.
: 2\.
: 3\.
7\. Which will provide you with the most energy -- one gram of sugar or
one gram of butter?
8\. Why do organic compounds tend to be more complex and larger than
inorganic compounds ?
/Test Yourself Answers/
## Website
- Survey of the living world organic
molecules
A good summary of carbohydrates, fats and proteins.
## Glossary
- Link to
Glossary
|
# Anatomy and Physiology of Animals/Classification
!original image by
R\'Eyes cc
by{width="400"}
## Objectives
After completing this section, you should know:
- how to write the scientific name of animals correctly
- know that animals belong to the Animal kingdom and that this is
divided into phyla, classes, orders, families
- know the definition of a species
- know the phylum and class of the more common animals dealt with in
this course
**Classification** is the process used by scientists to make sense of
the 1.5 million or so different kinds of living organisms on the planet.
It does this by describing, naming and placing them in different groups.
As veterinary nurses you are mainly concerned with the Animal Kingdom
but don't forget that animals rely on the Plant Kingdom for food to
survive. Also, many of the diseases that affect animals are caused by
members of the other kingdoms: fungi, bacteria and single-celled organisms.
## Naming And Classifying Animals
There are more than 1.5 million different kinds of living organism on
Earth ranging from small and simple bacteria to large, complex mammals.
From the earliest time that humans have studied the natural world they
have named these living organisms and placed them in different groups on
the basis of their similarities and differences.
## Naming Animals
Of course we know what a cat, a dog and a whale are but, in some
situations using the common names for animals can be confusing. Problems
arise because people in different countries, and even sometimes in the
same country, have different common names for the same animals. For
example a cat can be a chat, a Katze, gato, katt, or a moggie, depending
on which language you use. To add to the confusion sometimes the same
name is used for different animals. For example, the name 'gopher' is
used for ground squirrels, rodents (pocket gophers), for moles and in
the south-eastern United States for a turtle. This is the reason why all
animals have been given an official **scientific** or **binomial name**.
Unfortunately these names are always in Latin. For example:
- Common rat: *Rattus rattus*
- Human: *Homo sapiens*
- Domestic cat: *Felis domesticus*
- Domestic dog: *Canis familiaris*
As you can see from the above there are certain rules about writing
scientific names:
- They always have **2 parts** to them.
- The first part is the **genus** name and is always written with a
**capital** first letter.
- The second name is the **species** name and is always written in
**lower case**.
- The name is always **underlined** or printed in **italics**.
The first time you refer to an organism you should write the whole name
in full. If you need to keep referring to the same organism you can then
abbreviate the genus name to just the initial. Thus "*Canis familiaris*"
becomes "*C. familiaris*" the second and subsequent times you refer to
it.
## Classification Of Living Organisms
To make some sense of the multitude of living organisms they have been
placed in different groups. The method that has been agreed by
biologists for doing this is called the **classification system**. The
system is based on the assumption that the process of evolution has,
over the millennia, brought about slow changes that have converted
simple one-celled organisms to complex multi-celled ones and generated
the earth's incredible diversity of life forms. The classification
system attempts to reflect the evolutionary relationships between
organisms.
Initially this classification was based only on the appearance of the
organism. However, the development of new techniques has advanced our
scientific knowledge. The light microscope and later the electron
microscope have enabled us to view the smallest structures, and now
techniques for comparing DNA have begun to clarify still further the
relationships between organisms. In the light of the advances in
knowledge the classification has undergone numerous revisions over time.
At present most biologists divide the living world into 5 kingdoms,
namely:
- bacteria
- protists
- fungi
- plants
- animals
We are concerned here almost entirely with the **Animal Kingdom**.
However, we must not forget that bacteria, protists, and fungi cause
many of the serious diseases that affect animals, and all animals rely
either directly or indirectly on the plant world for their nourishment.
## The Animal Kingdom
So what are animals? If we were suddenly confronted with an animal we
had never seen in our lives before, how would we know it was not a plant
or even a fungus? We all intuitively know part of the answer to this.
Animals:
- eat organic material (plants or other animals)
- move to find food
- take the food into their bodies and then digest it
- and most reproduce by fertilizing eggs by sperm
If you were tempted to add that animals are furry, run around on four
legs and give birth to young that they feed on milk you were thinking
only of mammals and forgetting temporarily that frogs, snakes and
crocodiles, birds as well as fish, are also animals.
These are all members of the group called the **vertebrates** (or
animals with a backbone) and mammals make up only about 8% of this
group. The diagram on the next page shows the percentage of the
different kinds of vertebrates.
![](Proportions_of_different_kinds_of_vertebrate.JPG "Proportions_of_different_kinds_of_vertebrate.JPG")
However, the term animal includes much more than just the Vertebrates.
In fact this group makes up only a very small portion of all animals.
Take a look at the diagram below, which shows the size of the different
groups of animals in the Animal Kingdom as proportions of the total
number of different animal species. Notice the small size of the segment
representing vertebrates! All the other animals in the Animal Kingdom
are animals with no backbone, or **invertebrates**. This includes the
worms, sea anemones, starfish, snails, crabs, spiders and insects. As
more than 90% of the invertebrates are insects, no wonder people worry
that insects may take over the world one day!
![](Fraction_of_vertebrates_within_the_animal_kingdom.jpg "Fraction_of_vertebrates_within_the_animal_kingdom.jpg")
## The Classification Of Vertebrates
As we have seen above the Vertebrates are divided into 5 groups or
classes namely:
- Fish
- Amphibia (frogs and toads)
- Reptiles (snakes and crocodiles)
- Birds
- Mammals
These classes are all based on similarities. For instance all mammals
have a similar skeleton, hair on their bodies, are warm bodied and
suckle their young.
The class Mammalia (the mammals) contains 3 **subclasses**:
- **Monotremes** (Duck billed platypus and the echidna)
- **Marsupials** (animals like the kangaroo with pouches)
- **True mammals** (with a placenta)
Within the subclass containing the true mammals, there are groupings
called **orders** that contain mammals that are more closely similar or
related, than others. Examples of six mammalian orders are given below:
- Rodents (Rodentia) (rats and mice)
- Carnivores (Carnivora) (cats, dogs, bears and seals)
- Even-toed grazers (Artiodactyla) (pigs, sheep, cattle, antelopes)
- Odd-toed grazers (Perissodactyla) (horses, donkeys, zebras)
- Marine mammals (Cetacea) (whales, sea cows)
- Primates (monkeys, apes, humans)
Within each order there are various **families**. For example within the
carnivore mammals are the families:
- Canidae (dog-like carnivores)
- Felidae (cat-like carnivores)
Even at this point it is possible to find groupings that are more
closely related than others. These groups are called **genera**
(singular genus). For instance within the cat family Felidae is the
genus Felis containing the cats, as well as genera containing panthers,
lynxes, and sabre toothed tigers!
The final groups within the system are the **species**. The definition
of a species is a **group of animals that can mate successfully and
produce fertile offspring**. This means that all domestic cats belong to
the species *Felis domesticus*, because all breeds of cat whether
Siamese, Manx or ordinary House hold cat can cross breed. However,
domestic cats can not mate successfully with lions, tigers or jaguars,
so these are placed in separate species, e.g. *Felis leo, Felis tigris
and Felis onca*.
Even within the same species, there can be animals with quite wide
variations in appearance that still breed successfully. We call these
different **breeds, races** or **varieties**. For example there are many
different breeds of dogs from Dalmatian to Chihuahua and of cats, from
Siamese to Manx and domestic short-hairs, but all can cross breed. Often
these breeds have been produced by **selective breeding**, but varieties
can arise in the wild when groups of animals are separated by a mountain
range or sea and have developed different characteristics over long
periods of time.
To summarise, the classification system consists of:
The **A**nimal **K**ingdom which is divided into
**P**hyla which are divided into
**C**lasses which are divided into
**O**rders which are divided into
**F**amilies which are divided into
**G**enera which are divided into
**S**pecies.
"**Kings Play Cricket On Flat Green Surfaces**" OR "**Kindly Professors
Cannot Often Fail Good Students**" are just two of the phrases students
use to remind themselves of the order of these categories - on the other
hand you might like to invent your own.
## Summary
- The **scientific name** of an animal has two parts, the **genus**
and the **species**, and must be written in **italics** or
**underlined**.
- Animals are divided into **vertebrates** and **invertebrates**.
- The classification system has groupings called **phyla**,
**classes**, **orders**, **families**, **genera** and **species**.
- Furry, milk-producing animals are all in the class **Mammalia.**
- Members within a **species** can mate and produce fertile offspring.
- Sub-groups within a species include **breeds, races** and
**varieties**.
## Worksheet
Work through the exercises in this Classification
Worksheet to help
you learn how to write scientific names and classify different animals.
## Test Yourself
1a) True or False. Is this name written correctly? trichosurus
Vulpecula.
1b) What do you need to change?
2\. Rearrange these groups from the biggest to the smallest:
: a\) cars \| diesel cars \| motor vehicles \| my diesel Toyota \|
transportation
```{=html}
<!-- -->
```
: b\) Class \| Species \| Phylum \| Genus \| Order \| Kingdom \|
Family
## Websites
### Classification
- <http://www.mcwdn.org/Animals/AnimalClassQuiz.html> Animal
classification quiz
In fact much more than that. There is an elementary cell biology and
classification quiz but the best thing about this website are the links
to tables of characteristics of the different animal groups, for animals
both with and without backbones.
- <http://animaldiversity.ummz.umich.edu/site/index.html> Animal
diversity web
Careful! You could waste all day exploring this wonderful website. Chose
an animal or group of animals you want to know about and you will see
not only the classification but photos and details of distribution,
behaviour and conservation status etc.
- <http://www.indianchild.com/animal_kingdom.htm> Indian child
Nice clear explanation of the different categories used in the
classification of animals.
## Glossary
- Link to
Glossary
|
# Anatomy and Physiology of Animals/The Cell
!original image by pong cc
by{width="400"}
## Objectives
After completing this section, you should know:
:\*that cells can be of different shapes and sizes
:\*the role and function of the plasma membrane; cytoplasm, ribosomes,
rough endoplasmic reticulum; smooth endoplasmic reticulum, mitochondria,
golgi bodies, lysosomes, centrioles and the nucleus
:\*the structure of the plasma membrane
:\*that substances move across the plasma membrane by passive and active
processes
:\*that passive processes include diffusion, osmosis and facilitated
diffusion and active processes include active transport, pinocytosis,
phagocytosis and exocytosis
:\*what the terms hypotonic, hypertonic isotonic and haemolysis mean
:\*that the nucleus contains the chromosomes formed from DNA
:\*that mitosis is the means by which ordinary cells divide
:\*the main stages of mitosis
:\*that meiosis is the process by which the chromosome number is halved
when ova and sperm are formed
## The Cell
!**Diagram 3.1**: A variety of animal
cells{width="400"}
The cell is the basic building block of living organisms. Bacteria and
the parasite that causes malaria consist of single cells, while plants
and animals are made up of trillions of cells. Most cells are spherical
or cube-shaped, but some come in a range of other shapes (see diagram
3.1).
Most cells are so small that a microscope is needed to see them,
although a few cells, e.g. the ostrich's egg, are so large that they
could make a meal for several people.
A normal cell is about 0.02 of a millimetre (0.02 mm) in diameter. (Small
distances like this are normally expressed in micrometres or microns
(μm); note that there are 1000 μm in every mm.)
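As a quick worked conversion, using the figures above:

$$0.02\ \text{mm} \times 1000\ \tfrac{\mu\text{m}}{\text{mm}} = 20\ \mu\text{m}$$

so a typical cell is roughly 20 μm across.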
!**Diagram 3.2**: An animal
cell{width="400"}
When you look at a typical animal cell with a light microscope it seems
quite simple with only a few structures visible (see diagram 3.2).
Three main parts can be seen:
- an outer cell membrane (plasma membrane),
- an inner region called the cytoplasm and
- the nucleus
!**Diagram 3.3**: An animal cell as seen with an electron
microscope{width="400"}
However, when you use an electron microscope to increase the
magnification many thousands of times you see that these seemingly
simple structures are incredibly complex, each with its own specialized
function. For example the plasma membrane is seen to be a double layer
and the cytoplasm contains many special structures called **organelles**
(meaning little organs) which are described below. A drawing of the cell
as seen with an electron microscope is shown in diagram 3.3.
## The Plasma Membrane
!**Diagram 3.4**: The structure of the plasma
membrane{width="400"}
The thin plasma membrane surrounds the cell, separating its contents
from the surroundings and controlling what enters and leaves the cell.
The plasma membrane is composed of two main
molecules, phospholipids (fats) and proteins. The phospholipids are
arranged in a double layer with the large protein molecules dotted about
in the membrane (see diagram 3.4). Some of the protein molecules form
tiny channels in the membrane while others help transport substances
from one side of the membrane to the other.
### How substances move across the Plasma Membrane
Substances need to pass through the membrane to enter or leave the cell
and they do so in a number of ways. Some of these processes require no
energy, i.e. they are **passive**, while others require energy, i.e. they
are **active**.
Passive processes include: a) diffusion and b) osmosis, while active
processes include: c) active transport, d) phagocytosis, e) pinocytosis
and f) exocytosis. These will be described below.
!**Diagram 3.5**: Diffusion in a
liquid{width="400"}
**a) Diffusion**
Although you may not know it, you are already familiar with the process
of diffusion. It is diffusion that causes a smell (expensive perfume or
smelly socks) in one part of the room to gradually move through the room
so it can be smelt on the other side. Diffusion occurs in the air and in
liquids.
Diagram 3.5 shows what happens when a few crystals of a dark purple dye
called potassium permanganate are dropped into a beaker of water. The
dye molecules diffuse into the water moving from high to low
concentrations so they become evenly distributed throughout the beaker.
In the body, diffusion causes molecules that are in a high concentration
on one side of the cell membrane to move across the membrane until they
are present in equal concentrations on both sides. It takes place
because all molecules are in constant random motion, which causes them to
move and collide until they are evenly distributed. It is an absolutely
natural process that requires no added energy.
Small molecules like oxygen, carbon dioxide, water and ammonia as well
as fats, diffuse directly through the double fat layer of the membrane.
The small molecules named above as well as a variety of charged
particles (ions) also diffuse through the protein-lined channels. Larger
molecules like glucose attach to a carrier molecule that aids their
diffusion through the membrane. This is called **facilitated
diffusion**.
In the animal's body diffusion is important for moving oxygen and carbon
dioxide between the lungs and the blood, for moving digested food
molecules from the gut into the blood and for the removal of waste
products from the cell. !**Diagram 3.6**:
Osmosis{width="400"}
**b) Osmosis**
Although the word may be unfamiliar, you are almost certainly acquainted
with the effects of osmosis. It is osmosis that plumps out dried fruit
when you soak it before making a fruit cake or makes that wizened old
carrot look almost like new when you soak it in water. Osmosis is in
fact the diffusion of water across a membrane that allows water across
but not larger molecules. This kind of membrane is called a
**semi-permeable membrane**.
Take a look at side **A** of diagram 3.6. It shows a container divided
into two parts by an artificial semi-permeable membrane. Water is poured
into one part while a solution containing salt is poured into the other
part. Water can cross the membrane but the salt cannot. The water
crosses the semi-permeable membrane by diffusion until there is an equal
amount of water on both sides of the membrane. The effect of this would
be to make the salt solution more diluted and cause the level of the
liquid in the right-hand side of the container to rise so it looked like
side **B** of diagram 3.6. This movement of water across the
semi-permeable membrane is called osmosis. It is a completely natural
process that requires no outside energy.
Although it would be difficult to do in practice, imagine that you could
now take a plunger and push down on the fluid in the right-hand side of
container **B** so that it flowed back across the semi-permeable
membrane until the level of fluid on both sides was equal again. If you
could measure the pressure required to do this, this would be equal to
the **osmotic pressure** of the salt solution. (This is a rather
advanced concept at this stage but you will meet this term again when
you study fluid balance later in the course).
!**Diagram 3.7**: Osmosis in red cells placed in a hypotonic
solution{width="400"}
The plasma membrane of cells acts as a semi-permeable membrane. If red
blood cells, for example, are placed in water, the water crosses the
membrane to make the amount of water on both sides of it equal (see
diagram 3.7). This means that the water moves into the cell causing it
to swell. This can occur to such an extent that the cell actually bursts
to release its contents. This bursting of red blood cells is called
**haemolysis**. In a situation such as this when the solution on one
side of a semi-permeable membrane has a lower concentration than that on
the other side, the first solution is said to be **hypotonic** to the
second. !**Diagram 3.8**: Osmosis in red cells
placed in a hypertonic
solution{width="400"}
Now think what would happen if red blood cells were placed in a salt
solution that has a higher salt concentration than the solution within
the cells (see diagram 3.8). Such a bathing solution is called a
**hypertonic** solution. In this situation the "concentration" of water
within the cells would be higher than that outside the cells. Osmosis
(diffusion of water) would then occur from the inside of the cells to
the outside solution, causing the cells to shrink.
!**Diagram 3.9**: Red cells placed in an isotonic
solution{width="400"}
A solution that contains 0.9% salt has the same concentration as body
fluids and the solution within red cells. Cells placed in such a
solution would neither swell nor shrink (see diagram 3.9). This solution
is called an **isotonic** solution. This strength of salt solution is
often called **normal saline** and is used when replacing an animal's
body fluids or when cells like red blood cells have to be suspended in
fluid.
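As a worked example, assuming (as is usual for saline solutions) that 0.9%
means 0.9 g of salt per 100 mL of solution, one litre of normal saline
contains:

$$0.9\ \tfrac{\text{g}}{100\ \text{mL}} \times 1000\ \text{mL} = 9\ \text{g of salt}$$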
**Remember** - osmosis is a special kind of diffusion. It is the
diffusion of water molecules across a semi-permeable membrane. It is a
completely passive process and requires no energy.
Sometimes it is difficult to remember which way the water molecules
move. Although it is not strictly true in a biological sense, many
students use the phrase **"SALT SUCKS"** to help them remember which way
water moves across the membrane when there are two solutions of
different salt concentrations on either side.
As we have seen water moves in and out of the cell by osmosis. All water
movement from the intestine into the blood system and between the blood
capillaries and the fluid around the cells (tissue or extracellular
fluid) takes place by osmosis. Osmosis is also important in the
production of concentrated urine by the kidney.
**c) Active transport**
When a substance is transported from a low concentration to a high
concentration i.e. uphill against the concentration gradient, energy has
to be used. This is called **active transport**.
Active transport is important in maintaining different concentrations of
the ions sodium and potassium on either side of the nerve cell membrane.
It is also important for removing valuable molecules such as glucose,
amino acids and sodium ions from the urine.
!**Diagram 3.10**:
Phagocytosis{width="400"}
**d) Phagocytosis**
Phagocytosis is sometimes called "cell eating". It is a process that
requires energy and is used by cells to move solid particles like
bacteria across the plasma membrane. Finger-like projections from the
plasma membrane surround the bacteria and engulf them as shown in
diagram 3.10. Once within the cell, enzymes produced by the lysosomes of
the cell (described later) destroy the bacteria.
The destruction of bacteria and other foreign substances by white blood
cells by the process of phagocytosis is a vital part of the defense
mechanisms of the body.
**e) Pinocytosis**
Pinocytosis or "cell drinking" is a very similar process to phagocytosis
but is used by cells to move fluids across the plasma membrane. Most
cells carry out pinocytosis (note the pinocytotic vesicle in diagram
3.3).
**f) Exocytosis**
Exocytosis is the process by means of which substances formed in the
cell are moved through the plasma membrane into the fluid outside the
cell (or extra-cellular fluid). It occurs in all cells but is most
important in secretory cells (e.g. cells that produce digestive enzymes)
and nerve cells.
## The Cytoplasm
Within the plasma membrane is the **cytoplasm**. It consists of a clear
jelly-like fluid called the a) **cytosol** or **intracellular fluid** in
which b) **cell inclusions**, c) **organelles** and d)
**microfilaments** and **microtubules** are found.
### a) Cytosol
The cytosol consists mainly of water in which various molecules are
dissolved or suspended. These molecules include proteins, fats and
carbohydrates as well as sodium, potassium, calcium and chloride ions.
Many of the reactions that take place in the cell occur in the cytosol.
### b) Cell inclusions
These are large particles of fat, glycogen and melanin that have been
produced by the cell. They are often large enough to be seen with the
light microscope. For example the cells of adipose tissue (as in the
insulating fat layer under the skin) contain fat that takes up most of
the cell.
### c) Organelles
**Organelles** are the "little organs" of the cell, just as the heart,
kidney and liver are the organs of the body. They are structures with
characteristic appearances and specific "jobs" in the cell. Most cannot
be seen with the light microscope, so it was only when the electron
microscope was developed that they were discovered. The main organelles
in the cell are the **ribosomes, endoplasmic reticulum, mitochondria,
Golgi complex** and **lysosomes**. A cell containing these organelles as
seen with the electron microscope is shown in diagram 3.3.
#### Ribosomes
!**Diagram 3.11**: Rough endoplasmic
reticulum{width="400"}
**Ribosomes** are tiny spherical organelles that make proteins by
joining amino acids together. Many ribosomes are found free in the
cytosol, while others are attached to the rough endoplasmic reticulum.
#### Endoplasmic reticulum
The **endoplasmic reticulum (ER)** is a network of membranes that form
channels throughout the cytoplasm from the nucleus to the plasma
membrane. Various molecules are made in the ER and transported around
the cell in its channels. There are two types of ER: smooth ER and rough
ER.
:
: **Smooth ER** is where the fats in the cell are made and in some
cells, where chemicals like alcohol, pesticides and carcinogenic
molecules are inactivated.
:
: The **Rough ER** has ribosomes attached to its surface. The
function of the Rough ER is therefore to make proteins that are
modified, stored and transported by the ER (Diagram 3.11).
#### Mitochondria
!**Diagram 3.12**: A
mitochondrion{width="400"}
**Mitochondria** (singular mitochondrion) are oval or rod shaped
organelles scattered throughout the cytoplasm. They consist of two
membranes, the inner one of which is folded to increase its surface
area. (Diagram 3.12)
Mitochondria are the "power stations" of the cell. They make energy by
"burning" food molecules like glucose. This process is called **cellular
respiration**. The reaction requires oxygen and produces carbon dioxide
which is a waste product. The process is very complex and takes place in
a large number of steps but the overall word equation for cellular
respiration is:
:
: **Glucose + oxygen = carbon dioxide + water + energy**
: **or C~6~H~12~O~6~ + 6O~2~** = **6CO~2~** + **6H~2~O** +
**energy**
**Note** that cellular respiration is different from respiration or
breathing. Breathing is the means by which air is drawn into and
expelled from the lungs. Breathing is necessary to supply the cells with
the oxygen required by the mitochondria and to remove the carbon dioxide
produced as a waste product of cellular respiration.
Active cells like muscle, liver, kidney and sperm cells have large
numbers of mitochondria.
#### Golgi Apparatus
!**Diagram 3.13**: A Golgi
body{width="400"}
The **Golgi bodies** in a cell together make up the **Golgi apparatus**.
Golgi bodies are found near the nucleus and consist of flattened
membranes stacked on top of each other rather like a pile of plates (see
diagram 3.13). The Golgi apparatus modifies and sorts the proteins and
fats made by the ER, then surrounds them in a membrane as **vesicles**
so they can be moved to other parts of the cell.
#### Lysosomes
**Lysosomes** are large vesicles that contain digestive enzymes. These
break down bacteria and other substances that are brought into the cell
by phagocytosis or pinocytosis. They also digest worn-out or damaged
organelles, the components of which can then be recycled by the cell to
make new structures.
### d) Microfilaments And Microtubules
Some cells can move and change shape, and organelles and chemicals are
moved around within the cell. Threadlike structures called
**microfilaments** and **microtubules**, which can contract, are
responsible for this movement.
These structures also form the projections from the plasma membrane
known as **flagella** (singular flagellum) as in the sperm tail, and
**cilia** found lining the respiratory tract and used to remove mucus
that has trapped dust particles (see chapter 4).
Microtubules also form the pair of cylindrical structures called
**centrioles** found near the nucleus. These help organise the spindle
used in cell division.
## The Nucleus
!**Diagram 3.14**: A cell with an enlarged
chromosome{width="400"}
!**Diagram 3.15**: A full set of human
chromosomes
The **nucleus** is the largest structure in a cell and can be seen with
the light microscope. It is a spherical or oval body that contains the
**chromosomes**. The nucleus controls the development and activity of
the cell. Most cells contain a nucleus although mature red blood cells
have lost theirs during development and some muscle cells have several
nuclei.
A double membrane, similar in structure to the plasma membrane and
called the **nuclear envelope**, surrounds the nucleus. Pores in this
nuclear membrane allow communication between the nucleus and the cytoplasm.
Within the nucleus one or more spherical bodies of darker material can
be seen, even with the light microscope. These are called **nucleoli**
and are made of RNA. Their role is to make new ribosomes.
### Chromosomes
Inside the nucleus are the chromosomes which:
- contain DNA;
- control the activity of the cell;
- are transmitted from cell to cell when cells divide;
- are passed to a new individual when sex cells fuse together in
sexual reproduction.
In cells that are not dividing the chromosomes are very long and thin
and appear as dark grainy material. They become visible just before a
cell divides when they shorten and thicken and can then be counted (see
diagram 3.14).
The number of chromosomes in the cells of different species varies but
is constant in the cells of any one species (e.g. horses have 64
chromosomes, cats have 38 and humans 46). Chromosomes occur in pairs
(i.e. 32 pairs in the horse nucleus and 19 in that of the cat). Members
of each pair are identical in length and shape and if you look carefully
at diagram 3.15, you may be able to see some of the pairs in the human
set of chromosomes.
## Cell Division
!**Diagram 3.16**: Division by mitosis results in 2 new cells identical
to each other and to parent
cell{width="400"}
!**Diagram 3.17**: Division by meiosis results in 4 new cells that are
genetically different to each
other{width="400"}
Cells divide when an animal grows, when its body repairs an injury and
when it produces sperm and eggs (or ova). There are two types of cell
division: **mitosis** and **meiosis**.
**Mitosis**. This is the cell division that occurs when an animal grows
and when tissues are repaired or replaced. It produces two new cells
(daughter cells) each with a full set of chromosomes that are identical
to each other and to the parent cell. All the cells of an animal's body
therefore contain identical DNA.
**Meiosis.** This is the cell division that produces the ova and sperm
necessary for sexual reproduction. It only occurs in the ovary and
testis.
The most important function of meiosis is to halve the number of
chromosomes so that when the sperm fertilises the ovum the normal number
is regained. Body cells with the full set of chromosomes are called
**diploid** cells, while **gametes** (sperm and ova) with half the
chromosomes are called **haploid** cells.
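For example, using the chromosome numbers given earlier, each gamete
receives half the diploid number:

$$\text{horse: } \tfrac{64}{2} = 32 \qquad \text{cat: } \tfrac{38}{2} = 19 \qquad \text{human: } \tfrac{46}{2} = 23$$

so a horse's sperm or ovum carries 32 chromosomes, and fertilisation
restores the full 64.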
Meiosis is a more complex process than mitosis as it involves two
divisions, one after the other, and the four cells produced are all
genetically different from each other and from the parent cell.
The fact that the cells formed by meiosis are all genetically different
from each other and from the parent cell can be seen in litters of
kittens, where all the members of the litter differ from each other as
well as from the parents, although they display characteristics of both.
## The Cell As A Factory
To make the function of the parts of the cell easier to understand and
remember you can compare them to a factory. For example:
- The nucleus (1) is the managing director of the factory, consulting
the blueprint (the chromosomes) (2);
- The mitochondria (3) supply the power;
- The ribosomes (4) make the products;
- The chloroplasts of plant cells (5) supply the fuel (food);
- The Golgi apparatus (6) packages the products ready for dispatch;
- The ER (7) modifies, stores and transports the products around the
factory;
- The plasma membrane is the factory wall and the gates (8);
- The lysosomes dispose of the waste and worn-out machinery.
The cell compared to a
factory
## Summary
- Cells consist of three parts: the **plasma membrane, cytoplasm** and
**nucleus**.
- Substances pass through the plasma membrane by **diffusion** (gases,
lipids), **osmosis** (water), **active transport** (glucose, ions),
**phagocytosis** (particles), **pinocytosis** (fluids) and
**exocytosis** (particles and fluids).
- **Osmosis** is the diffusion of **water** through a **semipermeable
membrane**. Water diffuses from high water \"concentration\" to low
water \"concentration\".
- The cytoplasm consists of **cytosol** in which are suspended **cell
inclusions** and **organelles**.
- Organelles include **ribosomes, endoplasmic reticulum, mitochondria,
Golgi bodies** and **lysosomes**.
- The **nucleus** controls the activity of the cell. It contains the
**chromosomes** that are composed of **DNA**.
- The cell divides by **mitosis** and **meiosis**.
## Worksheets
There are several worksheets you can use to help you understand and
learn about the cell.
- Plasma Membrane Worksheet
- Diffusion and Osmosis Worksheet 1
- Diffusion and Osmosis Worksheet 2
- Cell Division Worksheet
## Test Yourself
You can then test yourself to see how much you remember.
1\. Complete the table below:
| Process | Requires energy? | Requires a semi-permeable membrane? | Is the movement of water molecules only? | Molecules move from high to low concentration? | Molecules move from low to high concentration? |
| --- | --- | --- | --- | --- | --- |
| Diffusion | ? | ? | ? | Yes | ? |
| Osmosis | ? | ? | ? | ? | ? |
| Active Transport | ? | Yes | ? | ? | Yes |
2\. Red blood cells placed in a 5% salt solution would:
: swell/stay the same/ shrink?
3\. Red blood cells placed in a 0.9% solution of salt would be in a:
: hypotonic/isotonic/hypertonic solution?
4\. White blood cells remove foreign bodies like bacteria from the body
by engulfing them. This process is known as
..............................
5\. Match the organelle in the left hand column of the table below with
its function in the right hand column.
Organelle Function
---------------------------------- ------------------------------------------------
a\. Nucleus 1\. Modifies proteins and fats
b\. Mitochondrion 2\. Makes, modifies and stores proteins
c\. Golgi body 3\. Digests worn out organelles
d\. Rough endoplasmic reticulum 4\. Makes fats
e\. Lysosome 5\. Controls the activity of the cell
f\. Smooth endoplasmic reticulum 6\. Produces energy
6\. The cell division that causes an organism to grow and repairs
tissues is called:
7\. The cell division that produces sperm and ova is called:
8\. TWO important differences between the two types of cell division
named by you above are:
: a\.
: b\.
/Test Yourself Answers/
## Websites
- <http://www.cellsalive.com/> Cells alive
: Cells Alive gives good animations of the animal cell.
- Cell Wikipedia
: Wikipedia is good for almost anything you want to know about
cells. Just watch out, as there is much more here than you need to
know.
- <http://personal.tmlp.com/Jimr57/textbook/chapter3/chapter3.htm>
Virtual cell
: The Virtual Cell has beautiful pictures of lots of (virtual?)
cell organelles.
- <http://www.wisc-online.com/objects/index_tj.asp?objid=AP11403>
Typical animal cell
: Great interactive animal cell.
- <http://www.wiley.com/college/apcentral/anatomydrill/> Anatomy drill
and practice
: Cell to test yourself on by dragging labels.
- <http://www.maxanim.com/physiology/index.htm> Max Animations
: Great animations here of diffusion, osmosis, facilitated
diffusion, endo- and exocytosis and the development and action
of lysosomes. A bit higher level than you need but still not to
be missed.
- <http://www.stolaf.edu/people/giannini/flashanimat/transport/diffusion.swf>
Diffusion
: Diffusion animation - good and clear.
- <http://www.tvdsb.on.ca/westmin/science/sbi3a1/Cells/Osmosis.htm>
Osmosis
: Nice simple osmosis animation.
- <http://zoology.okstate.edu/zoo_lrc/biol1114/tutorials/Flash/Osmosis_Animation.htm>
Osmosis
: Diffusion and osmosis. Watch what happens to the water and the
solute molecules.
- <http://www.wisc-online.com/objects/index_tj.asp?objid=NUR4004>
Osmotic Pressure
: Do an online experiment to illustrate osmosis and osmotic
pressure.
- <http://www.stolaf.edu/people/giannini/flashanimat/transport/osmosis.swf>
Osmosis
: Even better osmosis demonstration - you get to add the salt.
## Glossary
- Link to Glossary
# Anatomy and Physiology of Animals/Body Organisation
!original image by
grrphoto cc
by{width="400"}
In this chapter, the way the cells of the body are organised into
different tissues is described. You will find out how these tissues are
arranged into organs, and how the organs form systems such as the
digestive system and the reproductive system. Also in this chapter, the
important concept of homeostasis is defined. You are also introduced to
those pesky things---directional terms.
## Objectives
After completing this section, you should know:
- the "Mrs Gren" characteristics of living organisms
- what a tissue is
- four basic types of tissues, their general function and where they
are found in the body
- the basic organisation of the body of vertebrates including the main
body cavities and the location of the following major organs:
thorax, heart, lungs, thymus, abdomen, liver, stomach, spleen,
intestines, kidneys, sperm ducts, ovaries, uterus, cervix, vagina,
urinary bladder
- the 11 body systems
- what homeostasis is
- directional terms including dorsal, ventral, caudal, cranial,
medial, lateral, proximal, distal, rostral, palmar and plantar, plus
transverse and longitudinal sections
## The Organisation Of Animal Bodies
Living organisms move, feed, respire (burn food to make energy), grow,
sense their environment, excrete and reproduce. These seven
characteristics are sometimes summarised by the words "MRS GREN", the
letters of which stand for the functions of:
**M**ovement
**R**espiration
**S**ensitivity
**G**rowth
**R**eproduction
**E**xcretion
**N**utrition
Living organisms are made from cells, which are organised into tissues,
and these are themselves combined to form organs and systems. Animals
contain many different types of cell: skin cells, muscle cells, skeleton
cells and nerve cells, for example. These different types of cells are
not just scattered around randomly; similar cells that perform the same
function are arranged in groups. These collections of similar cells are
known as **tissues**.
There are four main types of tissues in animals. These are:
- **Epithelial** tissues that form linings, coverings and glands,
- **Connective** tissues for transport and support
- **Muscle** tissues for movement and
- **Nervous** tissues for carrying messages.
### Epithelial Tissues
Epithelium (plural epithelia) is tissue that covers and lines. It covers
an organ or lines a tube or space in the body. There are several
different types of epithelium, distinguished by the different shapes of
the cells and whether they consist of only a single layer of cells or
several layers of cells.
#### Simple Epithelia - with a single layer of cells
!**Diagram 4.1**: Squamous
epithelium
##### Squamous epithelium
Squamous epithelium consists of a single layer of flattened cells that
are shaped rather like 'crazy paving'. It is found lining the heart,
blood vessels, lung alveoli and body cavities (see diagram 4.1). Its
thinness allows molecules to diffuse across readily.
!**Diagram 4.2**: Cuboidal
epithelium
##### Cuboidal epithelium
Cuboidal epithelium consists of a single layer of cube shaped cells. It
is rare in the body but is found lining kidney tubules (see diagram
4.2). Molecules pass across it by diffusion, osmosis and active
transport. !**Diagram 4.3**: Columnar
epithelium
##### Columnar epithelium
Columnar epithelium consists of column shaped cells. It is found lining
the gut from the stomach to the anus (see diagram 4.3). Digested food
products move across it into the blood stream.
!**Diagram 4.4**: Columnar epithelium with
cilia
##### Columnar epithelium with cilia
Columnar epithelium with cilia on the free surface (also known as the
apical side of the cell) lines the respiratory tract, fallopian tubes
and uterus (see diagram 4.4). The cilia beat rhythmically to transport
particles. !**Diagram 4.5**: Transitional
epithelium
#### Transitional epithelium - with a variable number of layers
The cells in transitional epithelium can move over one another allowing
it to stretch. It is found in the wall of the bladder (see diagram 4.5).
#### Stratified epithelia - with several layers of cells
!**Diagram 4.6**: Stratified squamous
epithelium
Epithelia with several layers of cells are found where toughness and
resistance to abrasion are needed.
##### Stratified squamous epithelium
Stratified squamous epithelium has many layers of flattened cells. It is
found lining the mouth, cervix and vagina. Cells at the base divide and
push up the cells above them and cells at the top are worn or pushed off
the surface (see diagram 4.6). This type of epithelium protects
underlying layers and repairs itself rapidly if damaged.
##### Keratinised stratified squamous epithelium
Keratinised stratified squamous epithelium has a tough waterproof
protein called **keratin** deposited in the cells. It forms the skin
found covering the outer surface of mammals. (Skin will be described in
more detail in Chapter 5).
### Connective Tissues
Blood, bone, tendons, cartilage, fibrous connective tissue and fat
(adipose) tissue are all classed as connective tissues. They are tissues
that are used for supporting the body or transporting substances around
the body. They consist of up to three parts: all have cells suspended in
a ground substance or **matrix**, and most also have **fibres** running
through it.
#### Blood
Blood consists of a matrix - plasma, with several types of cells and
cell fragments suspended in it. The fibres are only evident in blood
that has clotted. Blood will be described in detail in chapter 8.
#### Lymph
Lymph is similar in composition to blood plasma with various types of
white blood cell floating in it. It flows in lymphatic vessels.
#### Connective tissue 'proper'
!**Diagram 4.7**: Loose connective
tissue
Connective tissue \'proper\' consists of a jelly-like matrix with a
dense network of collagen and elastic fibres and various cells embedded
in it. There are various different forms of 'proper' connective tissue
(see 1, 2 and 3 below).
##### Loose connective tissue
Loose connective tissue is a sticky whitish substance that fills the
spaces between organs. It is found in the dermis of the skin (see
diagram 4.7).
##### Dense connective tissue
Dense connective tissue contains lots of thick fibres and is very
strong. It forms tendons, ligaments and heart valves and covers bones
and organs like the kidney and liver.
#### Adipose tissue
Adipose tissue consists of cells filled with fat. It forms the fatty
layer under the dermis of the skin, around the kidneys and heart and the
yellow marrow of the bones.
!**Diagram 4.8**:
Cartilage
#### Cartilage
Cartilage is the 'gristle' of the meat. It consists of a tough
jelly-like matrix with cells suspended in it. It may contain collagen
and elastic fibres. It is a flexible but tough tissue and is found at
the ends of bones, in the nose, ear and trachea and between the
vertebrae (see diagram 4.8).
#### Bone
Bone consists of a solid matrix made of calcium salts that give it its
hardness. **Collagen** fibres running through it give it its strength.
Bone cells are found in spaces in the matrix. Two types of bone are
found in the skeleton namely **spongy** and **compact bone**. They
differ in the way the cells and matrix are arranged. (See Chapter 6 for
more details of bone).
### Muscle Tissues
Muscle tissue is composed of cells that contract and move the body.
There are three types of muscle tissue:
!**Diagram 4.9**: Smooth muscle
fibres
#### Smooth muscle
Smooth muscle consists of long and slender cells with a central nucleus
(see diagram 4.9). It is found in the walls of blood vessels, airways to
the lungs and the gut. It changes the size of the blood vessels and
helps move food and fluid along. Contraction of smooth muscle fibres
occurs without the conscious control of the animal.
!**Diagram 4.10**: Skeletal muscle
fibres
#### Skeletal muscle
Skeletal muscle (sometimes called **striated**, **striped** or
**voluntary muscle**) has striped fibres with alternating light and dark
bands. It is attached to bones and is under the voluntary control of the
animal (see diagram 4.10). !**Diagram 4.11**:
Cardiac muscle
fibres
#### Cardiac muscle
Cardiac muscle is found only in the walls of the heart where it produces
the 'heart beat'. Cardiac muscle cells are branched cylinders with
central nuclei and faint stripes (see diagram 4.11). Each fibre
contracts automatically but the heart beat as a whole is controlled by
the **pacemaker** and the involuntary **autonomic nervous system**.
!**Diagram 4.12**: A motor
neuron
### Nervous Tissues
Nervous tissue forms the nerves, spinal cord and brain. Nerve cells or
**neurons** consist of a cell body and a long thread or axon that
carries the nerve impulse. An insulating sheath of fatty material
(**myelin**) usually surrounds the axon. Diagram 4.12 shows a typical
motor neuron that sends messages to muscles to contract.
## Vertebrate Bodies
We are so familiar with animals with backbones (i.e. vertebrates) that
it seems rather unnecessary to point out that the body is divided into
three sections. There is a well-defined **head** that contains the
brain, the major sense organs and the mouth, a **trunk** that contains
the other organs and a well-developed **tail**. Other features of
vertebrates may be less apparent. For instance, vertebrates that live on
the land have developed a flexible neck that is absent in fish where it
would be in the way of the gills and interfere with streamlining.
Mammals but not other vertebrates have a sheet of muscle called the
**diaphragm** that divides the trunk into the chest region or **thorax**
and the **abdomen**.
## Body Cavities
!**Diagram 4.13**: The body
cavities
In contrast to many primitive animals, vertebrates have spaces or **body
cavities** that contain the body organs. Most vertebrates have a single
body cavity but in mammals the diaphragm divides the main cavity into a
**thoracic** and an **abdominal cavity**. In the thoracic cavity the
heart and lungs are surrounded by their own membranes so that cavities
are created around the heart - the **pericardial cavity**, and around
the lungs -- the **pleural cavity** (see diagram 4.13).
## Organs
!**Diagram 4.14**: Cells, tissues and organs forming the digestive
system
Just as the various parts of the cell work together to perform the
cell's functions and a large number of similar cells make up a tissue,
so many different tissues can "cooperate" to form an organ that performs
a particular function. For example, connective tissues, epithelial
tissues, muscle tissue and nervous tissue combine to make the organ that
we call the stomach. In turn the stomach combines with other organs like
the intestines, liver and pancreas to form the digestive system (see
diagram 4.14).
## Generalised Plan Of The Mammalian Body
!**Diagram 4.15**: The main organs of the vertebrate
body
At this point it would be a good idea to make yourself familiar with the
major organs and their positions in the body of a mammal like the
rabbit. Diagram 4.15 shows the main body organs.
## Body Systems
Organs do not work in isolation but function in cooperation with other
organs and body structures to bring about the MRS GREN functions
necessary to keep an animal alive. For example the stomach can only work
in conjunction with the mouth and oesophagus (gullet). These provide it
with the food it breaks down and digests. It then needs to pass the food
on to the intestines etc. for further digestion and absorption. The
organs involved with the taking of food into the body, the digestion and
absorption of the food and elimination of waste products are
collectively known as the digestive system.
### The 11 body systems
1. Skin
: The skin covering the body consists of two layers, the
**epidermis** and **dermis**. Associated with these layers are
hairs, feathers, claws, hoofs, glands and sense organs of the
skin.
2. Skeletal System
: This can be divided into the bones of the skeleton and the
joints where the bones move over each other.
3. Muscular System
: The muscles, in conjunction with the skeleton and joints, give
the body the ability to move.
4. Cardiovascular System
: This is also known as the circulatory system. It consists of the
heart, the blood vessels and the blood. It transports substances
around the body.
5. Lymphatic System
: This system is responsible for collecting and "cleaning" the
fluid that leaks out of the blood vessels. This fluid is then
returned to the blood system. The lymphatic system also makes
antibodies that protect the body from invasion by bacteria etc.
It consists of lymphatic vessels, lymph nodes, the spleen and
thymus glands.
6. Respiratory System
: This is the system involved with bringing oxygen in the air into
the body and getting rid of carbon dioxide, which is a waste
product of processes that occur in the cell. It is made up of
the trachea, bronchi, bronchioles, lungs, diaphragm, ribs and
muscles that move the ribs in breathing.
7. Digestive System
: This is also known as the **gastrointestinal system**,
**alimentary system** or **gut**. It consists of the digestive
tube and glands like the liver and pancreas that produce
digestive secretions. It is concerned with breaking down the
large molecules in foods into smaller ones that can be absorbed
into the blood and lymph. Waste material is also eliminated by
the digestive system.
8. Urinary System
: This is also known as the **renal system**. It removes waste
products from the blood and is made up of the kidneys, ureters
and bladder.
9. Reproductive System
: This is the system that keeps the species going by making new
individuals. It is made up of the ovaries, uterus, vagina and
fallopian tubes in the female and the testes with associated
glands and ducts in the male.
10. Nervous System
: This system coordinates the activities of the body and responses
to the environment. It consists of the sense organs (eye, ear,
semicircular canals, and organs of taste and smell), the nerves,
brain and spinal cord.
11. Endocrine System
: This is the system that produces chemical messengers or
hormones. It consists of various **endocrine glands** (ductless
glands) that include the pituitary, adrenal, thyroid and pineal
glands as well as the testes and ovary.
## Homeostasis
All the body systems, except the reproductive system, are involved with
keeping the conditions inside the animal more or less stable. This is
called **homeostasis**. These constant conditions are essential for the
survival and proper functioning of the cells, tissues and organs of the
body. The skin, for example, has an important role in keeping the
temperature of the body constant. The kidneys keep the concentration of
salts in the blood within limits and the islets of Langerhans in the
pancreas maintain the correct level of glucose in the blood through the
hormone insulin. As long as the various body processes remain within
normal limits, the body functions properly and is healthy. Once
homeostasis is disturbed, disease or death may result. (See Chapters 12
and 16 for more on homeostasis).
## Directional Terms
!**Diagram 4.16**: The directional terms used with
animals{width="522"}
!**Diagram 4.17**: Transverse and longitudinal sections of a
mouse
In the following chapters the systems of the body in the list above will
be covered one by one. For each one the structure of the organs involved
will be described and the way they function will be explained.
In order to describe structures in the body of an animal it is necessary
to have a system for describing the position of parts of the body in
relation to other parts. For example it may be necessary to describe the
position of the liver in relation to the diaphragm, or the heart in
relation to the lungs. Certainly if you work further with animals, in a
veterinary clinic for example, it will be necessary to be able to
accurately describe the position of an injury. The terms used for this
are called **directional terms**.
The most common directional terms are **right** and **left**. However,
even these are not completely straightforward especially when looking at
diagrams of animals. The convention is to show the left side of the
animal or organ on the right side of the page. This is the view you
would get looking down on an animal lying on its back during surgery or
in a post-mortem. Sometimes it is useful to imagine 'getting inside' the
animal (so to speak) to check which side is which. The other common and
useful directional terms are listed below and shown in diagram 4.16.
Term Definition Example
----------------------- ---------------------------------------------------------------- -----------------------------------------------------------
Dorsal Nearer the back of the animal than The backbone is dorsal to the belly
Ventral Nearer the belly of the animal than The breastbone is ventral to the heart
Cranial (or anterior) Nearer to the skull than The diaphragm is cranial to the stomach
Caudal (or posterior) Nearer to the tail than The ribs are caudal to the neck
Proximal Closer to the body than (only used for structures on limbs) The shoulder is proximal to the elbow
Distal Further from the body than (only used for structures on limbs) The ankle is distal to the knee
Medial Nearer to the midline than The bladder is medial to the hips
Lateral Further from the midline than The ribs are lateral to the lungs
Rostral Towards the muzzle There are more grey hairs in the rostral part of the head
Palmar The \"walking\" surface of the front paw There is a small cut on the left palmar surface
Plantar The \"walking\" surface of the hind paw The pads are on the plantar side of the foot
Note that we don't use the terms **superior** and **inferior** for
animals. They are only used to describe the position of structures in
the human body (and possibly apes) where the upright posture means some
structures are above or superior to others.
In order to look at the structure of some of the parts or organs of the
body it may be necessary to cut them open or even make thin slices of
them so that they can be examined under the microscope. The direction and
position of slices or sections through an animal's body have their own
terminology.
If an animal or organ is sliced lengthwise this section is called a
**longitudinal** or **sagittal section**. This is sometimes abbreviated
to LS.
If the section is sliced crosswise it is called a **transverse** or
**cross section**. This is sometimes abbreviated to TS or XS (see
diagram 4.17).
## Summary
- The characteristics of living organisms can be summarised by the
words "**MRS GREN**."
- There are 4 main types of tissue namely: **epithelial, connective,
muscle** and **nervous tissues**.
- **Epithelial tissues** form the skin and line the gut, respiratory
tract, bladder etc.
- **Connective tissues** form tendons, ligaments, adipose tissue,
blood, cartilage and bone, and are found in the dermis of the skin.
- **Muscular tissues** contract and consist of 3 types: **smooth,
skeletal and cardiac**.
- Vertebrate bodies have a **head, trunk** and **tail**. Body organs
are located in **body cavities**. Eleven body systems perform essential
body functions, most of which maintain a stable environment or
**homeostasis** within the animal.
- **Directional terms** describe the location of parts of the body in
relation to other parts.
## Worksheets
Students often find it hard learning how to use directional terms
correctly. There are two worksheets to help you with these and another
on tissues.
- Directional Terms Worksheet 1
- Directional Terms Worksheet 2
- Tissues Worksheet
## Test Yourself
1\. Living organisms can be distinguished from non-living matter because
they usually move and grow. Name 5 other functions of living organisms:
: 1\.
: 2\.
: 3\.
: 4\.
: 5\.
2\. What tissue types would you find\...
: a\) lining the intestine:
: b\) covering the body:
: c\) moving bones:
: d\) flowing through blood vessels:
: e\) linking the eye to the brain:
: f\) lining the bladder:
3\. Name the body cavity in which the following organs are found:
: a\) heart:
: b\) bladder:
: c\) stomach:
: d\) lungs:
4\. Name the body system that\...
: a\) includes the bones and joints:
: b\) includes the ovaries and testes:
: c\) produces hormones:
: d\) includes the heart, blood vessels and blood:
5\. What is homeostasis?
6\. Circle which is correct:
: a\) The head is cranial \| caudal to the neck
: b\) The heart is medial \| lateral to the ribs
: c\) The elbow is proximal \| distal to the fingers
: d\) The spine is dorsal \| ventral to the heart
7\. Indicate whether or not these statements are true.
: a\) The stomach is cranial to the diaphragm - true \| false
: b\) The heart lies in the pelvic cavity - true \| false
: c\) The spleen is roughly the same size as the stomach and lies near
it - true \| false
: d\) The small intestine is proximal to the kidneys - true \| false
: e\) The bladder is medial to the hips - true \| false
: f\) The liver is cranial to the heart - true \| false
/Test Yourself Answers/
## Websites
- Animal organ systems and homeostasis
Overview of the different organ systems (in humans) and their functions
in maintaining homeostasis in the body.
<http://www.emc.maricopa.edu/faculty/farabee/biobk/BioBookANIMORGSYS.html>
- Wikipedia
Directional terms for animals. A little more detail than required but
still great.
<http://en.wikipedia.org/wiki/Anatomical_terms_of_location>
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/The Skin
!original image by
Fran-cis-ca cc
by{width="400"}
The skin is the first of the eleven body systems to be described. Each
chapter from now on will cover one body system.
The skin, sometimes known as the **Integumentary System**, is, in fact,
the largest organ of the body. It has a complex structure, being
composed of many different tissues. It performs many functions that are
important in maintaining homeostasis in the body. Probably the most
important of these functions is the control of body temperature. The
skin also protects the body from physical damage and bacterial invasion.
The skin has an array of sense organs which sense the external
environment, and also cells which can make **vitamin D** in sunlight.
The skin is one of the first systems affected when an animal becomes
sick so it is important for anyone working with animals to have a sound
knowledge of the structure and functioning of the skin so they can
quickly recognize signs of disease.
## Objectives
After completing this section, you should know:
- the general structure of the skin
- the function of the keratin deposited in the epidermis
- the structure and function of keratin skin structures including
calluses, scales, nails, claws, hoofs and horns
- that antlers are not made of keratin and are not formed in the epidermis
- the structure of hairs
- the structure of the different types of feathers and the function of
preening
- the general structure and function of sweat, scent, preen and
mammary glands
- the basic functions of the skin in sensing stimuli, temperature
control and production of vitamin D
- the mechanisms by which the skin regulates body temperature
## The Skin
The skin comes in all kinds of textures and forms. There is the dry
warty skin of toads and crocodiles, the wet slimy skin of fish and
frogs, the hard shell of tortoises and the soft supple skin of snakes
and humans. Mammalian skin is covered with hair, that of birds with
feathers, and fish and reptiles have scales. Pigment in the skin, hairs
or feathers can make the outer surface almost any color of the rainbow.
Skin is one of the largest organs of the body, making up 6-8% of the
total body weight. It consists of two distinct layers. The top layer is
called the **epidermis** and under that is the **dermis**.
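To make the 6-8% figure more concrete, here is a rough worked example for
a hypothetical 30 kg dog (the body weight is an assumed value, purely for
illustration):

$$0.06 \times 30\ \text{kg} = 1.8\ \text{kg} \qquad 0.08 \times 30\ \text{kg} = 2.4\ \text{kg}$$

so the skin of such a dog would weigh roughly 2 kg.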
The epidermis is the layer that bubbles up when we have a blister and,
as we know from this experience, it has no blood or nerves in it. The cells
at the base of the epidermis continually divide and push the cells above
them upwards. As these cells move up they die and become the dry flaky
scales that fall off the skin surface. The cells in the epidermis die
because a special protein called **keratin** is deposited in them.
Keratin is an extremely important substance for it makes the skin
waterproof. Without it, land vertebrates like reptiles, birds, and
mammals would, like frogs, be able to survive only in damp places.
## Skin Structures Made Of Keratin
### Claws, Nails and Hoofs
Reptiles, birds, and mammals all have nails or claws on the ends of
their toes. They protect the end of the toe and may be used for
grasping, grooming, digging or in defense. They are continually worn
away and grow continuously from a growth layer at their base (see
diagram 5.2).
![](_Anatomy_and_physiology_of_animals_Carnivores_claw.jpg "_Anatomy_and_physiology_of_animals_Carnivores_claw.jpg")
Diagram 5.2 - A carnivore's claw
**Hoofs** are found in sheep, cows, horses etc., otherwise known as
**ungulate mammals**. These are animals that have lost toes in the
process of evolution and walk on the "nails" of the remaining toes. The
hoof is a cylinder of horny material that surrounds and protects the tip
of the toe (see diagram 5.3).
![](_Anatomy_and_physiology_of_animal_Horses_hoof.jpg "_Anatomy_and_physiology_of_animal_Horses_hoof.jpg")
Diagram 5.3 - A horse's hoof
### Horns And Antlers
True horns are made of keratin and are found in sheep, goats, and
cattle. They are never branched and, once grown, are never shed. They
consist of a core of bone arising in the dermis of the skin and are
fused with the skull. The horn itself forms as a hollow cone-shaped
sheath around the bone (see diagram 5.4).
![](_Anatomy_and_physiology_of_animals_A_horn.jpg "_Anatomy_and_physiology_of_animals_A_horn.jpg")
Diagram 5.4 - A horn
The **antlers** of male deer have quite a different structure. They are
not formed in the epidermis and do not consist of keratin but are made
entirely of bone. They are shed each year and are often branched,
especially in older animals. When growing they are covered in a skin
called **velvet** that forms the bone. Later the velvet is shed to leave the
bony antler. The velvet is often removed artificially to be sold in Asia
as a traditional medicine (see diagram 5.5).
![](_Anatomy_and_physiology_of_animals_Deer_antler.jpg "_Anatomy_and_physiology_of_animals_Deer_antler.jpg")
Diagram 5.5 - A deer antler
Other animals have projections on their heads that are not true horns
either. The horns on the head of giraffes are made of bone covered with
skin and hair, and the 'horn' of a rhinoceros is made of modified and
fused hair-like structures.
### Hair
Hair is also made of keratin and develops in the epidermis. It covers
the body of most mammals where it acts as an insulator and helps to
regulate the temperature of the body (see below). The color in hairs is
formed from the same pigment, **melanin**, that colors the skin. Coat
color may help camouflage animals and sometimes acts to attract the
opposite sex.
![](_Anatomy_and_physiology_of_animals_A_hair.jpg "_Anatomy_and_physiology_of_animals_A_hair.jpg")
Diagram 5.6 - A hair
Hairs lie in a **follicle** and grow from a **root** that is well
supplied with blood vessels. The hair itself consists of layers of dead
keratin-containing cells and usually lies at a slant in the skin. A
small bundle of smooth muscle fibers (the **hair erector muscle**) is
attached to the side of each hair and when this contracts the hair
stands on end. This increases the insulating power of the coat and is
also used by some animals to make them seem larger when confronted by a
foe or a competitor (see diagram 5.6).
The whiskers of cats and the spines of hedgehogs are examples of special
types of hairs.
### Feathers
The lightness and stiffness of keratin is also a key to bird flight. In
the form of feathers, it provides the large airfoils necessary for
flapping and gliding flight. In another form, the light fluffy down
feathers, also made of keratin, are some of the best natural insulators
known. This superior insulation is necessary to help maintain the high
body temperatures of birds.
![](_Anatomy_and_physiology_of_animals_Contour_feather.jpg "_Anatomy_and_physiology_of_animals_Contour_feather.jpg")
Diagram 5.7 - A Contour Feather
Contour feathers are large feathers that cover the body, wings, and
tail. They have an expanded **vane** that provides the smooth,
continuous surface that is required for effective flight. This surface
is formed by **barbs** that extend out from the central shaft. If you
look carefully at a feather you can see that on either side of each barb
are thousands of **barbules** that lock together by a complex system of
hooks and notches. If this arrangement becomes disrupted, the bird uses
its beak to draw the barbs and barbules together again in an action
known as **preening** (see diagram 5.7).
![](_Anatomy_and_physiology_of_animals_Down_feather.jpg "_Anatomy_and_physiology_of_animals_Down_feather.jpg")
Diagram 5.8 - A Down Feather
![](_Anatomy_and_physiology_of_animals_Pin_feather.jpg "_Anatomy_and_physiology_of_animals_Pin_feather.jpg")
Diagram 5.9 - A Pin Feather
Down feathers are the only feathers covering a chick and form the main
insulation layer under the contour feathers of the adult. They have no
shaft but consist of a spray of simple, slender branches (see diagram
5.8).
Pin feathers have a slender hair-like shaft often with a tiny tuft of
barbs on the end. They are found between the other feathers and help
tell a bird how its feathers are lying (see diagram 5.9).
## Skin Glands
Glands are organs that produce and secrete fluids. They are usually
divided into two groups depending upon whether or not they have channels
or ducts to carry their products away. Glands with ducts are called
**exocrine glands** and include the glands found in the skin as well as
the glands that produce digestive enzymes in the gut. **Endocrine
glands** have no ducts and release their products (hormones) directly
into the bloodstream. The pituitary and adrenal glands are examples of
endocrine glands.
Most vertebrates have exocrine glands in the skin that produce a variety
of secretions. The slime on the skin of fish and frogs is **mucus**
produced by skin glands and some fish and frogs also produce poison from
modified glands. In fact, the skin glands of some frogs produce the most
poisonous chemicals known. Reptiles and birds have a dry skin with few
glands. The **preen gland**, situated near the base of the bird's tail,
produces oil to help keep the feathers in good condition. Mammals have
an array of different skin glands. These include the wax-producing,
sweat, sebaceous and mammary glands.
**Wax-producing glands** are found in the ears.
**Sebaceous glands** secrete an oily secretion into the hair follicle.
This secretion, known as **sebum**, keeps the hair supple and helps
prevent the growth of bacteria (see diagram 5.6).
**Sweat glands** consist of a coiled tube and a duct leading onto the
skin surface. Their appearance when examined under the microscope
inspired one of the first scientists to observe them to call them
"fairies' intestines" (see diagram 5.1). Sweat contains salt and waste
products like urea and the evaporation of sweat on the skin surface is
one of the major mechanisms for cooling the body of many mammals. Horses
can sweat up to 30 liters of fluid a day during active exercise, but
cats and dogs have few sweat glands and must cool themselves by panting.
The scent in the sweat of many animals is used to mark territory or
attract the opposite sex.
**Mammary glands** are only present in mammals. They are thought to be
modified sweat glands and are present in both sexes but are rarely
active in males (see diagram 5.10). The number of glands varies from
species to species. They open to the surface in well-developed nipples.
Milk contains proteins, sugars, fats and salts, although the exact
composition varies from one species to another.
![](_Anatomy_and_physiology_of_animals_Mammary_gland.jpg "_Anatomy_and_physiology_of_animals_Mammary_gland.jpg")
Diagram 5.10 - A Mammary Gland
## The Skin And Sun
A moderate amount of UV in sunlight is necessary for the skin to form
**vitamin D**. This vitamin prevents bone disorders like rickets to
which animals reared indoors are susceptible. Excessive exposure to the
UV in sunlight can be damaging and the pigment **melanin**, deposited in
cells at the base of the epidermis, helps to protect the underlying
layers of the skin from this damage. Melanin also colors the skin and
variations in the amount of melanin produce colors from pale yellow to
black.
### Sunburn And Skin Cancer
Excess exposure to the sun can cause sunburn. This is common in humans,
but light skinned animals like cats and pigs can also be sunburned,
especially on the ears. Skin cancer can also result from excessive
exposure to the sun. As holes in the ozone layer increase exposure to
the sun's UV rays, the rate of skin cancer in humans and animals also
rises.
## The Dermis
The underlying layer of the skin, known as the dermis, is much thicker
but much more uniform in structure than the epidermis (see diagram 5.1).
It is composed of loose connective tissue with a felted mass of
**collagen** and **elastic fibres**. It is this part of the skin of
cattle, pigs, etc. that becomes commercial leather when treated. The
dermis is well supplied with blood vessels, so cuts and burns that
penetrate down into the dermis will bleed or cause serious fluid loss.
There are also numerous nerve endings and touch receptors in the dermis
because, of course, the skin is sensitive to touch, pain and temperature.
When looking at a section of the skin under the microscope you can see
hair follicles, sweat, and sebaceous glands dipping down into the
dermis. However, these structures do not originate in the dermis but are
derived from the epidermis.
In the lower levels of the dermis is a layer of fat or **adipose
tissue** (see diagram 5.1). This acts as an energy store and is an
excellent insulator especially in mammals like whales with little hair.
## The Skin And Temperature Regulation
Vertebrates can be divided into two groups depending on whether or not
they control their internal temperature. Amphibia (frogs) and reptiles
are said to be "**cold blooded**" (**poikilothermic**) because their body
temperature approximately follows that of the environment. Birds and
mammals are said to be **warm blooded** (**homoiothermic**) because they
can maintain a roughly constant body temperature despite changes in the
temperature of the environment.
Heat is produced by the biochemical reactions of the body (especially in
the liver) and by muscle contraction. Most of the heat loss from the
body occurs via the skin. It is therefore not surprising that many of
the mechanisms for controlling the temperature of the body operate here.
### Reduction Of Heat Loss
When an animal is in a cold environment and needs to reduce heat loss
the erector muscles contract causing the hair or feathers to rise up and
increase the layer of insulating air trapped by them.
![](_Anatomy_and_physiology_of_animals_Hair_muscle.jpg "_Anatomy_and_physiology_of_animals_Hair_muscle.jpg")
Diagram 5.11a) Hair muscle relaxed; Diagram 5.11b) Hair muscle contracted
Heat loss from the skin surface can also be reduced by the contraction
of the abundant blood vessels that lie in the dermis. This diverts blood
flow to deeper levels, reducing heat loss and making the skin look pale
(see diagram 5.12a).
![](_Anatomy_and_physiology_of_animals_Reduction_of_heat_loss_by_skin.jpg "_Anatomy_and_physiology_of_animals_Reduction_of_heat_loss_by_skin.jpg")
Diagram 5.12a) Reduction of heat loss by skin
Shivering caused by twitching muscles produces heat that also helps
raise the body temperature.
### Increase Of Heat Loss
There are two main mechanisms used by animals to increase the amount of
heat lost through the skin when they are in a hot environment or high
levels of activity are increasing internal heat production. The first is
the expansion of the blood vessels in the dermis so blood flows near the
skin surface and heat loss to the environment can take place. The second
is by the production of sweat from the sweat glands (see diagram 5.12b).
The evaporation of this liquid on the skin surface produces a cooling
effect.
The mechanisms for regulating body temperature are under the control of
a small region of the brain called the **hypothalamus**. This acts like
a thermostat.
![](_Anatomy_and_physiology_of_animals_Increase_heat_loss_by_skin.jpg "_Anatomy_and_physiology_of_animals_Increase_heat_loss_by_skin.jpg")
Diagram 5.12b) - Increase of heat loss by skin
### Heat Loss And Body Size
The amount of heat that can be lost from the surface of the body is
related to the area of skin an animal has in relation to the total
volume of its body.
Small animals like mice have a very large skin area compared to their
total volume. This means they tend to lose large amounts of heat and
have difficulty keeping warm in cold weather. They may need to keep
active just to maintain their body temperature or may hibernate to avoid
the problem.
Large animals like elephants have the opposite problem. They have only a
relatively small skin area in relation to their total volume and may
have trouble keeping cool. This is one reason that these large animals
tend to have sparse coverings of hair.
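A rough calculation makes this surface-area-to-volume effect concrete. The figures below assume idealised cube-shaped animals, so they are only illustrative and are not measurements of real mice or elephants.

```latex
% Surface-area-to-volume ratio of an idealised cube of side length s:
% surface area = 6s^2, volume = s^3, so the ratio = 6/s.
\[
\frac{\text{surface area}}{\text{volume}} = \frac{6s^{2}}{s^{3}} = \frac{6}{s}
\]
% A "mouse-sized" cube, s = 5 cm:       ratio = 6/5   = 1.2 per cm
% An "elephant-sized" cube, s = 200 cm: ratio = 6/200 = 0.03 per cm
% The small body has about 40 times more surface per unit of volume,
% so it loses heat far faster relative to the heat it can generate.
```

The exact numbers change for more realistic shapes, but the ratio always falls as an animal gets bigger, which is the point made above.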
## Summary
- Skin consists of two layers: the thin **epidermis** and under it the
thicker **dermis.**
- The **Epidermis** is formed by the division of base cells that push
those above them towards the surface where they die and are shed.
- **Keratin**, a protein, is deposited in the epidermal cells. It
makes skin waterproof.
- Various skin structures formed in the epidermis are made of keratin.
These include claws, nails, hoofs, horn, hair, and feathers.
- Various **Exocrine Glands** (with ducts) formed in the epidermis
include sweat, sebaceous, and mammary glands.
- **Melanin** deposited in cells at the base of the epidermis protects
deeper cells from the harmful effects of the sun.
- The **Dermis** is composed of loose connective tissue and is well
supplied with blood.
- Beneath the dermis is insulating **adipose tissue**.
- Body temperature is controlled by sweating, hair erection, dilation
and contraction of dermal blood vessels, and shivering.
## Worksheet
Use the Skin Worksheet to
help you learn all about the skin.
## Test Yourself
Now use this Skin Test Yourself to see how much you have learned and
remember.
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
1\. The two layers that form the skin are the a)
_ _ and b) _ _
```{=html}
</div>
```
```{=html}
<div class="body">
```
: a)epidermis
: b)dermis
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
2\. The special protein deposited in epidermal cells to make them
waterproof is:
```{=html}
</div>
```
```{=html}
<div class="body">
```
keratin
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
3\. Many important skin structures are made of keratin. These include:
```{=html}
</div>
```
```{=html}
<div class="body">
```
hair, nails, foot pads, feathers, scales on reptiles
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
4\. Sweat, sebaceous and mammary glands all have ducts to the outside.
These kinds of glands are known as: _ _
```{=html}
</div>
```
```{=html}
<div class="body">
```
exocrine
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
5\. What is the pigment deposited in skin cells that protects underlying
skin layers from the harmful effects of the sun?
```{=html}
</div>
```
```{=html}
<div class="body">
```
melanin
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
6\. How does the skin help cool an animal down when it is active or in a
hot environment?
```{=html}
</div>
```
```{=html}
<div class="body">
```
Expansion (dilation) of the blood vessels in the dermis and secretion from the sweat glands.
```{=html}
</div>
```
```{=html}
</div>
```
------------------------------------------------------------------------
```{=html}
<div class="collapsible">
```
```{=html}
<div class="title">
```
7\. Name two mechanisms by means of which the skin helps prevent heat
loss when an animal is in a cold environment.
```{=html}
</div>
```
```{=html}
<div class="body">
```
: a. raising of the hair or feathers (contraction of the erector muscles)
: b. contraction of the blood vessels in the dermis
```{=html}
</div>
```
```{=html}
</div>
```
/Test Yourself Answers/
## Websites
- <http://www.auburn.edu/academic/classes/zy/0301/Topic6/Topic6.html>
Comparative anatomy
Good on keratin skin structures - hairs, feathers, horns etc.
- <http://www.olympusmicro.com/micd/galleries/brightfield/skinhairymammal.html>
Hairy mammal skin
All about hairy mammalian skin.
- <http://www.earthlife.net/birds/feathers.html> The wonder of bird
feathers
Fantastic article on bird feathers with great pictures.
- <http://en.wikipedia.org/wiki/Skin> Wikipedia
Wikipedia on (human) skin. Good as usual, but more information than you
need.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/The Skeleton
Original image by heschong, CC BY.
## Objectives
After completing this section, you should know:
- the functions of the skeleton
- the basic structure of a vertebra and the regions of the vertebral
column
- the general structure of the skull
- the difference between 'true ribs' and 'floating ribs'
- the main bones of the fore and hind limbs, and their girdles and be
able to identify them in a live cat, dog, or rabbit
Fish, frogs, reptiles, birds and mammals are called **vertebrates**, a
name that comes from the bony column of vertebrae (the spine) that
supports the body and head. The rest of the skeleton of all these
animals (except the fish) also has the same basic design with a skull
that houses and protects the brain and sense organs and ribs that
protect the heart and lungs and, in mammals, make breathing possible.
Each of the four limbs is made to the same basic pattern. It is joined
to the spine by means of a flat, broad bone called a **girdle** and
consists of one long upper bone, two long lower bones, several smaller
bones in the wrist or ankle and five digits (see diagrams 6.1, 6.18, 6.19
and 6.20).
![](Anatomy_and_physiology_of_animals_Mamalian_skeleton.jpg "Anatomy_and_physiology_of_animals_Mamalian_skeleton.jpg")
Diagram 6.1 - The mammalian skeleton
## The Vertebral Column
The vertebral column consists of a series of bones called **vertebrae**
linked together to form a flexible column with the skull at one end and
the tail at the other. Each vertebra consists of a ring of bone with
spines (spinous process) protruding dorsally from it. The spinal cord
passes through the hole in the middle and muscles attach to the spines
making movement of the body possible (see diagram 6.2).
![](Vertebra.JPG "Vertebra.JPG")
Diagram 6.2 - Cross section of a lumbar vertebra
The shape and size of the vertebrae of mammals vary from the neck to the
tail. In the neck there are **cervical vertebrae** with the two top
ones, the **atlas** and **axis**, being specialized to support the head
and allow it to nod "Yes" and shake "No". **Thoracic vertebrae** in the
chest region have special surfaces against which the ribs move during
breathing. Grazing animals like cows and giraffes that have to support
weighty heads on long necks have extra large spines on their cervical
and thoracic vertebrae for muscles to attach to. **Lumbar vertebrae** in
the loin region are usually large strong vertebrae with prominent spines
for the attachment of the large muscles of the lower back. The **sacral
vertebrae** are usually fused into one solid bone called the **sacrum**
that sits within the **pelvic girdle**. Finally there are a variable
number of small bones in the tail called the **coccygeal vertebrae**
(see diagram 6.3).
![](Anatomy_and_physiology_of_animals_Regions_of_a_vertebral_column.svg "Anatomy_and_physiology_of_animals_Regions_of_a_vertebral_column.svg")
Diagram 6.3 - The regions of the vertebral column dik
## The Skull
The skull of mammals consists of 30 separate bones that grow together
during development to form a solid case protecting the brain and sense
organs. The "box" enclosing and protecting the brain is called the
**cranium** (see diagram 6.4). The bony wall of the cranium encloses the
middle and inner ears, protects the organs of smell in the nasal cavity
and the eyes in sockets known as **orbits**. The teeth are inserted into
the upper and lower jaws (see Chapter 5 for more on teeth). The lower jaw
is known as the **mandible**. It forms a joint with the skull and is moved
by strong muscles that allow an animal to chew. At the front of the skull
is the nasal cavity, separated from the mouth by a plate of bone called
the **palate**. Behind the nasal cavity and connecting with it are the
**sinuses**. These are air spaces in the bones of the skull which help
keep the skull as light as possible. At the base of the cranium is the
**foramen magnum**, translated as "big hole", through which the spinal
cord passes. On either side of this are two small, smooth rounded knobs
or **condyles** that **articulate** with (move against) the first or
atlas vertebra.
![](Anatomy_and_physiology_of_animals_Dogs_skull.jpg "Anatomy_and_physiology_of_animals_Dogs_skull.jpg")
Diagram 6.4 - A dog's skull
## The Ribs
Paired ribs are attached to each thoracic vertebra against which they
move in breathing. Each rib is attached ventrally either to the
**sternum** or to the rib in front by cartilage to form the rib cage
that protects the heart and lungs. In dogs one pair of ribs is not
attached ventrally at all. They are called **floating ribs** (see
diagram 6.5). Birds have a large expanded sternum called the **keel** to
which the flight muscles (the "breast" meat of a roast chicken) are
attached.
![](Anatomy_and_physiology_of_animals_Ribs.jpg "Anatomy_and_physiology_of_animals_Ribs.jpg")
Diagram 6.5 - The rib
## The Forelimb
The forelimb consists of: **Humerus, radius** and **ulna, carpals,
metacarpals, digits** or **phalanges** (see diagram 6.6). The top of the
humerus moves against (articulates with) the **scapula** at the shoulder
joint. By changing the number, size and shape of the various bones, fore
limbs have evolved to fit different ways of life. They have become wings
for flying in birds and bats, flippers for swimming in whales, seals and
porpoises, fast and efficient limbs for running in horses and arms and
hands for holding and manipulating in primates (see diagram 6.8).
![](Forelimb_dog_corrected.JPG "Forelimb_dog_corrected.JPG")
Diagram 6.6 - Forelimb of a dog
![](Hind_limb_dog_corrected.JPG "Hind_limb_dog_corrected.JPG")
Diagram 6.7. Hindlimb of a dog
![](Anatomy_and_physiology_of_animals_Various_vertebrate_limbs.jpg "Anatomy_and_physiology_of_animals_Various_vertebrate_limbs.jpg")
Diagram 6.8 - Various vertebrate limbs
![](Anatomy_and_physiology_of_animals_Forelimb_of_a_horse.jpg "Anatomy_and_physiology_of_animals_Forelimb_of_a_horse.jpg")
Diagram 6.9 - Forelimb of a horse
In the horse and other equines, the third toe is the only toe remaining
on the front and rear limbs. Each toe is made up of a proximal phalange,
a middle phalange, and a distal phalange (plus some small bones often
referred to as sesamoids). In this image, the proximal phalange is
labeled P3 and the distal phalange is labeled hoof (which is more
properly the name of the keratin covering that we see in the living
animal).
The legs of the horse are highly adapted to give it great galloping
speed over long distances. The bones of the lower leg and foot are
greatly elongated and the hooves are actually the tips of the third
fingers and toes, the other digits having been lost or reduced (see
diagram 6.9).
## The Hind Limb
The hind limbs have a similar basic pattern to the forelimb. They
consist of: **femur, tibia** and **fibula, tarsals, metatarsals,
digits** or **phalanges** (see diagram 6.7). The top of the femur moves
against (articulates with) the pelvis at the hip joint.
## The Girdles
The girdles pass on the "push" produced by the limbs to the body. The
shoulder girdle or **scapula** is a triangle of bone surrounded by the
muscles of the back but not connected directly to the spine (see diagram
6.1). This arrangement helps it to cushion the body when landing after a
leap and gives the forelimbs the flexibility to manipulate food or
strike at prey. Animals that use their forelimbs for grasping, burrowing
or climbing have a well-developed **clavicle** or collar bone. This
connects the shoulder girdle to the sternum. Animals like sheep, horses
and cows that use their forelimbs only for supporting the body and
locomotion have no clavicle. The **pelvic girdle** or hipbone connects
the hind legs to the sacrum. It transmits the force of the leg-thrust
in walking or jumping directly to the spine (see diagram 6.10).
![](Anatomy_and_physiology_of_animals_Pelvic_girdle.jpg "Anatomy_and_physiology_of_animals_Pelvic_girdle.jpg")
Diagram 6.10 - The pelvic girdle
## Categories Of Bones
People who study skeletons place the different bones of the skeleton
into groups according to their shape or the way in which they develop.
Thus we have **long bones** like the femur, radius and finger bones,
**short bones** like the ones of the wrist and ankle, **irregular
bones** like the vertebrae and **flat bones** like the shoulder blade
and bones of the skull. Finally there are bones that develop in tissue
separated from the main skeleton. These include **sesamoid bones** which
include bones like the patella or kneecap that develop in tendons and
**visceral bones** that develop in the soft tissue of the penis of the
dog and the cow's heart.
## Bird Skeletons
Although the skeleton of birds is made up of the same bones as that of
mammals, many are highly adapted for flight. The most noticeable
difference is that the bones of the forelimbs are elongated to act as
wings. The large flight muscles make up as much as 1/5th of the body
weight and are attached to an extension of the sternum called the
**keel**. The vertebrae of the lower back are fused to provide the
rigidity needed to produce flying movements. There are also many
adaptations to reduce the weight of the skeleton. For instance birds
have a beak rather than teeth and many of the bones are hollow (see
diagram 6.11).
![](Anatomy_and_physiology_of_animals_Birds_skeleton.jpg "Anatomy_and_physiology_of_animals_Birds_skeleton.jpg")
Diagram 6.11 - A bird's skeleton
## The Structure Of Long Bones
A long bone consists of a central portion or **shaft** and two ends
called **epiphyses** (see diagram 6.12). Long bones move against or
articulate with other bones at joints and their ends have flattened
surfaces and rounded protuberances (condyles) to make this possible. If
you carefully examine a long bone you may also see raised or rough
surfaces. This is where the muscles that move the bones are attached.
You will also see holes (a hole is called a **foramen**) in the bone.
Blood vessels and nerves pass into the bone through these. You may also
be able to see a fine line at each end of the bone. This is called the
**growth plate** or **epiphyseal line** and marks the place where
increase in length of the bone occurred (see diagram 6.16).
![](Anatomy_and_physiology_of_animals_Femur.jpg "Anatomy_and_physiology_of_animals_Femur.jpg")
Diagram 6.12 - A femur
![](Anatomy_and_physiology_of_animals_l-s_section_long_bone.jpg "Anatomy_and_physiology_of_animals_l-s_section_long_bone.jpg")
6.13 - A longitudinal section through a long bone
If you cut a long bone lengthways you will see it consists of a hollow
cylinder (see diagram 6.13). The outer shell is covered by a tough
fibrous sheath to which the tendons are attached. Under this is a layer
of hard, dense **compact bone** (see below). This gives the bone its
strength. The central cavity contains fatty **yellow marrow**, an
important energy store for the body, and the ends are made from
honeycomb-like bony material called **spongy bone** (see box below).
Spongy bone contains **red marrow** where red blood cells are made.
## Compact Bone
Compact bone is not the lifeless material it may appear at first glance.
It is a living dynamic tissue with blood vessels, nerves and living
cells that continually rebuild and reshape the bone structure as a
result of the stresses, bends and breaks it experiences. Compact bone is
composed of microscopic hollow cylinders that run parallel to each other
along the length of the bone. Each of these cylinders is called a
**Haversian system**. Blood vessels and nerves run along the central
canal of each Haversian system. Each system consists of concentric rings
of bone material (the **matrix**) with minute spaces in it that hold the
bone cells. The hard matrix contains crystals of calcium phosphate,
calcium carbonate and magnesium salts with collagen fibres that make the
bone stronger and somewhat flexible. Tiny canals connect the cells with
each other and their blood supply (see diagram 6.14).
![](Anatomy_and_physiology_of_animals_Haversian_system_compact_bone.jpg "Anatomy_and_physiology_of_animals_Haversian_system_compact_bone.jpg")
Diagram 6.14 - Haversian systems of compact bone
## Spongy Bone
Spongy bone gives bones lightness with strength. It consists of an
irregular lattice that looks just like an old fashioned loofah sponge
(see diagram 6.15). It is found on the ends of long bones and makes up
most of the bone tissue of the limb girdles, ribs, sternum, vertebrae
and skull. The spaces contain red marrow, which is where red blood cells
are made and stored.
![](Anatomy_and_physiology_of_animals_Spongy_bone.jpg "Anatomy_and_physiology_of_animals_Spongy_bone.jpg")
Diagram 6.15 - Spongy bone
## Bone Growth
The skeleton starts off in the foetus as either cartilage or fibrous
connective tissue. Before birth and, sometimes for years after it, the
cartilage is gradually replaced by bone. The long bones increase in
length at the ends at an area known as the **epiphyseal plate** where
new cartilage is laid down and then gradually converted to bone. When an
animal is mature, bone growth ceases and the epiphyseal plate converts
into a fine **epiphyseal line** (see diagram 6.16).
![](Anatomy_and_physiology_of_animals_Growing_bone.jpg "Anatomy_and_physiology_of_animals_Growing_bone.jpg")
Diagram 6.16 - A growing bone
## Broken Bones
A fracture or break dramatically demonstrates the dynamic nature of
bone. Soon after the break occurs blood pours into the site and
cartilage is deposited. This starts to connect the broken ends together.
Later spongy bone replaces the cartilage, which is itself replaced by
compact bone. Partial healing to the point where some weight can be put
on the bone can take place in 6 weeks but complete healing may take 3--4
months.
## Joints
Joints are the structures in the skeleton where 2 or more bones meet.
There are several different types of joints. Some are **immovable** once
the animal has reached maturity. Examples of these are those between the
bones of the skull and the midline joint of the pelvic girdle. Some are
**slightly moveable** like the joints between the vertebrae but most
joints allow free movement and have a typical structure with a fluid
filled cavity separating the articulating surfaces (surfaces that move
against each other) of the two bones. This kind of joint is called a
**synovial joint** (see diagram 6.17). The joint is held together by
bundles of white fibrous tissue called **ligaments** and a fibrous
**capsule** encloses the joint. The inner layers of this capsule secrete
the **synovial fluid** that acts as a lubricant. The articulating
surfaces of the bones are covered with **cartilage** that also reduces
friction and some joints, e.g. the knee, have a pad of cartilage between
the surfaces that articulate with each other.
The shape of the articulating bones in a joint and the arrangement of
ligaments determine the kind of movement made by the joint. Some joints
only allow a to-and-fro **gliding movement**, e.g. between the ankle and
wrist bones; the joints at the elbow, knee and fingers are **hinge
joints** and allow movement in two dimensions; and the axis vertebra
**pivots** on the atlas vertebra. **Ball and socket joints**, like those
at the shoulder and hip, allow the greatest range of movement.
![](Anatomy_and_physiology_of_animals_Synovial_joint.jpg "Anatomy_and_physiology_of_animals_Synovial_joint.jpg")
Diagram 6.17 - A synovial joint
## Common Names Of Joints
Some joints in animals are given common names that tend to be confusing.
For example:
:# The joint between the femur and the tibia on the hind leg is our knee
but the **stifle** in animals.
:# Our ankle joint (between the tarsals and metatarsals) is the **hock**
in animals
:# Our knuckle joint (between the metacarpals or metatarsals and the
phalanges) is the **fetlock** in the horse.
:# The **"knee"** on the horse is equivalent to our wrist (ie on the
front limb between the radius and metacarpals) see diagrams 6.6, 6.7,
6.8, 6.17 and 6.18.
![](Anatomy_and_physiology_of_animals_Common_horse_joints.jpg "Anatomy_and_physiology_of_animals_Common_horse_joints.jpg")
Diagram 6.18 - The names of common joints of a horse
![](Anatomy_and_physiology_of_animals_Common_dog_joints.jpg "Anatomy_and_physiology_of_animals_Common_dog_joints.jpg")
Diagram 6.19 - The names of common joints of a dog
## Locomotion
Different animals place different parts of the foot or forelimb on the
ground when walking or running.
Humans and bears put the whole surface of the foot on the ground when
they walk. This is known as **plantigrade locomotion**. Dogs and cats
walk on their toes (**digitigrade locomotion**) while horses and pigs
walk on their "toenails" or hoofs. This is called **unguligrade
locomotion** (see diagram 6.20).
:# **Plantigrade locomotion** (on the "palms of the hand") as in humans
and bears
:# **Digitigrade locomotion** (on the "fingers") as in cats and dogs
:# **Unguligrade locomotion** (on the "fingernails") as in horses
![](Anatomy_and_physiology_of_animals_Locomotion.jpg "Anatomy_and_physiology_of_animals_Locomotion.jpg")
Diagram 6.20 - Locomotion
## Summary
- The skeleton maintains the shape of the body, protects internal
organs and makes locomotion possible.
- The **vertebrae** support the body and protect the spinal cord. They
consist of: **cervical vertebrae** in the neck, **thoracic
vertebrae** in the chest region which articulate with the ribs,
**lumbar vertebrae** in the loin region, **sacral vertebrae** fused
to the pelvis to form the sacrum and **tail** or **coccygeal
vertebrae**.
- The **skull** protects the brain and sense organs. The **cranium**
forms a solid box enclosing the brain. The **mandible** forms the
jaw.
- The forelimb consists of the **humerus, radius, ulna, carpals,
metacarpals** and **phalanges**. It moves against or **articulates**
with the **scapula** at the shoulder joint.
- The hindlimb consists of the **femur, patella, tibia, fibula,
tarsals, metatarsals** and **digits**. It moves against or
articulates with the **pelvis** at the hip joint.
- Bones articulate against each other at **joints**.
- **Compact bone** in the shaft of long bones gives them their
strength. **Spongy bone** at the ends reduces weight. Bone growth
occurs at the **growth plate**.
## Worksheet
Use the Skeleton
Worksheet to learn the
main parts of the skeleton.
## Test Yourself
1\. Name the bones which move against (articulate with)\...
: a\) the humerus
: b\) the thoracic vertebrae
: c\) the pelvis
2\. Name the bones in the forelimb
3\. Where is the patella found?
4\. Where are the following joints located?
: a\) The stifle joint:
: b\) The hock joint
: c\) The hip joint:
5\. Attach the following labels to the diagram of the long bone shown
below.
: a\) compact bone
: b\) spongy bone
: c\) growth plate
: d\) fibrous sheath
: e\) red marrow
: f\) blood vessel
![](Section_through_long_bone.JPG "Section_through_long_bone.JPG")
6\. Attach the following labels to the diagram of a joint shown below
: a\) bone
: b\) articular cartilage
: c\) joint cavity
: d\) capsule
: e\) ligament
: f\) synovial fluid.
![](Joint_no_labels.JPG "Joint_no_labels.JPG")
/Test Yourself Answers/
## Websites
- <http://www.infovisual.info/02/056_en.html> Bird skeleton
A good diagram of the bird skeleton.
- <http://www.earthlife.net/mammals/skeleton.html> Earth life
A great introduction to the mammalian skeleton. A little above the level
required but it has so much interesting information it\'s worth reading
it.
- <http://www.klbschool.org.uk/interactive/science/skeleton.htm> The
human skeleton
Test yourself on the names of the bones of the (human) skeleton.
- <http://www.shockfamily.net/skeleton/JOINTS.HTML> The joints
Quite a good article on the different kinds of joints with diagrams.
- <http://en.wikipedia.org/wiki/Bone> Wikipedia
Wikipedia is disappointing where the skeleton is concerned. Most
articles stick entirely to the human skeleton or have far too much
detail. However this one on compact and spongy bone and the growth of
bone is quite good although still much above the level required.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Muscles
Original image by eclecticblogs, CC BY.
## Objectives
After completing this section, you should know:
- The structure of smooth, cardiac and skeletal muscle and where they
are found.
- What the insertion and origin of a muscle is.
- What flexion and extension of a muscle means.
- That muscles usually operate as antagonistic pairs.
- That tendons attach muscles to bones.
## Muscles
Muscles make up the bulk of an animal's body and account for about half
its weight. The meat on the chop or roast is muscle and is composed
mainly of protein. The cells that make up muscle tissue are elongated
and able to contract to a half or even a third of their length when at
rest. There are three different kinds of muscle based on appearance and
function: smooth, cardiac and skeletal muscle.
### Types of Muscle
- Smooth muscle
Smooth or Involuntary muscle carries out the unconscious routine tasks
of the body such as moving food down the digestive system, keeping the
eyes in focus and adjusting the diameter of blood vessels. The
individual cells are spindle-shaped, being fatter in the middle and
tapering off towards the ends with a nucleus in the centre of the cell.
They are usually found in sheets and are stimulated by the non-conscious
or autonomic nervous system as well as by hormones (see Chapter 3).
- Cardiac muscle
Cardiac muscle is only found in the wall of the heart. It is composed of
branching fibres that form a three-dimensional network. When examined
under the microscope, a central nucleus and faint stripes or striations
can be seen in the cells. Cardiac muscle cells contract spontaneously
and rhythmically without outside stimulation, but the sinoatrial node
(natural pacemaker) coordinates the heart beat. Nerves and hormones
modify this rhythm (see Chapter 3).
- Skeletal muscle
Skeletal muscle is the muscle that is attached to and moves the
skeleton, and is under voluntary control. It is composed of elongated
cells or fibres lying parallel to each other. Each cell is unusual in
that it has several nuclei and when examined under the microscope
appears striped or striated. This appearance gives the muscle its names
of striped or striated muscle. Each cell of striated muscle contains
hundreds, or even thousands, of microscopic fibres each one with its own
striped appearance. The stripes are formed by two different sorts of
protein that slide over each other making the cell contract (see diagram
7.1).
![](Anatomy_and_physiology_of_animals_Striped_muscle_cell.jpg "Anatomy_and_physiology_of_animals_Striped_muscle_cell.jpg")
Diagram 7.1 - A striped muscle cell
### Muscle contraction
Muscle contraction requires energy and muscle cells have numerous
mitochondria. However, only about 15% of the energy released by the
mitochondria is used to fuel muscle contraction. The rest is released as
heat. This is why exercise increases body temperature and makes animals
sweat or pant to rid themselves of this heat as part of
thermoregulation.
What we refer to as a muscle is made up of groups of muscle fibres
surrounded by connective tissue. The connective tissue sheaths join
together at the ends of the muscle to form tough white bands of fibre
called **tendons**. These attach the muscles to the bones. Tendons are
similar in structure to the **ligaments** that attach bones together
across a joint (see diagrams 7.2a and b).
![](_Anatomy_and_physiology_of_animals_Structure_of_a_muscle.jpg "_Anatomy_and_physiology_of_animals_Structure_of_a_muscle.jpg")
Diagram 7.2 a and b - The structure of a muscle
Remember:
**Tendons Tie** muscles to bones
:
: and
**Ligaments Link** bones at joints
### Structure of a muscle
A single muscle is fat in the middle and tapers towards the ends. The
middle part, which gets fatter when the muscle contracts, is called the
**belly** of the muscle. If you contract your biceps muscle in your
upper arm you may feel it getting fatter in the middle. You may also
notice that the biceps is attached at its top end to bones in your
shoulder while at the bottom it is attached to bones in your lower arm.
Notice that the bones at only one end move when you contract the biceps.
This end of the muscle is called the **insertion**. The other end of the
muscle, the **origin**, is attached to the bone that moves the least
(see diagram 7.3).
![](Anatomy_and_physiology_of_animals_Antagonistic_muscles,_flexion&tension.jpg "Anatomy_and_physiology_of_animals_Antagonistic_muscles,_flexion&tension.jpg")
Diagram 7.3 - Antagonistic muscles, flexion and extension
### Antagonistic muscles
Skeletal muscles usually work in pairs. When one contracts the other
relaxes and vice versa. Pairs of muscles that work like this are called
**antagonistic muscles**. For example, the muscles of the upper arm that
move the elbow are the biceps and triceps (see diagram 7.3). When the
biceps contracts (and the triceps relaxes) the forearm is raised and the
angle of the joint is reduced. This kind of movement is called
**flexion**. When the triceps contracts (and the biceps relaxes), the
angle of the elbow increases. The term for this movement is **extension**.
When you or animals contract skeletal muscle it is a voluntary action.
For example, you make a conscious decision to walk across the room,
raise the spoon to your mouth, or smile. There is however, another way
in which contraction of muscles attached to the skeleton happens that is
not under voluntary control. This is during a **reflex action**, such as
jerking your hand away from the hot stove you have touched by accident.
This is called a **reflex arc** and will be described in detail in
chapters 14 and 15.
## Summary
- There are three different kinds of muscle tissue: **smooth muscle**
in the walls of the gut and blood
: vessels; **cardiac muscle** in the heart and **skeletal muscle**
attached to the skeleton.
- **Tendons** attach skeletal muscles to the skeleton.
- **Ligaments** link bones together at a joint.
- Skeletal muscles work in pairs known as **antagonistic pairs.** As
one contracts the other in the
: pair relaxes.
- **Flexion** is the movement that reduces the angle of a joint.
**Extension** increases the angle
: of a joint.
## Test Yourself
1\. What kind of muscle tissue:
: a\) moves bones
: b\) makes the heart pump blood:
: c\) pushes food along the intestine:
: d\) makes your mouth form a smile:
: e\) makes the hair stand up when cold:
: f\) makes the diaphragm contract for breathing in:
2\. What structure connects a muscle to a bone?
3\. What is the insertion of a muscle?
4\. Which muscle is antagonistic to the biceps?
5\. Name 3 other antagonistic pairs and tell what they do.
6\. When you bend your knee what movement are you making?
7\. When you straighten your ankle joint what movement happens?
8\. What organelles provide the energy that muscles need?
9\. State the difference between a tendon and a ligament.
10\. In the section \"Skeletal Muscle\" there are 2 proteins mentioned.
Name these proteins, state their size difference, and tell what they
actually do to help produce movement.
## Website
- <http://health.howstuffworks.com/muscle.htm> How muscles work
Description of the three types of muscles and how skeletal muscles work.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Respiratory System
Original image by Zofia P, CC BY.
## Objectives
After completing this section, you should know:
- why animals need energy and how they make it in cells
- why animals require oxygen and need to get rid of carbon dioxide
- what the term gas exchange means
- the structure of alveoli and how oxygen and carbon dioxide pass
across their walls
- how oxygen and carbon dioxide are carried in the blood
- the route air takes in the respiratory system (i.e. the nose,
pharynx, larynx, trachea, bronchus,
: bronchioles, alveoli)
- the movements of the ribs and diaphragm to bring about inspiration
- what tidal volume, minute volume and vital capacity are
- how the rate of breathing is controlled and how this helps regulate
the acid-base balance of the blood
## Overview
**Diagram 9.1**: Alveoli with blood supply
Animals require a supply of energy to survive. This energy is needed to
build large molecules like proteins and glycogen, make the structures in
cells, move chemicals through membranes and around cells, contract
muscles, transmit nerve impulses and keep the body warm. Animals get
their energy from the large molecules that they eat as food. Glucose is
often the energy source but it may also come from other carbohydrates,
as well as fats and protein. The energy is made by the biochemical
process known as **cellular respiration** that takes place in the
**mitochondria** inside every living cell.
The overall reaction can be summarised by the word equation given below.
**Carbohydrate Food (glucose) + Oxygen = Carbon Dioxide + Water +
energy**
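The same reaction can be written as a balanced chemical equation. The version below assumes glucose is the carbohydrate, as in the text; the chemical formulae are standard chemistry rather than something given in this chapter.

```latex
% Cellular respiration with glucose as the fuel
\[
C_{6}H_{12}O_{6} + 6\,O_{2} \longrightarrow 6\,CO_{2} + 6\,H_{2}O
  + \text{energy (captured as ATP)}
\]
```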
As you can see from this equation, the cells need to be supplied with
**oxygen** and **glucose** and the waste product, **carbon dioxide**,
which is poisonous to cells, needs to be removed. The way the digestive
system provides the glucose for cellular respiration will be described
in Chapter 11 (\"The Gut and Digestion\"), but here we are only
concerned with the two gases, oxygen and carbon dioxide, that are
involved in cellular respiration. These gases are carried in the blood
to and from the tissues where they are required or produced.
Oxygen enters the body from the air (or water in fish) and carbon dioxide
is usually eliminated from the same part of the body. This process is
called **gas exchange**. In fish, gas exchange occurs in the gills; in
land-dwelling vertebrates, the lungs are the gas exchange organs; and
frogs use gills when they are tadpoles and the lungs, mouth and skin when
adults.
Mammals (and birds) are active and have relatively high body
temperatures so they require large amounts of oxygen to provide
sufficient energy through cellular respiration. In order to take in
enough oxygen and release all the carbon dioxide produced they need a
very large surface area over which gas exchange can take place. The many
minute air sacs or **alveoli** of the lungs provide this. When you look
at these under the microscope they appear rather like bunches of grapes
covered with a dense network of fine capillaries (see diagram 9.1). A
thin layer of water covers the inner surface of each alveolus. There is
only a very small distance - just two layers of thin cells - between the
air in the alveoli and the blood in the capillaries. The gases pass
across this gap by **diffusion**.
## Diffusion And Transport Of Oxygen
**Diagram 9.2**: Cross section of an alveolus
The air in the alveoli is rich in oxygen while the blood in the
capillaries around the alveoli is deoxygenated. This is because the
haemoglobin in the red blood cells has released all the oxygen it has
been carrying to the cells of the body. Oxygen diffuses from high
concentration to low concentration. It therefore crosses the narrow
barrier between the alveoli and the capillaries to enter the blood and
combine with the haemoglobin in the red blood cells to form
**oxyhaemoglobin**.
The narrow diameter of the capillaries around the alveoli means that the
blood flow is slowed down and that the red cells are squeezed against
the capillary walls. Both of these factors help the oxygen diffuse into
the blood (see diagram 9.2).
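These factors (large surface area, a very thin barrier and a steep concentration difference) are often summarised by Fick's law of diffusion. The approximate form below is a standard physiology expression and is not stated in this chapter; it is included only to show how the three factors combine.

```latex
% Fick's law (approximate form): the rate at which a gas crosses the
% alveolar wall rises with the surface area A and the concentration
% difference (C1 - C2), and falls as the barrier thickness d increases.
\[
\text{rate of diffusion} \;\propto\; \frac{A \times (C_{1} - C_{2})}{d}
\]
% Millions of alveoli give a huge A, the two-cell barrier keeps d tiny,
% and continuous blood flow maintains a large concentration difference.
```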
When the blood reaches the capillaries of the tissues the oxygen splits
from the haemoglobin molecule. It then diffuses into the tissue fluid
and then into the cells.
## Diffusion And Transport Of Carbon Dioxide
Blood entering the lung capillaries is full of carbon dioxide that it
has collected from the tissues. Most of the carbon dioxide is dissolved
in the plasma either in the form of **sodium bicarbonate** or **carbonic
acid**. A little is transported by the red blood cells. As the blood
enters the lungs the carbon dioxide gas diffuses through the capillary
and alveoli walls into the water film and then into the alveoli. Finally
it is removed from the lungs during breathing out (see diagram 9.2).
(See chapter 8 for more information about how oxygen and carbon dioxide
are carried in the blood).
## The Air Passages
When air is breathed in it passes from the nose to the alveoli of the
lungs down a series of tubes (see diagram 9.3). After entering the nose
the air passes through the **nasal cavity**, which is lined with a moist
membrane that adds warmth and moisture to the air as it passes. The air
then flows through the **pharynx** or throat, a passage that carries
both food and air, to the **larynx** where the voice-box is located.
Here the passages for food and air separate again. Food must pass into
the oesophagus and the air into the windpipe or **trachea**. To prevent
food entering this, a small flap of tissue called the **epiglottis**
closes the opening during swallowing (see chapter 11). A reflex that
inhibits breathing during swallowing also (usually) prevents choking on
food.
The trachea is the tube that conducts the air down towards the lungs. Incomplete
rings of cartilage in its walls help keep it open even when the neck is
bent and head turned. The fact that acrobats and people that tie
themselves in knots doing yoga still keep breathing during the most
contorted manoeuvres shows how effective this arrangement is. The air
passage now divides into the two **bronchi** that take the air to the
right and left lungs before dividing into smaller and smaller
**bronchioles** that spread throughout the lungs to carry air to the
alveoli. Smooth muscles in the walls of the bronchi and bronchioles
adjust the diameter of the air passages.
The tissue lining the respiratory passages produces **mucus** and is
covered with miniature hairs or **cilia**. Any dust that is breathed
into the respiratory system immediately gets entangled in the mucus and
the cilia move it towards the mouth or nose where it can be coughed up
or blown out.
## The Lungs And The Pleural Cavities
**Diagram 9.3**: The respiratory system
The lungs fill most of the chest or **thoracic cavity**, which is
completely separated from the abdominal cavity by the **diaphragm**. The
lungs and the spaces in which they lie (called the **pleural cavities**)
are covered with membranes called the **pleura**. There is a thin film
of fluid between the two membranes. This lubricates them as they move
over each other during breathing movements.
### Collapsed Lungs
The pleural cavities are completely airtight with no connection with the
outside and if they are punctured by accident (a broken rib will often
do this), air rushes in and the lung collapses. Separating the two lungs
is a region of tissue that contains the oesophagus, trachea, aorta, vena
cava and lymph nodes. This is called the **mediastinum**. In humans and
sheep it separates the cavity completely so that puncturing one pleural
cavity leads to the collapse of only one lung. In dogs, however, this
separation is incomplete so a puncture results in a complete collapse of
both lungs.
## Breathing
**Diagram 9.4a**: Inspiration; **Diagram 9.4b**: Expiration
The process of breathing moves air in and out of the lungs. Sometimes
this process is called **respiration** but it is important not to
confuse it with the chemical process, **cellular respiration**, that
takes place in the mitochondria of cells. Breathing is brought about by
the movement of the diaphragm and the ribs.
### Inspiration
The diaphragm is a thin sheet of muscle that completely separates the
abdominal and thoracic cavities. When at rest it domes up into the
thoracic cavity but during breathing in or **inspiration** it flattens.
At the same time special muscles in the chest wall (external intercostal
muscles) move the ribs forwards and outwards. These movements of both
the diaphragm and the ribs cause the volume of the thorax to increase.
Because the pleural cavities are airtight, the lungs expand to fill this
increased space and air is drawn down the trachea into the lungs (see
diagram 9.4a).
### Expiration
**Expiration** or breathing out consists of the opposite movements. The
ribs move down and in and the diaphragm resumes its domed shape so the
air is expelled (see diagram 9.4b). Expiration is usually passive and no
energy is required (unless you are blowing up a balloon).
### Lung Volumes
**Diagram 9.5**: Lung volumes
As you sit here reading this just pay attention to your breathing.
Notice that your in and out breaths are really quite small and gentle
(unless you have just rushed here from somewhere else!). Only a small
amount of the total volume that your lungs hold is breathed in and out
with each breath. This kind of gentle "at rest" breathing is called
**tidal breathing** and the volume breathed in or out (they should be
the same) is the **tidal volume** (see diagram 9.5). Sometimes people
want to measure the volume of air inspired or expired during a minute of
this normal breathing. This is called the **minute volume**. It could be
estimated by measuring the volume of one tidal breath and then
multiplying that by the number of breaths in a minute. Of course it is
possible to take a deep breath and breathe in as far as you can and then
expire as far as possible. The volume of the air expired when a maximum
expiration follows a maximum inspiration is called the **vital
capacity** (see diagram 9.5).
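A worked example may help; the tidal volume and breathing rate used below are illustrative resting values for a human, not figures taken from this chapter.

```latex
% Minute volume = tidal volume x number of breaths per minute
\[
\text{minute volume} = \text{tidal volume} \times \text{breaths per minute}
\]
% Illustrative resting values: tidal volume 0.5 L, 12 breaths per minute
\[
0.5\ \text{L} \times 12\ \text{breaths/min} = 6\ \text{L/min}
\]
```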
### Composition Of Air
The air animals breathe in consists of 21% oxygen and 0.04% carbon
dioxide. Expelled air consists of 16% oxygen and 4.4% carbon dioxide.
This means that the lungs remove only a quarter of the oxygen contained
in the air. This is why it is possible to give someone (or an animal)
artificial respiration by blowing expired air into their mouth.
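The "quarter" figure follows directly from the percentages above:

```latex
% Fraction of the inhaled oxygen that is actually absorbed
\[
\frac{21\% - 16\%}{21\%} = \frac{5}{21} \approx 0.24 \approx \tfrac{1}{4}
\]
% Roughly three quarters of the oxygen is breathed straight back out,
% which is why expired air can still support artificial respiration.
```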
Breathing is usually an unconscious activity that takes place whether
you are awake or asleep, although humans, at least, can also control it
consciously. Two regions in the hindbrain called the **medulla
oblongata** and **pons** control the rate of breathing. These are called
**respiratory centres**. They respond to the concentration of carbon
dioxide in the blood. When this concentration rises during a bout of
activity, for example, nerve impulses are automatically sent to the
diaphragm and rib muscles that increase the rate and the depth of
breathing. Increasing the rate of breathing also increases the amount of
oxygen in the blood to meet the needs of this increased activity.
### The Acidity Of The Blood And Breathing
The degree of acidity of the blood (the **acid-base balance)** is
critical for normal functioning of cells and the body as a whole. For
example, blood that is too acidic or alkaline can seriously affect nerve
function causing a coma, muscle spasms, convulsions and even death.
Carbon dioxide carried in the blood makes the blood acidic and the
higher the concentration of carbon dioxide the more acidic it is. This
is obviously dangerous so there are various mechanisms in the body that
bring the acid-base balance back within the normal range. Breathing is
one of these homeostatic mechanisms. By increasing the rate of breathing
the animal increases the amount of dissolved carbon dioxide that is
expelled from the blood. This reduces the acidity of the blood.
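The chemistry behind this is the carbonic acid equilibrium. The equation below is standard chemistry rather than something stated in this chapter, but it shows why removing carbon dioxide makes the blood less acidic.

```latex
% Dissolved carbon dioxide forms carbonic acid, which dissociates into
% hydrogen ions (acidity) and bicarbonate ions.
\[
CO_{2} + H_{2}O \;\rightleftharpoons\; H_{2}CO_{3}
  \;\rightleftharpoons\; H^{+} + HCO_{3}^{-}
\]
% Breathing off more CO2 pulls the equilibrium to the left, lowering the
% H+ concentration and so reducing the acidity of the blood.
```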
### Breathing In Birds
Birds have a unique respiratory system that enables them to respire at
the very high rates necessary for flight. The lungs are relatively solid
structures that do not change shape and size in the same way as
mammalian lungs do. Tubes run through them and connect with a series of
air sacs situated in the thoracic and abdominal body cavities and some
of the bones. Movements of the ribs and breastbone or sternum expand and
compress these air sacs so they act rather like bellows and pump air
through the lungs. The evolution of this extremely efficient system of
breathing has enabled birds to migrate vast distances and fly at
altitudes higher than the summit of Everest.
## Summary
- Animals need to breathe to supply the cells with **oxygen** and
remove the waste product **carbon dioxide**.
- The lungs are situated in the **pleural cavities** of the
**thorax**.
- **Gas exchange** occurs in the **alveoli** of the lungs that provide
a large surface area. Here oxygen diffuses from the alveoli into the
red blood cells in the capillaries that surround the alveoli. Carbon
dioxide, at high concentration in the blood, diffuses into the
alveoli to be breathed out.
- **Inspiration** occurs when muscle contraction causes the ribs to
move up and out and the diaphragm to flatten. These movements
increase the volume of the pleural cavity and draw air down the
respiratory system into the lungs.
- The air enters the nasal cavity and passes to the **pharynx** and
**larynx** where the **epiglottis** closes the opening to the lungs
during swallowing. The air then passes down the trachea, kept open by
rings of cartilage, to the **bronchi** and **bronchioles** and then
to the alveoli.
- **Expiration** is a passive process requiring no energy as it relies
on the relaxation of the muscles and recoil of the elastic tissue of
the lungs.
- The rate of breathing is determined by the concentration of carbon
dioxide in the blood. As carbon dioxide makes blood acidic, the rate
of breathing helps control the **acid/base balance** of the blood.
- The cells lining the respiratory passages produce mucus which traps
dust particles, which are wafted into the nose by cilia.
## Worksheet
Work through the Respiratory System
Worksheet to
learn the main structures of the respiratory system and how they
contribute to inspiration and gas exchange.
## Test Yourself
Then use the Test Yourself below to see how much you remember and
understand.
1\. What is meant by the phrase "gas exchange"?
2\. Where does gas exchange take place?
3\. What is the process by which oxygen moves from the alveoli into the
blood?
4\. Why does this process occur?
5\. How does the structure of the alveoli make gas exchange efficient?
6\. How is oxygen carried in the blood?
7\. List the structures that air passes on its way from the nose to the
alveoli:
8\. What is the function of the mucus and cilia lining the respiratory
passages?
9\. How do movements of the ribs and diaphragm bring about inspiration?
Circle the correct statement below.
: a\) The diaphragm domes up into the thorax and ribs move in and down
: b\) The diaphragm flattens and ribs move up and out
: c\) The diaphragm domes up into the thorax and the ribs move up and
out.
: d\) The diaphragm flattens and the ribs move in and down
10\. What is the function of the epiglottis?
11\. What controls the rate of breathing?
/Test Yourself Answers/
## Websites
- <http://www.biotopics.co.uk/humans/resyst.html> Bio topics
A good interactive explanation of breathing and gas exchange in humans
with diagrams to label, animations to watch and questions to answer.
- <http://www.schoolscience.co.uk/content/4/biology/abpi/asthma/asth3.html>
School Science
Although this is of the human respiratory system there is a good diagram
that gives the functions of the various parts as you move your mouse
over it. Also an animation of gas exchange and a quiz to test your
understanding of it.
- <http://en.wikipedia.org/wiki/Lung> Wikipedia
Wikipedia on the lungs. Lots of good information on the human
respiratory system with all sorts of links if you are interested.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Lymphatic System
Original image by Toms Bauģis, CC BY.
## Objectives
After completing this section, you should know:
- the function of the lymphatic system
- what the terms tissue fluid, lymph, lymphocyte and lymphatic mean
- how lymph is formed and what is in it
- the basic structure and function of a lymph node and the position of
some important lymph nodes in the body
- the route by which lymph circulates in the body and is returned to
the blood system
- the location and function of the spleen, thymus and lacteals
## Lymphatic System
When **tissue fluid** enters the small blind-ended **lymphatic
capillaries** that form a network between the cells it becomes
**lymph**. Lymph is a clear watery fluid that is very similar to blood
plasma except that it contains large numbers of white blood cells,
mostly **lymphocytes**. It also contains protein, cellular debris,
foreign particles and bacteria. Lymph that comes from the intestines
also contains many fat globules following the absorption of fat from the
digested food into the lymphatics (**lacteals**) of the villi (see
chapter 11 for more on these). From the lymph capillaries the lymph
flows into larger tubes called **lymphatic vessels.** These carry the
lymph back to join the blood circulation (see diagrams 10.1 and 10.2).
![](Anatomy_and_physiology_of_animals_Capillary_bed_with_lymphatic_capilaries.jpg "Anatomy_and_physiology_of_animals_Capillary_bed_with_lymphatic_capilaries.jpg")
Diagram 10.1 - A capillary bed with lymphatic capillaries
### Lymphatic vessels
Lymphatic vessels have several similarities to veins. Both are thin
walled and return fluid to the right hand side of the heart. The
movement of the fluid in both is brought about by the contraction of the
muscles that surround them and both have valves to prevent backflow. One
important difference is that lymph passes through at least one **lymph
node** or gland before it reaches the blood system (see diagram 10.2).
These filter out used cell parts, cancer cells and bacteria and help
defend the body from infection.
Lymph nodes are of various sizes and shapes and are found throughout the
body; the more important ones are shown in diagram 10.3. They consist
of lymph tissue surrounded by a fibrous sheath. Lymph flows into them
through a number of incoming vessels. It then trickles through small
channels where white cells called **macrophages** (derived from
**monocytes**) remove the bacteria and debris by engulfing and digesting
them (see diagram 10.4). The lymph then leaves the lymph nodes through
outgoing vessels to continue its journey towards the heart where it
rejoins the blood circulation (see diagrams 10.2 and 10.3).
![](Anatomy_and_physiology_of_animals_Lymphatic_system.jpg "Anatomy_and_physiology_of_animals_Lymphatic_system.jpg")
Diagram 10.2 - The lymphatic system
![](Anatomy_and_physiology_of_animals_Circulation_of_lymph_w_major_lymph_nodes.jpg "Anatomy_and_physiology_of_animals_Circulation_of_lymph_w_major_lymph_nodes.jpg")
Diagram 10.3 - The circulation of lymph with major lymph nodes
![](Anatomy_and_physiology_of_animals_Lymph_node.jpg "Anatomy_and_physiology_of_animals_Lymph_node.jpg")
Diagram 10.4 - A lymph node
As well as filtering the lymph, lymph nodes produce the white cells
known as **lymphocytes**. Lymphocytes are also produced by the
**thymus**, **spleen** and **bone marrow**. There are two kinds of
lymphocyte. The first kind attacks invading micro-organisms directly,
while the other produces **antibodies** that circulate in the blood and
attack them.
The function of the lymphatic system can therefore be summarized as
transport and defense. It is important for returning the fluid and
proteins that have escaped from the blood capillaries to the blood
system and is also responsible for picking up the products of fat
digestion in the small intestine. Its other essential function is as
part of the immune system, defending the body against infection.
### Problems with lymph nodes and the lymphatic system
During infection of the body the lymph nodes often become swollen and
tender because of their increased activity. This is what causes the
swollen 'glands' in your neck during throat infections, mumps and
tonsillitis. Sometimes the bacteria multiply in the lymph node and cause
inflammation. Cancer cells may also be carried to the lymph nodes and
then transported to other parts of the body where they may multiply to
form a secondary growth or **metastasis**. The lymphatic system may
therefore contribute to the spread of cancer. Inactivity of the muscles
surrounding the lymphatic vessels or blockage of these vessels causes
tissue fluid to 'back up' in the tissues resulting in swelling or
**oedema**.
## Other Organs Of The Lymphatic System
The **spleen** is an important part of the lymphatic system. It is a
deep red organ situated in the abdomen caudal to the stomach (see
diagram 10.3). It is composed of two different types of tissue. The
first type makes and stores lymphocytes, the cells of the immune system.
The second type of tissue destroys worn out red blood cells, breaking
down the haemoglobin into iron, which is recycled, and waste products
that are excreted. The spleen also stores red blood cells. When severe
blood loss occurs, it contracts and releases them into the circulation.
The **thymus** is a large pink organ lying just under the sternum
(breastbone) just cranial to the heart (see diagram 10.1). It has an
important function processing lymphocytes so they are capable of
recognising and attacking foreign invaders like bacteria.
Other lymph organs are the **bone marrow** of the long bones where
lymphocytes are produced and **lymph nodules**, which are like tiny
lymph nodes. Large clusters of these are found in the wall of the small
intestine (called Peyer's Patches) and in the tonsils.
## Summary
- Fluid leaks out of the thin walled capillaries as they pass through
the tissues. This is called **tissue fluid**.
- Much of the tissue fluid passes back into the capillaries. Some enters
the blind-ended lymphatic capillaries that form a network between
the cells of the tissues. This fluid is called **lymph**.
- Lymph flows from the **lymphatic capillaries** to **lymph vessels**,
passing through **lymph nodes** and along the thoracic duct to join
the blood system.
- Lymph nodes filter the lymph and produce **lymphocytes**.
- Other organs of the lymphatic system are the **spleen, thymus, bone
marrow**, and **lymph nodules**.
## Worksheets
Lymphatic System
Worksheet
## Test Yourself
1\. What is the difference between tissue fluid and lymph?
2\. By what route does lymph make its way back to join the blood of the
circulatory system?
3\. As the lymphatic system has no heart to push the lymph along what
makes it flow?
4\. What happens to the lymph as it passes through a lymph node?
5\. Where is the spleen located in the body?
6\. Where is the thymus located in the body?
7\. What is the function of lymphocytes?
/Test Yourself Answers/
## Websites
- <http://www.cancerhelp.org.uk/help/default.asp?page=117> Cancerhelp
A nice clear explanation here with great diagrams of the (human)
lymphatic system.
- <http://www.jdaross.cwc.net/lymphatics2.htm> Lymphatic system
Introduction to the Lymphatic System. A good description of lymph
circulation with an animation.
- <http://en.wikipedia.org/wiki/Lymphatic_system> Wikipedia
Good information here on the (human) lymphatic system, lymph circulation
and lymphoid organs.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/The Gut and Digestion
Original image by vnysia (CC BY)
## Objectives
After completing this section, you should know:
- what is meant by the terms: ingestion, digestion, absorption,
assimilation, egestion, peristalsis and chyme
- the characteristics, advantages and disadvantages of a herbivorous,
carnivorous and omnivorous diet
- the 4 main functions of the gut
- the parts of the gut in the order in which the food passes down
## The Gut And Digestion
Plants make organic molecules using energy from the sun, a process called **photosynthesis**. Animals rely on these
ready-made organic molecules to supply them with their food. Some
animals (herbivores) eat plants; some (carnivores) eat the herbivores.
## Herbivores
**Herbivores** eat plant material. No animal produces the digestive enzymes needed to break down the large **cellulose** molecules in plant cell walls, but micro-organisms such as bacteria can. Herbivores therefore employ these micro-organisms to do the job for them.
There are two types of herbivore:
: The first, **ruminants** like cattle, sheep and goats, house these
bacteria in a special compartment in the enlarged stomach called the
**rumen**.
: The second group has an enlarged large intestine and caecum, called
a **functional caecum**, occupied by cellulose digesting
micro-organisms. These non-ruminant herbivores include the horse,
rabbit and rat.
Plants are a good source of nutrients, but they are not digested very easily, so herbivores have to eat large quantities of food to obtain all they require. Herbivores like cows, horses and rabbits typically spend much of their day feeding. To give the micro-organisms access to the cellulose molecules, the plant cell walls need to be broken down. This is why herbivores have teeth that are adapted to crush and grind. Their guts also tend to be long, and the food takes a long time to pass through them.
Eating plants has other advantages. Plants are immobile so herbivores
normally have to spend little energy collecting them. This contrasts
with another main group of animals - the carnivores that often have to
chase their prey.
## Carnivores
**Carnivorous animals** like those in the cat and dog families, polar
bears, seals, crocodiles and birds of prey catch and eat other animals.
They often have to use large amounts of energy finding, stalking,
catching and killing their prey. However, they are rewarded by the fact
that meat provides a very concentrated source of nutrients. Carnivores
in the wild therefore tend to eat distinct meals often with long and
irregular intervals between them. Time after feeding is spent digesting
and absorbing the food.
The guts of carnivores are usually shorter and less complex than those
of herbivores because meat is easier to digest than plant material.
Carnivores usually have teeth that are specialised for dealing with
flesh, gristle and bone. They have sleek bodies, strong, sharp claws and
keen senses of smell, hearing and sight. They are also often cunning,
alert and have an aggressive nature.
## Omnivores
Many animals feed on both animal and vegetable material -- they are
**omnivorous.** There are currently two similar definitions of
omnivorism:
1\. Having the ability to derive energy from plant and animal material.
2\. Having characteristics which are optimized for acquiring and eating
both plants and animals.
Some animals fit both definitions of omnivorism, including bears,
raccoons, dogs, and hedgehogs. Their food is diverse, ranging from plant
material to animals they have either killed themselves or scavenged from
other carnivores. They are well equipped to hunt and tear flesh (claws,
sharp teeth, and a strong, non-rotational jaw hinge), but they also have
slightly longer intestines than carnivores, which has been found to
facilitate plant digestion. These animals also retain an ability to taste
amino acids, making unseasoned flesh palatable to most members of the
species.
Classically, humans and chimpanzees are classified as omnivores. However, further research has shown that chimpanzees typically consume about 95% plant matter (the remaining mass is largely termites), and their teeth, jaw hinge, stomach pH and intestinal length closely match those of herbivores, which many suggest classifies them as herbivores. Humans, conversely, have chosen to eat meat for much of the archaeological record, although their teeth, jaw hinge, stomach pH and intestinal length also closely match those of other herbivores.
The dispute over how to classify humans and chimpanzees comes down to two things. First, there is research showing that both plant-only diets and diets containing some animal foods promote health (longevity and freedom from disease) in humans. Second, well-off humans have often chosen to eat meat and dairy products throughout written history, which some argue shows that we prefer meat and dairy by latent instinct.
Per the classical definition, omnivores lack the specialised teeth and guts of carnivores and herbivores, but they are often highly intelligent and adaptable, reflecting their varied diet.
## Treatment Of Food
Whether an animal eats plants or flesh, the **carbohydrates**, **fats**
and **proteins** in the food it eats are generally giant molecules (see
chapter 1). These need to be split up into smaller ones before they can
pass into the blood and enter the cells to be used for energy or to make
new cell constituents.
For example:
: **Carbohydrates** like cellulose, starch, and glycogen need to be
split into **glucose** and other **monosaccharides**;
: **Proteins** need to be split into **amino acids**;
: **Fats** or **lipids** need to be split into **fatty acids** and
**glycerol**.
## The Gut
The **digestive tract, alimentary canal** or **gut** is a hollow tube
stretching from the mouth to the anus. It is the organ system concerned
with the treatment of foods.
At the mouth the large food molecules are taken into the gut - this is
called **ingestion**. They must then be broken down into smaller ones by
digestive enzymes - **digestion**, before they can be taken from the gut
into the blood stream - **absorption**. The cells of the body can then
use these small molecules - **assimilation**. The indigestible waste
products are eliminated from the body by the act of **egestion** (see
diagram 11.1).
![](Anatomy_and_physiology_of_animals_From_ingestion_to_egestion.jpg "Anatomy_and_physiology_of_animals_From_ingestion_to_egestion.jpg")
Diagram 11.1 - From ingestion to egestion
The 4 major functions of the gut are:
: 1\. Transporting the food;
: 2\. Processing the food physically by breaking it up (chewing),
mixing, adding fluid etc.
: 3\. Processing the food chemically by adding digestive enzymes to
split large food molecules into smaller ones.
: 4\. Absorbing these small molecules into the blood stream so the
body can use them.
The regions of a typical mammal's gut (for example a cat or dog) are
shown in diagram 11.2.
![](Anatomy_and_physiology_of_animals_Typical_mammalian_gut.jpg "Anatomy_and_physiology_of_animals_Typical_mammalian_gut.jpg")
Diagram 11.2 - A typical mammalian gut
The food that enters the **mouth** passes to the **oesophagus**, then to the **stomach**, **small intestine**, **caecum**, **large intestine** and **rectum**, and finally undigested material exits at the **anus**. The **liver** and **pancreas** produce secretions that aid digestion and the **gall bladder** stores **bile**. Herbivores have an appendix which plays a part in the digestion of cellulose. Carnivores also have an appendix, but it no longer serves any digestive function because their diet is not based on cellulose.
## Mouth
The mouth takes food into the body. The lips hold the food inside the
mouth during chewing and allow the baby animal to suck on its mother's
teat. In elephants the lips (and nose) have developed into the trunk
which is the main food collecting tool. Some mammals, e.g. hamsters,
have stretchy cheek pouches that they use to carry food or material to
make their nests.
The sight or smell of food and its presence in the mouth stimulates the
**salivary glands** to secrete **saliva**. There are four pairs of these
glands in cats and dogs (see diagram 11.3). The fluid they produce
moistens and softens the food making it easier to swallow. It also
contains the enzyme, **salivary amylase**, which starts the digestion of
starch.
The **tongue** moves food around the mouth and rolls it into a ball
called a bolus for swallowing. **Taste buds** are located on the tongue
and in dogs and cats it is covered with spiny projections used for
grooming and lapping. The cow's tongue is prehensile and wraps around
grass to graze it.
Swallowing is a complex reflex involving 25 different muscles. It pushes
food into the oesophagus and at the same time a small flap of tissue
called the **epiglottis** closes off the windpipe so food doesn't enter
the trachea and choke the animal (see diagram 11.4).
![](Anatomy_and_physiology_of_animals_Salivary_glands.jpg "Anatomy_and_physiology_of_animals_Salivary_glands.jpg")
Diagram 11.3 - Salivary glands
![](Anatomy_and_physiology_of_animals_Section_through_head_of_a_dog.jpg "Anatomy_and_physiology_of_animals_Section_through_head_of_a_dog.jpg")
Diagram 11.4 - Section through the head of a dog
## Teeth
Teeth seize, tear and grind food. They are inserted into sockets in the
bone and consist of a crown above the gum and root below. The crown is
covered with a layer of **enamel**, the hardest substance in the body.
Below this is the **dentine**, a softer but tough and shock resistant
material. At the centre of the tooth is a space filled with **pulp**
which contains blood vessels and nerves. The tooth is cemented into the
**socket** and in most teeth the tip of the root is quite narrow with a
small opening for the blood vessels and nerves (see diagram 11.5).
In teeth that grow continuously, like the incisors of rodents, the
opening remains large and these teeth are called **open rooted teeth**.
Mammals have 2 distinct sets of teeth. The first set, the **milk
teeth**, are replaced by the **permanent teeth**.
![](Anatomy_and_physiology_of_animals_Stucture_of_tooth.jpg "Anatomy_and_physiology_of_animals_Stucture_of_tooth.jpg")
Diagram 11.5 - Structure of a tooth
### Types Of Teeth
All the teeth of fish and reptiles are similar but mammals usually have
four different types of teeth.
The **incisors** are the chisel-shaped 'biting off' teeth at the front
of the mouth. In rodents and rabbits the incisors never stop growing
(open-rooted teeth). They must be worn or ground down continuously by
gnawing. They have hard enamel on one surface only so they wear unevenly
and maintain their sharp cutting edge.
The largest incisors in the animal kingdom are found in elephants, for
tusks are actually giant incisors. Sloths have no incisors at all, and
sheep have no incisors in the upper jaw (see diagram 11.6). Instead
there is a horny pad against which the bottom incisors cut.
The **canines** or 'wolf-teeth' are long, cone-shaped teeth situated
just behind the incisors. They are particularly well developed in the
dog and cat families where they are used to hold, stab and kill the prey
(see diagram 11.7).
The tusks of boars and walruses are large canines while rodents and
herbivores like sheep have no (or reduced) canines. In these animals the
space where the canines would normally be is called the **diastema**. In
rodents like the rat and beaver it allows the debris from gnawing to be
expelled easily.
The cheek teeth or **premolars** and **molars** crush and grind the
food. They are particularly well developed in herbivores where they have
complex ridges that form broad grinding surfaces (see diagram 11.6).
These are created from alternating bands of hard enamel and softer
dentine that wear at different rates.
In carnivores the premolars and molars slice against each other like
scissors and are called **carnassial** teeth (see diagram 11.7). They are
used for shearing flesh and bone.
### Dental Formula
The numbers of the different kinds of teeth can be expressed in a
**dental formula**. This gives the numbers of incisors, canines,
premolars and molars in **one half** of the mouth. The numbers of these
four types of teeth in the left **or** right **half of the upper jaw**
are written above a horizontal line and the four types of teeth in the
right **or** left **half of the lower jaw** are written below it.
Thus the dental formula for the sheep is:
:
: 0.0.3.3
: 3.1.3.3
It indicates that in the upper right (or left) **half** of the jaw there
are no incisors or canines (i.e. there is a **diastema**), three
premolars and three molars. In the lower right (or left) **half** of the
jaw are three incisors, one canine, three premolars and three molars
(see diagram 11.6).
![](Anatomy_and_physiology_of_animals_Sheeps_skull.jpg "Anatomy_and_physiology_of_animals_Sheeps_skull.jpg")
Diagram 11.6 - A sheep's skull
The dental formula for a dog is:
:
: 3.1.4.2
: 3.1.4.3
The formula indicates that in the right (or left) **half** of the upper
jaw there are three incisors, one canine, four premolars and two molars.
In the right (or left) **half** of the lower jaw there are three
incisors, one canine, four premolars and three molars (see diagram
11.7).
![](Anatomy_and_physiology_of_animals_Dogs_skull.jpg "Anatomy_and_physiology_of_animals_Dogs_skull.jpg")
Diagram 11.7 - A dog's skull
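The dental formula also tells you the total number of teeth: add up the four figures for one half of the upper jaw and one half of the lower jaw, then double the result, because the formula only covers one side of the mouth. The short Python sketch below is purely illustrative (the function name is made up for the example); it applies this arithmetic to the sheep and dog formulas above.

```python
# Illustrative sketch: total number of teeth from a dental formula.
# The formula lists incisors, canines, premolars and molars for ONE half of the
# upper jaw and ONE half of the lower jaw, so the total is twice their sum.

def total_teeth(upper_half, lower_half):
    """Each argument is (incisors, canines, premolars, molars) for one half-jaw."""
    return 2 * (sum(upper_half) + sum(lower_half))

# Sheep 0.0.3.3 / 3.1.3.3 -> 2 * (6 + 10) = 32 teeth
print(total_teeth((0, 0, 3, 3), (3, 1, 3, 3)))  # 32

# Dog 3.1.4.2 / 3.1.4.3 -> 2 * (10 + 11) = 42 teeth
print(total_teeth((3, 1, 4, 2), (3, 1, 4, 3)))  # 42
```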
## Oesophagus
The **oesophagus** transports food to the stomach. Food is moved along the oesophagus, as it is along the small and large intestines, by
contraction of the smooth muscles in the walls that push the food along
rather like toothpaste along a tube. This movement is called
**peristalsis** (see diagram 11.8).
![](Anatomy_and_physiology_of_animals_Peristalis.jpg "Anatomy_and_physiology_of_animals_Peristalis.jpg")
Diagram 11.8 - Peristalsis
## Stomach
The **stomach** stores and mixes the food. Glands in the wall secrete
**gastric juice** that contains enzymes to digest protein and fats as
well as **hydrochloric acid** to make the contents very acidic. The
walls of the stomach are very muscular and churn and mix the food with
the gastric juice to form a watery mixture called **chyme** (pronounced
kime). Rings of muscle called **sphincters** at the entrance and exit to
the stomach control the movement of food into and out of it (see diagram
11.9).
![](Anatomy_and_physiology_of_animals_Stomach.jpg "Anatomy_and_physiology_of_animals_Stomach.jpg")
Diagram 11.9 - The stomach
## Small Intestine
Most of the breakdown of the large food molecules and absorption of the
smaller molecules take place in the long and narrow small intestine. The
total length varies but it is about 6.5 metres in humans, 21 metres in
the horse, 40 metres in the ox and over 150 metres in the blue whale.
It is divided into 3 sections: the duodenum (after the stomach), jejunum
and ileum. The duodenum receives 3 different secretions:
: 1\) **Bile** from the liver;
: 2\) **Pancreatic juice** from the pancreas and
: 3\) **Intestinal juice** from glands in the intestinal wall.
These complete the digestion of starch, fats and protein. The products
of digestion are absorbed into the blood and lymphatic system through
the wall of the intestine, which is lined with tiny finger-like
projections called **villi** that increase the surface area for more
efficient absorption (see diagram 11.10).
![](Anatomy_and_physiology_of_animals_Wall_of_small_intestine_showing_villi.jpg "Anatomy_and_physiology_of_animals_Wall_of_small_intestine_showing_villi.jpg")
Diagram 11.10 - The wall of the small intestine showing villi
## The Rumen
In ruminant herbivores like cows, sheep and antelopes the stomach is
highly modified to act as a "fermentation vat". It is divided into four
parts. The largest part is called the **rumen**. In the cow it occupies
the entire left half of the abdominal cavity and can hold up to 270
litres. The **reticulum** is much smaller and has a honeycomb of raised
folds on its inner surface. In the camel the reticulum is further
modified to store water. The next part is called the **omasum** with a
folded inner surface. Camels have no omasum. The final compartment is
called the **abomasum**. This is the 'true' stomach where muscular walls
churn the food and gastric juice is secreted (see diagram 11.11).
![](Anatomy_and_physiology_of_animals_The_rumen.jpg "Anatomy_and_physiology_of_animals_The_rumen.jpg")
Diagram 11.11 - The rumen
Ruminants swallow the grass they graze almost without chewing and it
passes down the oesophagus to the rumen and reticulum. Here liquid is
added and the muscular walls churn the food. These chambers provide the
main fermentation vat of the ruminant stomach. Here bacteria and
single-celled animals start to act on the cellulose plant cell walls.
These organisms break down the cellulose to smaller molecules that are
absorbed to provide the cow or sheep with energy. In the process, the
gases methane and carbon dioxide are produced. These cause the "burps"
you may hear cows and sheep making.
Not only do the micro-organisms break down the cellulose but they also
produce the **vitamins E, B and K** for use by the animal. Their
digested bodies provide the ruminant with the majority of its protein
requirements.
In the wild grazing is a dangerous activity as it exposes the herbivore
to predators. They crop the grass as quickly as possible and then when
the animal is in a safer place the food in the rumen can be regurgitated
to be chewed at the animal's leisure. This is 'chewing the cud' or
**rumination**. The finely ground food may be returned to the rumen for
further work by the microorganisms or, if the particles are small
enough, it will pass down a special groove in the wall of the oesophagus
straight into the omasum. Here the contents are kneaded and water is
absorbed before they pass to the abomasum. The abomasum acts as a
"proper" stomach and gastric juice is secreted to digest the protein.
## Large Intestine
The **large intestine** consists of the **caecum**, **colon** and
**rectum**. The chyme from the small intestine that enters the colon
consists mainly of water and undigested material such as cellulose
(fibre or roughage). In omnivores like the pig and humans the main
function of the colon is absorption of water to give solid faeces.
Bacteria in this part of the gut produce vitamins B and K.
The caecum, which forms a dead-end pouch where the small intestine joins
the large intestine, is small in pigs and humans and helps water
absorption. However, in rabbits, rodents and horses, the caecum is very
large and called the **functional caecum**. It is here that cellulose is
digested by micro-organisms. The **appendix**, a narrow dead end tube at
the end of the caecum, is particularly large in primates but seems to
have no digestive function.
## Functional Caecum
The caecum in the rabbit, rat and guinea pig is greatly enlarged to
provide a "fermentation vat" for micro-organisms to break down the
cellulose plant cell walls. This is called a **functional caecum** (see
diagram 11.12). In the horse both the caecum and the colon are enlarged.
As in the rumen, the large cellulose molecules are broken down to
smaller molecules that can be absorbed. However, because the functional caecum lies after the main areas of digestion and absorption, it is potentially less effective than the rumen: the small molecules produced there cannot be absorbed by the gut and would otherwise pass out in the faeces. Rabbits, rodents (and foals) solve this problem by eating their own faeces so that the food passes through the gut a second time and the products of cellulose digestion can be absorbed in the small intestine. Rabbits produce two kinds of faeces: softer night-time faeces, which are eaten directly from the anus, and the harder pellets you are probably familiar with, which have passed through the gut twice.
![](Anatomy_and_physiology_of_animals_Gut_of_a_rabbit.jpg "Anatomy_and_physiology_of_animals_Gut_of_a_rabbit.jpg")
Diagram 11.12 - The gut of a rabbit
## The Gut Of Birds
Birds' guts have important differences from mammals' guts. Most
obviously, birds have a **beak** instead of teeth. Beaks are much
lighter than teeth and are an adaptation for flight. Imagine a bird
trying to take off and fly with a whole set of teeth in its head! At the
base of the oesophagus birds have a bag-like structure called a
**crop**. In many birds the crop stores food before it enters the
stomach, while in pigeons and doves glands in the crop secrete a special fluid called **crop-milk** which parent birds regurgitate to
feed their young. The stomach is also modified and consists of two
compartments. The first is the true stomach with muscular walls and
enzyme secreting glands. The second compartment is the **gizzard**. In
seed eating birds this has very muscular walls and contains pebbles
swallowed by the bird to help grind the food. This is the reason why you
must always supply a caged bird with grit. In birds of prey like the
falcon the walls of the gizzard are much thinner and expand to
accommodate large meals (see diagram 11.13).
![](Anatomy_and_physiology_of_animals_Stomach_&_small_intestine_of_hen.jpg "Anatomy_and_physiology_of_animals_Stomach_&_small_intestine_of_hen.jpg")
Diagram 11.13 - The stomach and small intestine of a hen
## Digestion
During digestion the large food molecules are broken down into smaller
molecules by **enzymes**. The three most important groups of enzymes
secreted into the gut are:
: 1\. **Amylases** that split carbohydrates like starch and glycogen into monosaccharides like glucose.
: 2\. **Proteases** that split proteins into amino acids.
: 3\. **Lipases** that split lipids or fats into fatty acids and glycerol.
Glands produce various secretions which mix with the food as it passes
along the gut.
These secretions include:
: 1\. **Saliva** secreted into the mouth from several pairs of **salivary glands** (see diagram 11.3). Saliva consists mainly of water but contains salts, mucus and salivary amylase. The function of saliva is to lubricate food as it is chewed and swallowed, and salivary amylase begins the digestion of starch.
: 2\. **Gastric juice** secreted into the stomach from glands in its walls. Gastric juice contains **pepsin** that breaks down protein, and hydrochloric acid to produce the acidic conditions under which this enzyme works best. In baby animals rennin to digest milk is also produced in the stomach.
: 3\. **Bile** produced by the liver. It is stored in the **gall bladder** and secreted into the duodenum via the **bile duct** (see diagram 11.14). (Note that the horse, deer, parrot and rat have no gall bladder.) Bile is not a digestive enzyme. Its function is to break up large globules of fat into smaller ones so the fat-splitting enzymes can gain access to the fat molecules.
![](Anatomy_and_physiology_of_animals_Liver,_gall_bladder_&_pancreas.jpg "Anatomy_and_physiology_of_animals_Liver,_gall_bladder_&_pancreas.jpg")
Diagram 11.14 - The liver, gall bladder and pancreas
## Pancreatic juice
The **pancreas** is a gland located near the beginning of the duodenum
(see diagram 11.14). In most animals it is large and easily seen but in
rodents and rabbits it lies within the membrane linking the loops of the
intestine (the **mesentery**) and is quite difficult to find.
**Pancreatic juice** is produced in the pancreas. It flows into the
duodenum and contains **amylase** for digesting starch, **lipase** for
digesting fats and **protease** for digesting proteins.
## Intestinal juice
**Intestinal juice** is produced by glands in the lining of the small
intestine. It contains enzymes for digesting disaccharides and proteins
as well as mucus and salts to make the contents of the small intestine
more alkaline so the enzymes can work.
## Absorption
The small molecules produced by digestion are absorbed into the
**villi** of the wall of the **small intestine**. The tiny finger-like
projections of the villi increase the surface area for absorption.
Glucose and amino acids pass directly through the wall into the blood
stream by diffusion or active transport. Fatty acids and glycerol enter
vessels of the lymphatic system (**lacteals**) that run up the centre of
each villus.
## The Liver
The liver is situated in the abdominal cavity adjacent to the diaphragm (see diagrams 11.2 and 11.14). It is the largest single organ of the body and
has over 100 known functions. Its most important digestive functions
are:
: 1\. the production of **bile** to help the digestion of fats (described above) and
: 2\. the control of **blood sugar** levels
Glucose is absorbed into the capillaries of the villi of the intestine.
The blood stream takes it directly to the liver via a blood vessel known
as the **hepatic portal vessel** or **vein** (see diagram 11.15).
The liver converts this glucose into glycogen which it stores. When
glucose levels are low the liver can convert the glycogen back into
glucose. It releases this back into the blood to keep the level of
glucose constant. The hormone **insulin**, produced by special cells in
the **pancreas**, controls this process.
![](Anatomy_and_physiology_of_animals_Control_of_glucose_by_the_liver.jpg "Anatomy_and_physiology_of_animals_Control_of_glucose_by_the_liver.jpg")
Diagram 11.15 - The control of blood glucose by the liver
Other functions of the liver include:
: 3\. making **vitamin A**,
: 4\. making the **proteins** that are found in the **blood plasma**
(**albumin, globulin** and **fibrinogen**),
: 5\. storing **iron**,
: 6\. removing **toxic substances** like alcohol and poisons from the
blood and converting them to safer substances,
: 7\. producing **heat** to help maintain the temperature of the body.
![](Anatomy_and_physiology_of_animals_Summary_of_the_main_functions_of_the_different_regions_of_the_gut.jpg "Anatomy_and_physiology_of_animals_Summary_of_the_main_functions_of_the_different_regions_of_the_gut.jpg")
Diagram 11.16 - Summary of the main functions of the different regions
of the gut
## Summary
- The **gut** breaks down plant and animal materials into nutrients
that can be used by animals' bodies.
- Plant material is more difficult to break down than animal tissue.
The gut of **herbivores** is therefore longer and more complex than
that of **carnivores**. Herbivores usually have a compartment (the
**rumen** or **functional caecum**) housing micro-organisms to break
down the **cellulose** wall of plants.
- Chewing by the teeth begins the food processing. There are 4 main
types of teeth: **incisors, canines, premolars** and **molars**. In
dogs and cats the premolars and molars are adapted to slice against
each other and are called **carnassial** teeth.
- **Saliva** is secreted in the mouth. It lubricates the food for
swallowing and contains an enzyme to break down starch.
- Chewed food is swallowed and passes down the **oesophagus** by waves
of contraction of the wall called **peristalsis.** The food passes
to the stomach where it is churned and mixed with acidic **gastric
juice** that begins the digestion of protein.
- The resulting **chyme** passes down the small intestine where
enzymes that digest fats, proteins and carbohydrates are secreted.
**Bile** produced by the liver is also secreted here. It helps in
the breakdown of fats. **Villi** provide the large surface area
necessary for the absorption of the products of digestion.
- In the **colon** and **caecum** water is absorbed and micro-organisms
  produce some **vitamins B and K**. In rabbits, horses and
rodents the caecum is enlarged as a **functional caecum** and
micro-organisms break down cellulose cell walls to simpler
carbohydrates. Waste products exit the body via the **rectum** and
**anus**.
- The **pancreas** produces **pancreatic juice** that contains many of
the enzymes secreted into the small intestine.
- In addition to producing bile the liver regulates blood sugar levels
by converting glucose absorbed by the villi into glycogen and
storing it. The liver also removes toxic substances from the blood,
stores iron, makes vitamin A and produces heat.
## Worksheet
Use the Digestive System
Worksheet to
help you learn the different parts of the digestive system and their
functions.
## Test Yourself
Then work through the Test Yourself below to see if you have understood
and remembered what you learned.
1\. Name the four different kinds of teeth
2\. Give 2 facts about how the teeth of cats and dogs are adapted for a
carnivorous diet:
: 1\.
: 2\.
3\. What does saliva do to the food?
4\. What is peristalsis?
5\. What happens to the food in the stomach?
6\. What is chyme?
7\. Where does the chyme go after leaving the stomach?
8\. What are villi and what do they do?
9\. What happens in the small intestine?
10\. Where is the pancreas and what does it do?
11\. How does the caecum of rabbits differ from that of cats?
12\. How does the liver help control the glucose levels in the blood?
13\. Give 2 other functions of the liver:
: 1\.
: 2\.
/Test Yourself Answers/
## Websites
- <http://www.second-opinions.co.uk/carn_herb_comparison.html> Second
opinion. A good comparison of the guts of carnivores and herbivores
- <http://www.chu.cam.ac.uk/~ALRF/giintro.htm> The gastrointestinal
system. A good comparison of the guts of carnivores and herbivores
with more advanced information than in the previous site.
- <http://www.westga.edu/~lkral/peristalsis/index.html> Peristalsis
animation.
- <http://en.wikipedia.org/wiki/Digestion> Wikipedia on digestion with
links to further information on most aspects. Like most websites
this is mainly about human digestion but much is applicable to
animals.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Urinary System
## Objectives
After completing this section, you should know:
- The parts of the urinary system.
- The structure and function of a kidney.
- The structure and function of a kidney tubule or nephron.
- The processes of filtration, reabsorption, secretion and
concentration that convert blood to urine in the kidney tubule.
- The function of antidiuretic hormone in producing concentrated
urine.
- The composition, storage and voiding of normal urine.
- Abnormal constituents of urine and their significance.
- The functions of the kidney in excreting nitrogenous waste,
controlling water levels and regulating salt concentrations and
acid-base balance.
- That birds do not have a bladder.
## Homeostasis
Homeostasis is the maintenance of a stable internal environment. The term describes the physical and chemical parameters that an organism must keep within limits to allow proper functioning of its component cells, tissues, organs, and organ systems; the regulation of internal temperature is one familiar example.
Recall that enzymes function best when within a certain range of
temperature and pH, and that cells must strive to maintain a balance
between having too much or too little water in relation to their
external environment. Both situations demonstrate homeostasis. Just as
we have a certain temperature range (or comfort zone), so our body has a
range of environmental (internal as well as external) parameters within
which it works best. Multicellular organisms accomplish this by having
organs and organ systems that coordinate their homeostasis. In addition
to the other functions that life must perform (recall the discussion in
our Introduction chapter), unicellular creatures must accomplish their
homeostasis within but a single cell!
Single-celled organisms are surrounded by their external environment.
They move materials into and out of the cell by regulation of the cell
membrane and its functioning. Most multicellular organisms have most of
their cells protected from the external environment, having them
surrounded by an aqueous internal environment. This internal environment
must be maintained in such a state as to allow maximum efficiency. The
ultimate control of homeostasis is done by the nervous system. Often
this control is in the form of negative feedback loops. Heat control is
a major function of homeostatic conditions that involves the integration
of skin, muscular, nervous, and circulatory systems.
The difference between homeostasis as a single cell performs it and what a multicelled creature does derives from their basic organisational plan: a single cell can dump wastes outside the cell and be done with it. Cells in a multicelled creature, such as a cat or a human, also dump wastes outside those cells, but, like household rubbish left out for collection, those wastes must then be carted away. In the body this carting away is accomplished by the circulatory system in conjunction with the excretory system.
The ultimate control of homeostasis is accomplished by the nervous
system (for rapid responses such as reflexes to avoid picking up a hot
pot off the stove) and the endocrine system (for longer-term responses,
such as maintaining the body levels of calcium, etc.). Often this
homeostatic control takes the form of negative feedback loops. There are
two types of biological feedback: positive and negative. Negative
feedback turns off the stimulus that caused it in the first place. Your
house's heater (or air conditioner) acts on the principle of negative feedback. When your house cools below the temperature set by your thermostat, the heater is turned on to warm the air until the temperature is at or above the thermostat setting. The thermostat detects this rise in temperature and sends a signal to shut off the heater, allowing the house to cool off until the heater is turned on yet again and the cycle (or loop) continues.
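The same thermostat example can be written out as a very simple simulation, which makes the "loop" in a negative feedback loop explicit: measure, compare with the set point, respond in the direction that opposes the change, and repeat. The sketch below is purely illustrative; the temperatures and heating/cooling rates are invented for the example.

```python
# Illustrative negative feedback loop: a thermostat-controlled heater.
# The response (heater on/off) always opposes the deviation from the set point.

set_point = 20.0      # desired temperature (degrees C)
temperature = 17.0    # starting room temperature
heater_on = False

for minute in range(12):
    # Sensor + comparator: is the room below the set point?
    heater_on = temperature < set_point

    # Effector: heating warms the room; otherwise it slowly loses heat.
    temperature += 1.0 if heater_on else -0.5

    print(f"minute {minute:2d}: {temperature:4.1f} C, heater {'on' if heater_on else 'off'}")
```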
## Water In The Body
Water is essential for living things to survive because all the chemical
reactions within a body take place in a solution of water. An animal's
body consists of up to 80% water. The exact proportion depends on the
type of animal, its age, sex, health and whether or not it has had
sufficient to drink. Generally animals do not survive a loss of more
than 15% of their body water.
In vertebrates almost two thirds of this water is in the cells
(**intracellular fluid**). The rest is outside the cells
(**extracellular fluid**) where it is found in the spaces around the
cells (**tissue fluid**), as well as in the blood and lymph. Water is
considered to be the source of life. It is important for animal life
because of the following reasons:
\(i\) Water is a vital body fluid, essential for processes such as digestion, the transport of nutrients and excretion. It dissolves ionic compounds and a large number of polar organic compounds, and so carries the products of digestion to wherever the body requires them.
\(ii\) Water regulates the body temperature through sweating and evaporation.
\(iii\) Water is the medium for all metabolic reactions in the body; these reactions take place in solution.
\(iv\) Water provides a habitat for many animals in the form of ponds, rivers, the sea, etc.
![](Anatomy_and_physiology_of_animals_Water_in_the_body.jpg "Anatomy_and_physiology_of_animals_Water_in_the_body.jpg")
Diagram 12.1 - Water in the body
## Maintaining Water Balance
Animals lose water through their skin and lungs, in the faeces and
urine. These losses must be made up by water in food and drink and from
the water that is a by-product of chemical reactions. If the animal does
not manage to compensate for water loss the dissolved substances in the
blood may become so concentrated they become lethal. To prevent this
happening various mechanisms come into play as soon as the concentration
of the blood increases. A part of the brain called the **hypothalamus**
is in charge of these homeostatic processes. The most important is the
feeling of thirst that is triggered by an increase in blood
concentration. This stimulates an animal to find water and drink it.
The kidneys are also involved in maintaining water balance as various
hormones instruct them to produce more concentrated urine and so retain
some of the water that would otherwise be lost (see later in this
Chapter and Chapter 16).
### Desert Animals
Coping with water loss is a particular problem for animals that live in
dry conditions. Some, like the camel, have developed great tolerance for
dehydration. For example, under some conditions, camels can withstand
the loss of one third of their body mass as water. They can also survive
wide daily changes in temperature. This means they do not have to use
large quantities of water in sweat to cool the body by evaporation.
Smaller animals are more able than large ones to avoid extremes of
temperature or dry conditions by resting in sheltered more humid
situations during the day and being active only at night.
The kangaroo rat is able to survive without access to any drinking water
at all because it does not sweat and produces extremely concentrated
urine. Water from its food and from chemical processes is sufficient to
supply all its requirements.
## Excretion
Animals need to excrete because they take in substances that are excess
to the body's requirements and many of the chemical reactions in the
body produce waste products. If these substances were not removed they
would poison cells or slow down metabolism. All animals therefore have
some means of getting rid of these wastes.
The major waste products in mammals are carbon dioxide that is removed
by the lungs, and urea that is produced when excess amino acids (from
proteins) are broken down. Urea is filtered from the blood by the
kidneys.
![](Urinary_System_of_Dog.JPG "Urinary_System_of_Dog.JPG")
Diagram 12.2 - The position of the organs of the urinary system in a dog
## The Kidneys And Urinary System
The urinary system, also known as the renal system or urinary tract,
consists of the kidneys, ureters, bladder, and the urethra. The purpose
of the urinary system is to eliminate waste from the body, regulate
blood volume and blood pressure, control levels of electrolytes and
metabolites, and regulate blood pH. The urinary tract is the body's drainage system for the eventual removal of urine. The kidneys receive an extensive blood supply via the renal arteries, and blood leaves the kidneys via the renal veins. Each kidney consists of functional units called
nephrons. Following filtration of blood and further processing, wastes
(in the form of urine) exit the kidney via the ureters, tubes made of
smooth muscle fibres that propel urine towards the urinary bladder,
where it is stored and subsequently expelled from the body by urination
(voiding). The female and male urinary system are very similar,
differing only in the length of the urethra.
Urine is formed in the kidneys through a filtration of blood. The urine
is then passed through the ureters to the bladder, where it is stored.
During urination, the urine is passed from the bladder through the
urethra to the outside of the body.
800--2,000 milliliters (mL) of urine are normally produced every day in
a healthy human. This amount varies according to fluid intake and kidney
function.
The **kidneys** in mammals are bean-shaped organs that lie in the
abdominal cavity attached to the dorsal wall on either side of the spine
(see diagram 12.2). An artery from the dorsal aorta called the **renal
artery** supplies blood to them and the **renal vein** drains them.
![](Anatomy_and_physiology_of_animals_Urinary_system.jpg "Anatomy_and_physiology_of_animals_Urinary_system.jpg")
Diagram 12.3 - The urinary system
To the naked eye kidneys seem simple enough organs. They are covered by
a fibrous coat or capsule and if cut in half lengthways (longitudinally)
two distinct regions can be seen - an inner region or **medulla** and
the outer **cortex**. A cavity within the kidney called the **pelvis**
collects the urine and carries it to the **ureter**, which connects with
the **bladder** where the urine is stored temporarily. Rings of muscle
(**sphincters**) control the release of urine from the bladder and the
urine leaves the body through the **urethra** (see diagrams 12.3 and
12.4).
![](Anatomy_and_physiology_of_animals_Dissected_kidney.jpg "Anatomy_and_physiology_of_animals_Dissected_kidney.jpg")
Diagram 12.4 - The dissected kidney
## Kidney Tubules Or Nephrons
It is only when you examine kidneys under the microscope that you find
that their structure is not simple at all. The cortex and medulla are
seen to be composed of masses of tiny tubes. These are called **kidney
tubules** or **nephrons** (see diagrams 12.5 and 12.6). A human kidney
consists of over a million of them.
![](Anatomy_and_physiology_of_animals_Several_kidney_tubules_or_nephrons.jpg "Anatomy_and_physiology_of_animals_Several_kidney_tubules_or_nephrons.jpg")
Diagram 12.5 - Several kidney tubules or nephrons
![](Anatomy_and_physiology_of_animals_Kidney_tubule_or_nephron.jpg "Anatomy_and_physiology_of_animals_Kidney_tubule_or_nephron.jpg")
Diagram 12.6 - A kidney tubule or nephron
At one end of each nephron, in the cortex of the kidney, is a cup shaped
structure called the (**Bowman's** or **renal**) **capsule**. It
surrounds a tuft of capillaries called the **glomerulus** that carries
high-pressure blood. Together the glomerulus and capsule act as a
blood-filtering device (see diagram 12.7). The holes in the filter allow
most of the contents of the blood through except the _red and
white cells_ and _large protein
molecules_. The fluid flowing from the capsule into the rest
of the kidney tubule is therefore very similar to blood plasma and
contains many useful substances like water, glucose, salt and amino
acids. It also contains waste products like **urea**.
### Processes Occurring In The Nephron
After entering the capsule the filtered fluid flows along a coiled
part of the tubule (the **proximal convoluted tubule**) to a looped
portion (the **Loop of Henle**) and then to the **collecting tube** via
a second length of coiled tube (the **distal convoluted tubule**) (see
diagram 12.6). From the collecting ducts the urine flows into the
**renal pelvis** and enters the **ureter**.
Note that the glomerulus, capsule and both coiled parts of the tubule
are all situated in the cortex of the kidney while the loops of Henle
and collecting ducts make up the medulla (see diagram 12.5).
As the fluid flows along the proximal convoluted tubule useful
substances like glucose, water, salts, potassium ions, calcium ions and
amino acids are **reabsorbed** into the blood capillaries that form a
network around the tubules. Many of these substances are transported by
active transport and energy is required.
![](Anatomy_and_physiology_of_animals_Filtration_in_the_glomerulus_capsule.jpg "Anatomy_and_physiology_of_animals_Filtration_in_the_glomerulus_capsule.jpg")
Diagram 12.7 - Filtration in the glomerulus and capsule
In a separate process, some substances, particularly potassium, ammonium
and hydrogen ions, and drugs like penicillin, are actively **secreted**
into the distal convoluted tubule.
By the time the fluid has reached the collecting ducts these processes
of absorption and secretion have changed the fluid originally filtered
into the Bowman's capsule into urine. The main function of the
collecting ducts is then to remove more water from the urine if
necessary. These processes are summarised in diagram 12.8.
**Normal urine** consists of water, in which waste products such as urea
and salts such as sodium chloride are dissolved. Pigments from the
breakdown of red blood cells give urine its yellow colour.
### The Production Of Concentrated Urine
Because of the high pressure of the blood in the glomerulus and the
large size of the pores in the glomerulus/capsule-filtering device, an
enormous volume of fluid passes into the kidney tubules. If this fluid
were left as it is, the animal's body would be drained dry in 30
minutes. In fact, as the fluid flows down the tubule, over 90% of the water in it is reabsorbed, and the final adjustment of the urine's concentration takes place in the collecting tubes.
The amount of water removed from the collecting ducts is controlled by a
hormone called **antidiuretic hormone (ADH)** produced by the
**pituitary gland**, situated at the base of the brain. When the blood
becomes more concentrated, as happens when an animal is deprived of
water, ADH is secreted and causes more water to be absorbed from the
collecting ducts so that concentrated urine is produced. When the animal
has drunk plenty of water and the blood is dilute, no ADH is secreted
and no or little water is absorbed from the collecting ducts, so dilute
urine is produced. In this way the concentration of the blood is
controlled precisely.
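The action of ADH is another example of negative feedback, and its logic can be summarised in a few lines of Python. The function name, numbers and units below are invented purely to show the direction of the response: concentrated blood leads to ADH release and water reabsorption (concentrated urine), while dilute blood leads to little ADH and dilute urine.

```python
# Illustrative sketch of ADH control of water reabsorption (invented units).

def collecting_duct_response(blood_concentration, normal=1.0):
    """Return (ADH level, water reabsorbed) for a given blood concentration."""
    if blood_concentration > normal:
        adh = blood_concentration - normal   # concentrated blood -> ADH secreted
    else:
        adh = 0.0                            # dilute blood -> little or no ADH
    water_reabsorbed = 0.9 * adh             # more ADH -> more water reclaimed,
    return adh, water_reabsorbed             # so the urine is more concentrated

print(collecting_duct_response(1.3))  # dehydrated animal: concentrated urine
print(collecting_duct_response(0.9))  # well-watered animal: dilute urine
```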
![](Anatomy_and_physiology_of_animals_Summary_of_the_processes_involved_in_the_formation_of_urine.jpg "Anatomy_and_physiology_of_animals_Summary_of_the_processes_involved_in_the_formation_of_urine.jpg")
Diagram 12.8 - Summary of the processes involved in the formation of
urine
## Water Balance In Fish And Marine Animals
### Fresh Water Fish
Although the skin of fish is more or less waterproof, the gills are very
porous. The body fluids of fish that live in fresh water have a higher
concentration of dissolved substances than the water in which they swim.
In other words the body fluids of fresh water fish are **hypertonic** to
the water (see chapter 3). Water therefore flows into the body by
**osmosis**. To stop the body fluids being constantly diluted fresh
water fish produce large quantities of dilute urine.
### Marine Fish
Marine fish like sharks and dogfish, whose body fluids have the same concentration of dissolved substances as the water (**isotonic**), have little problem with water balance. However, marine bony fish like
red cod, snapper and sole, have body fluids with a lower concentration
of dissolved substances than seawater (they are **hypotonic** to
seawater). This means that water tends to flow out of their bodies by
osmosis. To make up this fluid loss they drink seawater and get rid of
the excess salt by excreting it from the gills.
### Marine Birds
Marine birds that eat marine fish take in large quantities of salt and
some only have access to seawater for drinking. Birds' kidneys are
unable to produce very concentrated urine, so they have developed a salt
gland. This excretes a concentrated salt solution into the nose to get
rid of the excess salt.
## Diabetes And The Kidney
There are two types of diabetes. The most common is called sugar
diabetes or **diabetes mellitus** and is common in cats and dogs
especially if they are overweight. It is caused by the pancreas
secreting insufficient **insulin**, the hormone that controls the amount
of glucose in the blood. If insulin secretion is inadequate, the
concentration of glucose in the blood increases. Any increase in the
glucose in the blood automatically leads to an increase in glucose in
the fluid filtered into the kidney tubule. Normally the kidney removes
all the glucose filtered into it, but these high concentrations swamp
this removal mechanism and urine containing glucose is produced. The
main symptoms of this type of diabetes are the production of large
amounts of dilute urine containing glucose, and excessive thirst.
The second type of diabetes is called **diabetes insipidus**. The name
comes from the main symptom, which is the production of large amounts of
very dilute and "tasteless" urine. It occurs when the pituitary gland
produces insufficient ADH, the hormone that stimulates water
re-absorption from the kidney tubule. When this hormone is lacking,
water is not absorbed and large amounts of dilute urine are produced.
Because so much water is lost in the urine, animals with this form of
diabetes can die if deprived of water for only a day or so.
## Other Functions Of The Kidney
The excretion of urea from the body and the maintenance of water
balance, as described above, are the main functions of the kidney.
However, the kidneys have other roles in keeping conditions in the body
stable i.e. in maintaining homeostasis. These include:
- controlling the concentration of salt ions (Na+, K+, Cl-) in the blood by adjusting how much is excreted or retained;
- maintaining the correct acidity of the blood. Excess acid is constantly being produced by the normal chemical reactions in the body and the kidney eliminates this.
## Normal Urine
Normal urine consists of water (95%), urea, salts (mostly sodium
chloride) and pigments (mostly from bile) that give it its
characteristic colour.
## Abnormal Ingredients Of Urine
If the body is not working properly, small amounts of substances not
normally present may be found in the urine or substances normally
present may appear in abnormal amounts.
- The presence of **glucose** may indicate diabetes (see above).
- Urine with red blood cells in it is called **haematuria**, and may indicate inflammation of the kidney or urinary tract, cancer, or a blow to the kidneys.
- Sometimes free **haemoglobin** is found in the urine. This indicates that the red blood cells in the blood have **haemolysed** (the membrane has broken down) and the haemoglobin has passed into the kidney tubules.
- The presence of **white blood cells** in the urine indicates there is an infection in the kidney or urinary tract.
- **Protein molecules** are usually too large to pass into the kidney tubule, so little or no protein such as **albumin** is normally found in urine. Large quantities of albumin indicate that the kidney tubules have been injured or the kidney has become diseased. High blood pressure also pushes proteins from the blood into the tubules.
- **Casts** are tiny cylinders of material that have been shed from the lining of the tubules and flushed out into the urine.
- **Mucus** is not usually found in the urine of healthy animals but is a normal constituent of horses' urine, giving it a characteristic cloudy appearance.
Tests can be carried out to identify any abnormal ingredients of urine.
These tests are normally done by "**stix**", which are small plastic
strips with absorbent ends impregnated with various chemicals. A colour
change occurs in the presence of an abnormal ingredient.
## Excretion In Birds
Birds' high body temperature and level of activity means that they need
to conserve water. Birds therefore do not have a bladder and instead of
excreting urea, which needs to be dissolved in large amounts of water,
birds produce uric acid that can be discharged as a thick paste along
with the feces. This is the white chalky part of the bird droppings that
land on you or your car.
## Summary
- The excretory system consists of paired **kidneys** and associated
blood supply. **Ureters** transport urine from the kidneys to the
bladder and the **urethra** with associated sphincter muscles
controls the release of urine.
- The kidneys have an important role in maintaining **homeostasis** in
the body. They excrete the waste product urea, control the
concentrations of water and salt in the body fluids, and regulate
the acidity of the blood.
- A kidney consists of an outer region or **cortex,** inner
**medulla** and a cavity called the **pelvis** that collects the
urine and carries it to the ureter.
- The tissue of a kidney is composed of masses of tiny tubes called
**kidney tubules** or **nephrons**. These are the structures that
make the urine.
- High-pressure blood is supplied to the nephron via a tuft of
capillaries called the **glomerulus**. Most of the contents of the
blood except the cells and large protein molecules filter from the
glomerulus into the (**Bowman's**) **capsule**. This fluid flows down a
coiled part of the tubule (**proximal convoluted tubule**) where
useful substances like glucose, amino acids and various ions are
reabsorbed. The fluid flows to a looped portion of the tubule called
the **Loop of Henle** where water is reabsorbed and then to another
coiled part of the tubule (**distal convoluted tubule**) where more
reabsorption and secretion take place. Finally the fluid passes
down the **collecting duct** where water is reabsorbed to form
concentrated urine.
## Worksheet
Use this Excretory System
Worksheet to
help you learn the parts of the urinary system, the kidney and kidney
tubule and their functions.
## Test Yourself
The Urinary System Test Yourself can then be used to see if you
understand this rather complex system.
1\. Add the following labels to the diagram of the excretory system
shown below. Bladder \| ureter \| urethra \| kidney \| dorsal aorta \|
vena cava \| renal artery \| vein
![](Anatomy_and_physiology_of_animals_diagram_of_urinary_system_unlabeled.JPG "Anatomy_and_physiology_of_animals_diagram_of_urinary_system_unlabeled.JPG")
2\. Using the words/phrases in the list below fill in the blanks in the
following statements.
:
: \| cortex \| amino acids \| renal \| glucose \| water
reabsorption \| large proteins \|
: \| bowman's capsule \| diabetes mellitus \| secreted \|
antidiuretic hormone (ADH) \| blood cells \|
: \| glomerulus \| concentration of the urine \| medulla \|
nephron \|
a\) Blood enters the kidney via the \...\...\...\...\...\...\...\....
artery.
b\) When cut across the kidney is seen to consist of two regions, the
outer\...\...\...\..... and the inner\...\...\...\.....
c\) Another word for the kidney tubule is
the\...\...\...\...\...\...\...\...\...\....
d\) Filtration of the blood occurs in
the\...\...\...\...\...\...\...\...\...\...
e\) The filtered fluid (filtrate) enters
the\...\...\...\...\...\...\...\...\.....
f\) The filtrate entering the e) above is similar to blood but does not
contain\...\...\...\...\...\... or\...\...\...\...\...\.....
g\) As the fluid passes along the first coiled part of the kidney
tubule\...\...\...\...\...\... and\...\...\...\...\...\..... are
removed.
h\) The main function of the loop of Henle
is\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\....
i\) Hydrogen and potassium ions
are\...\...\...\...\...\...\...\...\...\... into the second coiled part
of the tubule.
j\) The main function of the collecting tube
is\...\...\...\...\...\...\...\...\...\...\...\...\...\...\...\.....
k\) The hormone\...\...\...\...\...\...\...\...\...\...\...\..... is
responsible for controlling water reabsorption in the collecting tube.
l\) When the pancreas secretes inadequate amounts of the hormone insulin
the condition known as\...\...\...\...\...\...\...\...\...\.... results.
This is most easily diagnosed by testing
for\...\...\...\...\...\...\...\...\...\..... in the urine.

**Write short answers to the following questions**

1\. What is Homeostasis?
2\. Give 2 examples of homeostasis.
3\. List 3 ways in which animals keep their body temperature constant
when the weather is hot.
4\. How does the kidney compensate when an animal is deprived of water
to drink?
6\. Describe how panting helps to reduce the acidity of the blood.
/Test Yourself Answers/
## Websites
- <http://www.biologycorner.com/bio3/nephron.html> Biology Corner. A
fabulous drawing of the kidney and nephron to print off, label and
colour in with clear explanation of function.
- <http://health.howstuffworks.com/adam-200032.htm> How Stuff Works.
This animation traces the full process of urine formation and
reabsorption in the kidneys, its path down the ureter to the
bladder, and its excretion via the urethra. Needs Shockwave.
- <http://en.wikipedia.org/wiki/Nephron> Wikipedia. A bit more detail
than you need but still good clear explanations and lots of
information.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Reproductive System
Original image by ynskjen, used under a CC BY licence.
## Objectives
After completing this section, you should know:
- the role of mitosis and meiosis in the production of gametes (sperm
and ova)
- that gametes are haploid cells
- that fertilization forms a diploid zygote
- the major parts of the male reproductive system and their functions
- the route sperm travel along the male reproductive tract to reach
the penis
- the structure of a sperm and the difference between sperm and semen
- the difference between infertility and impotence
- the main parts of the female reproductive system and their functions
- the ovarian cycle and the roles of FSH, LH, oestrogen and
progesterone
- the oestrous cycle and the signs of heat in rodents, dogs, cats and
cattle
- the process of fertilization and where it occurs in the female tract
- what a morula and a blastocyst are
- what the placenta is and its functions
## Reproductive System
In biological terms sexual reproduction involves the union of
**gametes** - the sperm and the ovum - produced by two parents. Each
gamete is formed by **meiosis** (see Chapter 3). This means each
contains only half the chromosomes of the body cells (**haploid**).
Fertilization results in the joining of the male and female gametes to
form a **zygote** which contains the full number of chromosomes
(**diploid**). The zygote then starts to divide by **mitosis** (see
Chapter 3) to form a new animal with all its body cells containing
chromosomes that are identical to those of the original zygote (see
diagram 13.1).
Diagram 13.1 - Sexual reproduction
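To make the haploid and diploid numbers concrete, the small worked example below uses the domestic dog; its diploid number of 78 chromosomes is a standard figure quoted here for illustration rather than a value taken from this chapter.

```latex
% Illustrative example using the dog (diploid number 2n = 78).
% Meiosis halves the chromosome number in each gamete;
% fertilization restores the full diploid number in the zygote.
\[
\underbrace{39}_{\text{sperm (haploid)}}
\;+\;
\underbrace{39}_{\text{ovum (haploid)}}
\;=\;
\underbrace{78}_{\text{zygote (diploid)}}
\]
```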
The offspring formed by sexual reproduction contain genes from both
parents and show considerable variation. For example, kittens in a
litter are all different although they (usually) have the same mother
and father. In the wild this variation is important because it means
that when the environment changes some individuals may be better adapted
to survive than others. These survivors pass their "superior" genes on
to their offspring. In this way the characteristics of a group of
animals can gradually change over time to keep pace with the changing
environment. This "survival of the fittest" or "**natural selection**"
is the mechanism behind the theory of **evolution**.
## Fertilization
In most fish and amphibia (frogs and toads) fertilization of the egg
cells takes place outside the body. The female lays the eggs and then
the male deposits his sperm on or at least near them.
In reptiles and birds, eggs are fertilized inside the body when the male
deposits the sperm inside the **egg duct** of the female. The egg is
then surrounded by a resistant shell, "laid" by the female and the
embryo completes its development inside the egg.
In mammals the sperm are placed in the body of the female and the eggs
are fertilized internally. They then develop to quite an advanced stage
inside the body of the female. When they are born they are fed on milk
secreted by the mammary glands and protected by their parents until
they become independent.
## Sexual Reproduction In Mammals
The reproductive organs of mammals produce the **gametes** (sperm and
egg cells), bring them together for fertilization and then support the
developing embryo.
## The Male Reproductive System
The male reproductive system consists of a pair of testes that produce
**sperm** (or **spermatozoa**), ducts that transport the sperm to the
penis and glands that add secretions to the sperm to make **semen** (see
diagram 13.2).
The various parts of the male reproductive system with a summary of
their functions are shown in diagram 13.3.
![](Male_repro_system_labelled.jpg "Male_repro_system_labelled.jpg")
Diagram 13.2. The reproductive organs of a male dog
![](Anatomy_and_physiology_of_animals_Diagram_summarizing_the_functions_of_the_male_reproductive_organs.jpg "Anatomy_and_physiology_of_animals_Diagram_summarizing_the_functions_of_the_male_reproductive_organs.jpg")
Diagram 13.3 - Diagram summarizing the functions of the male
reproductive organs
### The Testicles
Sperm need temperatures between 2 and 10 degrees Centigrade lower than
the body temperature to develop. This is the reason why the testes are
located in a bag of skin called the **scrotal sacs** (or **scrotum**)
that hangs below the body and where the evaporation of secretions from
special glands can further reduce the temperature. In many animals
(including humans) the testes descend into the scrotal sacs at birth but
in some animals they do not descend until sexual maturity and in others
they only descend temporarily during the breeding season. A mature
animal in which one or both testes have not descended is called a
**cryptorchid**; if neither testis has descended the animal is usually
infertile.
The problem of keeping sperm at a low enough temperature is even greater
in birds that have a higher body temperature than mammals. For this
reason bird's sperm are usually produced at night when the body
temperature is lower and the sperm themselves are more resistant to
heat.
The testes consist of a mass of coiled tubes (the **seminiferous**
or **sperm-producing tubules**) in which the sperm are formed by meiosis
(see diagram 13.4). Cells lying between the seminiferous tubules produce
the male sex hormone **testosterone**.
When the sperm are mature they accumulate in the **collecting ducts**
and then pass to the **epididymis** before moving to the **sperm duct**
or **vas deferens**. The two sperm ducts join the **urethra** just below
the bladder; the urethra passes through the **penis** and transports
both sperm and urine.
**Ejaculation** discharges the semen from the erect penis. It is brought
about by the contraction of the epididymis, vas deferens, prostate gland
and urethra.
![](Anatomy_and_physiology_of_animals_The_testis_&_a_magnified_seminferous_tubule.jpg "Anatomy_and_physiology_of_animals_The_testis_&_a_magnified_seminferous_tubule.jpg")
Diagram 13.4 - The testis and a magnified seminiferous tubule
### Semen
Semen consists of 10% sperm and 90% fluid. As sperm pass down the
ducts from the testis to the penis, accessory glands add various
secretions.
### Accessory Glands
Three different glands may be involved in producing the secretions in
which sperm are suspended, although the number and type of glands varies
from species to species.
**Seminal vesicles** are important in rats, bulls, boars and stallions
but are absent in cats and dogs. When present they produce secretions
that make up much of the volume of the semen, and transport and provide
nutrients for the sperm.
The **prostate gland** is important in dogs and humans. It produces an
alkaline secretion that neutralizes the acidity of the male urethra and
female vagina.
**Cowper's glands** (bulbourethral glands) have various functions in
different species. The secretions may lubricate, flush out urine or form
a gelatinous plug that traps the semen in the female reproductive system
after copulation and prevents other males of the same species
fertilizing an already mated female. Cowper's glands are absent in bears
and aquatic mammals.
### The Penis
The penis consists of connective tissue with numerous small blood spaces
in it. These fill with blood during sexual excitement causing erection.
#### Penis Form And Shape
Dogs, bears, seals, bats and rodents have a special bone in the penis
which helps maintain the erection (see diagram 13.2). In some animals
(e.g. the bull, ram and boar) the penis has an "S" shaped bend that
allows it to fold up when not in use. In many animals the shape of the
penis is adapted to match that of the vagina. For example, the boar has
a corkscrew-shaped penis, the bull's has a pronounced twist and the
opossum's, like that of other marsupials, is forked. Some have spines,
warts or hooks on them to help keep them in the vagina, and copulation
may be extended to help retain the semen in the female system. Mating
can last up to three hours in minks, and dogs may "knot" or "tie"
during mating and cannot separate until the erection has subsided.
### Sperm
Sperm are made up of three parts: a **head** consisting mainly of a
prominent haploid nucleus which carries the genetic material and also an
acrosome, a **midpiece** containing many mitochondria to provide the
energy and a **tail** that provides propulsion (see diagram 13.5).
![](Anatomy_and_physiology_of_animals_A_sperm.jpg "Anatomy_and_physiology_of_animals_A_sperm.jpg")
Diagram 13.5 - A sperm
A single ejaculation may contain 2-3 hundred million sperm but even in
normal semen as many as 10% of these sperm may be abnormal and
infertile. Some may be dead while others are inactive or deformed with
double, giant or small heads or tails that are coiled or absent
altogether.
When there are too many abnormal sperm or when the sperm concentration
is low, the semen may not be able to fertilize an egg and the animal is
infertile. Make sure you don't confuse infertility with impotence,
which is the inability to copulate successfully.
Sperm do not live forever. They have a definite life span that varies
from species to species. They survive from 20 days (guinea pig)
to 60 days (bull) in the epididymis but once ejaculated into the female
tract they only live from 12 to 48 hours. When semen is used for
artificial insemination, storage under the right conditions can extend
the life span of some species.
### Artificial Insemination
In many species the male can be artificially stimulated to ejaculate and
the semen collected. It can then be diluted, stored and used to
**inseminate** females. For example bull semen can be diluted and stored
for up to 3 weeks at room temperature. If mixed with an antifreeze
solution and stored in "straws" in liquid nitrogen (at about minus
196°C) it will keep for much longer. Unfortunately the semen of chickens,
stallions and boars can only be stored for up to 2 days.
Dilution of the semen means that one male can be used to fertilise many
more females than would occur under natural conditions. There are also
advantages in the male and female not having to make physical contact.
It means that owners of females do not have to buy expensive males and
the possibility of transmitting sexually transmitted diseases is
reduced. Routine examination of the semen for sperm concentration,
quality and activity allows only the highest quality semen to be used so
a high success rate is ensured.
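To give a feel for the scale of this dilution, the rough arithmetic below uses illustrative figures; the ejaculate volume, sperm concentration and dose size are assumptions chosen for the example, not values stated in this chapter.

```latex
% Rough, illustrative arithmetic only - the input figures are assumed.
% One bull ejaculate: about 5 mL at about 1.5 x 10^9 sperm per mL
%   = 7.5 x 10^9 sperm in total.
% If each insemination dose (straw) needs about 2 x 10^7 sperm:
\[
\frac{7.5 \times 10^{9}\ \text{sperm}}
     {2 \times 10^{7}\ \text{sperm per dose}}
\approx 375\ \text{doses}
\]
```

Even allowing generously for wastage and quality checks, figures of this order explain why one proven sire can serve far more females by artificial insemination than by natural mating.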
Since the lifespan of sperm in the female tract is so short and ova only
survive from 8 to 10 hours the timing of the artificial insemination is
critical. Successful conception depends upon detecting the time that the
animal is "on heat" and when ovulation occurs.
## The Female Reproductive Organs
The female reproductive system consists of a pair of **ovaries** that
produce egg cells or **ova** and **fallopian tubes** where fertilisation
occurs and which carry the fertilised ovum to the **uterus**. Growth of
the foetus takes place here. The **cervix** separates the uterus from
the **vagina** or birth canal, where the sperm are deposited (see
diagram 13.6).
![](Female_repro_system_labelled.JPG "Female_repro_system_labelled.JPG")
Diagram 13.6. - The reproductive system of a female rabbit
Note that primates like humans have a uterus with a single compartment
but in most mammals the uterus is divided into two separate parts or
**horns** as shown in diagram 13.6.
### The Ovaries
Ovaries are small oval organs situated in the abdominal cavity just
ventral to the kidneys. Most animals have a pair of ovaries but in birds
only the left one is functional to reduce weight (see below).
The ovary consists of an inner region (**medulla**) and an outer region
(**cortex**) containing egg cells or ova. These are formed in large
numbers around the time of birth and start to develop after the animal
becomes sexually mature. A cluster of cells called the **follicle**
surrounds and nourishes each ovum.
### The Ovarian Cycle
The **ovarian cycle** refers to the series of changes in the ovary
during which the follicle matures, the ovum is shed and the **corpus
luteum** develops (see diagram 13.7).
Numerous undeveloped ovarian follicles are present at birth but they
start to mature after sexual maturity. In animals that normally have
only one baby at a time only one ovum will mature at once but in litter
animals several will. The mature follicle consists of outer cells that
provide nourishment. Inside this is a fluid-filled space that contains
the ovum.
A mature follicle can be quite large, ranging from a few millimetres in
small mammals to the size of a golf ball in large animals. It bulges out
from the surface of the ovary before eventually rupturing to release the
ovum into the abdominal cavity. Once the ovum has been shed, a blood
clot forms in the empty follicle. This develops into a tissue called the
**corpus luteum** that produces the hormone **progesterone** (see
diagram 13.9). If the animal becomes pregnant the corpus luteum
persists, but if there is no pregnancy it degenerates and a new ovarian
cycle usually begins.
![](Anatomy_and_physiology_of_animals_Ovarian_cycle_showing_from_top_left_clockwise.jpg "Anatomy_and_physiology_of_animals_Ovarian_cycle_showing_from_top_left_clockwise.jpg")
Diagram 13.7 - The ovarian cycle showing from the top left clockwise:
the maturation of the ovum over time, followed by ovulation and the
development of the corpus luteum in the empty follicle
### The Ovum
When the ovum is shed the nucleus is in the final stages of meiosis
(cell division). It is surrounded by a few layers of follicle cells and
a tough membrane called the **zona pellucida** (see diagram 13.8).
![](Anatomy_and_physiology_of_animals_An_ovum.jpg "Anatomy_and_physiology_of_animals_An_ovum.jpg")
Diagram 13.8 - An ovum
### The Oestrous Cycle
The **oestrous cycle** is the sequence of hormonal changes that occurs
through the **ovarian cycle**. These changes influence the behaviour and
body changes of the female (see diagram 13.9).
![](Anatomy_and_physiology_of_animals_The_oestrous_cycle.jpg "Anatomy_and_physiology_of_animals_The_oestrous_cycle.jpg")
Diagram 13.9 - The oestrous cycle
The first hormone involved in the oestrous cycle is **follicle
stimulating hormone (F.S.H.),** secreted by the **anterior pituitary
gland** (see chapter 16). It stimulates the follicle to develop. As the
follicle matures the outer cells begin to secrete the hormone
**oestrogen** and this stimulates the mammary glands to develop. It also
prepares the lining of the uterus to receive a fertilised egg. Ovulation
is initiated by a surge of another hormone from the anterior pituitary,
**luteinising hormone (L.H.).** This hormone also influences the
development of the corpus luteum, which produces **progesterone**, a
hormone that prepares the lining of the uterus for the fertilised ovum
and readies the mammary glands for milk production. If no pregnancy
takes place the corpus luteum shrinks and the production of progesterone
decreases. This causes FSH to be produced again and a new oestrous cycle
begins.
For fertilisation of the ovum by the sperm to occur, the female must be
receptive to the male at around the time of ovulation. This is when the
hormones turn on the signs of "**heat**", and she is "**in season**" or
"**in oestrous**". These signs are turned off again at the end of the
oestrous cycle.
During the oestrous cycle the lining of the uterus (**endometrium**)
thickens ready for the fertilised ovum to be implanted. If no pregnancy
occurs this thickened tissue is absorbed and the next cycle starts. In
humans and other higher primates, however, the endometrium is shed as a
flow of blood and instead of an oestrous cycle there is a **menstrual
cycle**.
The length of the oestrous cycle varies from species to species. In rats
the cycle only lasts 4--5 days and they are sexually receptive for about
14 hours. Dogs have a cycle that lasts 60--70 days with heat lasting 7--9
days, while horses have a 21-day cycle with heat lasting an average of 6
days.
**Ovulation** is spontaneous in most animals but in some, e.g. the cat
and the rabbit, ovulation is stimulated by mating. This is called
**induced ovulation**.
### Signs Of Oestrus Or Heat
:\*When on heat a bitch has a blood stained discharge from the **vulva**
that changes a little later to a straw coloured one that attracts all
the dogs in the neighbourhood.
:\* Female cats "call" at night, roll and tread the carpet and are
generally restless but will "stand" firm when pressure is placed on the
pelvic region (this is the lordosis response).
:\* A female rat shows the lordosis response when on heat. It will
"mount" other females and be more active than normal.
:\* A cow mounts other cows (bulling), bellows, is restless and has a
discharge from the vulva.
### Breeding Seasons And Breeding Cycles
Only a few animals breed throughout the year. This includes the higher
primates (humans, gorillas and chimpanzees etc.), pigs, mice and
rabbits. These are known as **continuous breeders**.
Most other animals restrict reproduction to one or two seasons in the
year - **seasonal breeders** (see diagram 13.10). There are several
reasons for this. It means the young can be born at the time (usually
spring) when feed is most abundant and temperatures are favourable. It
is also sensible to restrict the breeding season because courtship,
mating, gestation and the rearing of young can exhaust the energy
resources of an animal as well as make them more vulnerable to
predators.
![](Anatomy_and_physiology_of_animals_Breeding_cycles.jpg "Anatomy_and_physiology_of_animals_Breeding_cycles.jpg")
Diagram 13.10 - Breeding cycles
The timing of the breeding cycle is often determined by day length. For
example the shortening day length in autumn will bring sheep and cows
into season so the foetus can gestate through the winter and be born in
spring. In cats the increasing day length after the winter solstice
(shortest day) stimulates breeding. The number of times an animal comes
into season during the year varies, as does the number of oestrous
cycles during each season. For example a dog usually has 2-3 seasons per
year, each usually consisting of just one oestrous cycle. In contrast
ewes usually restrict breeding to one season and can continue to cycle
as many as 20 times if they fail to become pregnant.
## Fertilisation and Implantation
### Fertilization
The opening of the fallopian tube lies close to the ovary and after
ovulation the ovum is swept into its funnel-like opening and is moved
along it by the action of cilia and wave-like contractions of the wall.
**Copulation** deposits several hundred million sperm in the vagina.
They swim through the cervix and uterus to the fallopian tubes moved
along by whip-like movements of their tails and contractions of the
uterus. During this journey the sperm undergo their final phase of
maturation so they are ready to fertilize the ovum by the time they
reach it in the upper fallopian tube.
High mortality means only a small proportion of those deposited actually
reach the ovum. The sperm attach to the outer **zona pellucida** and
enzymes secreted from a gland in the head of the sperm dissolve this
membrane so it can enter. Once one sperm has entered, changes in the
**zona pellucida** prevent further sperm from penetrating. The sperm
loses its tail and the two nuclei fuse to form a **zygote** with the
full set of paired chromosomes restored.
### Development Of The Morula And Blastocyst
As the fertilised egg travels down the fallopian tube it starts to
divide by mitosis. First two cells are formed and then four, eight,
sixteen, etc. until there is a solid ball of cells. This is called a
**morula**. As division continues a hollow ball of cells develops. This
is a **blastocyst** (see diagram 13.11).
### Implantation
Implantation involves the blastocyst attaching to, and in some species,
completely sinking into the wall of the uterus.
## Pregnancy
### The Placenta And Fetal Membranes
As the **embryo** increases in size, the **placenta**, **umbilical
cord** and **fetal membranes** (often known collectively as the
**placenta**) develop to provide it with nutrients and remove waste
products (see diagram 13.12). In later stages of development the embryo
becomes known as a **fetus**.
The placenta is the organ that attaches the fetus to the wall of the
uterus. In it the blood of the fetus and mother flow close to each other
but never mix (see diagram 13.13). The closeness of the maternal and
fetal blood systems allows diffusion between them. Oxygen and nutrients
diffuse from the mother's blood into that of the fetus and carbon
dioxide and excretory products diffuse in the other direction. Most
maternal hormones (except adrenaline), antibodies, almost all drugs
(including alcohol), lead and DDT also pass across the placenta.
However, it protects the fetus from infection with bacteria and most
viruses.
![](Anatomy_and_physiology_of_animals_Development_&_implantation_of_the_embryo.jpg "Anatomy_and_physiology_of_animals_Development_&_implantation_of_the_embryo.jpg")
Diagram 13.11 - Development and implantation of the embryo
![](Anatomy_and_physiology_of_animals_Fetus_and_placenta.jpg "Anatomy_and_physiology_of_animals_Fetus_and_placenta.jpg")
Diagram 13.12. The fetus and placenta
The fetus is attached to the placenta by the **umbilical cord**. It
contains arteries that carry blood to the placenta and a vein that
returns blood to the fetus. The developing fetus becomes surrounded by
membranes. These enclose the amniotic fluid that protects the fetus from
knocks and other trauma (see diagram 13.12).
![](Anatomy_and_physiology_of_animals_Maternal_and_fetal_blood_flow_in_the_placenta.jpg "Anatomy_and_physiology_of_animals_Maternal_and_fetal_blood_flow_in_the_placenta.jpg")
Diagram 13.13 - Maternal and fetal blood flow in the placenta
### Hormones During Pregnancy
The corpus luteum continues to secrete progesterone and oestrogen during
pregnancy. These maintain the lining of the uterus and prepare the
mammary glands for milk secretion. Later in the pregnancy the placenta
itself takes over the secretion of these hormones.
**Chorionic gonadotrophin** is another hormone secreted by the placenta
and placental membranes. It prevents uterine contractions before labour
and prepares the mammary glands for lactation. Towards the end of
pregnancy the placenta and ovaries secrete **relaxin**, a hormone that
eases the joint between the two parts of the pelvis and helps dilate the
cervix ready for birth.
### Pregnancy Testing
The easiest method of pregnancy detection is ultrasound, which is
noninvasive and very reliable. Later in gestation pregnancy can be
detected by taking X-rays.
In dogs and cats a blood test can be used to detect the hormone
**relaxin**.
In mares and cows palpation of the uterus via the rectum is the classic
way to determine pregnancy. It can also be done by detecting the
hormones **progesterone** or **equine chorionic gonadotrophin**
(**eCG**) in the urine. A new sensitive test measures the amount of the
hormone, **oestrone sulphate**, present in a sample of faeces. The
hormone is produced by the foal and placenta, and is only present when
there is a living foal.
In most animals, once pregnancy is advanced, there is a window of time
during which an experienced veterinarian can determine pregnancy by
feeling the abdomen.
### Gestation Period
The young of many animals (e.g. pigs, horses and elephants) are born at
an advanced state of development, able to stand and even run to escape
predators soon after they are born. These animals have a relatively long
gestation period that varies with their size e.g. from 114 days in the
pig to 640 days in the elephant.
In contrast, cats, dogs, mice, rabbits and higher primates are
relatively immature when born and totally dependent on their parents for
survival. Their gestation period is shorter and varies from 25 days in
the mouse to 31 days in rabbits and 258 days in the gorilla.
The babies of marsupials are born at an extremely immature stage and
migrate to the pouch where they attach to a teat to complete their
development. Kangaroo joeys, for example, are born 33 days after
conception and opossums after only 8 days.
## Birth
### Signs Of Imminent Birth
As the pregnancy continues, the mammary glands enlarge and may secrete a
milky substance a few days before birth occurs. The vulva may swell and
produce thick mucus and there is sometimes a visible change in the
position of the foetus. Just before birth the mother often becomes
restless, lying down and getting up frequently. Many animals seek a
secluded place where they may build a nest in which to give birth.
### Labour
Labour involves waves of uterine contractions that press the foetus
against the cervix causing it to dilate. The foetus is then pushed
through the cervix and along the vagina before being delivered. In the
final stage of labour the placenta or "afterbirth" is expelled.
### Adaptations Of The Fetus To Life Outside The Uterus
The fetus grows in the watery, protected environment of the uterus where
the mother supplies oxygen and nutrients, and waste products pass to her
blood circulation for excretion. Once the baby animal is born it must
start to breathe for itself, digest food and excrete its own waste. To
allow these functions to occur blood is re-routed to the lungs and the
glands associated with the gut start to secrete. Note that newborn
animals cannot control their own body temperature. They need to be kept
warm by the mother, litter mates and insulating nest materials.
## Milk Production
Cows, manatees and primates have two mammary glands but animals like
pigs that give birth to large litters may have as many as 12 pairs.
Ducts from the gland lead to a nipple or teat and there may be a sinus
where the milk collects before being suckled (see diagram 13.14).
![](Anatomy_and_physiology_of_animals_reproduction_Mammary_gland.jpg "Anatomy_and_physiology_of_animals_reproduction_Mammary_gland.jpg")
Diagram 13.14 - A mammary gland
The hormones **oestrogen** and **progesterone** stimulate the mammary
glands to develop and **prolactin** promotes the secretion of the milk.
**Oxytocin** from the pituitary gland releases the milk when the baby
suckles. The first milk is called **colostrum**. It is rich in
nutrients and contains protective antibodies from the mother. Milk
contains fat, protein and milk sugar as well as vitamins and most
minerals although it contains little iron. Its actual composition varies
widely from species to species. For example whale's and seal's milk has
twelve times more fat and four times more protein than cow's milk. Cow's
milk has far less protein in it than cat's or dog's milk. This is why
orphan kittens and puppies cannot be fed cow's milk.
## Reproduction In Birds
Male birds have testes and sperm ducts and male swans, ducks, geese and
ostriches have a penis. However, most birds make do with a small amount
of erectile tissue known as a **papilla**. To reduce weight for flight
most female birds only have one ovary - usually the left, which produces
extremely yolky eggs. The eggs are fertilised in the upper part of the
oviduct (equivalent to the fallopian tube and uterus of mammals) and as
they pass down it **albumen** (the white of the egg), the membrane
beneath the shell and the shell are laid down over the yolk. Finally the
egg is covered in a layer of mucus to help the bird lay it (see diagram
13.15).
Most birds lay their eggs in a nest and the hen sits on them until they
hatch. Ducklings and chicks are relatively well developed when they
hatch and able to forage for their own food. Most other nestlings need
their parents to keep them warm, clean and fed. Young birds grow rapidly
and have voracious appetites that may involve the parents making up to
1000 trips a day to supply their need for food.
![](Anatomy_and_physiology_of_animals_Female_reproductive_organs_of_a_bird.jpg "Anatomy_and_physiology_of_animals_Female_reproductive_organs_of_a_bird.jpg")
Diagram 13.15 - Female reproductive organs of a bird
## Summary
- **Haploid** gametes (sperm and ova) are produced by meiosis in the
**gonads** (testes and ovaries).
- Fertilization involves the fusing of the gametes to form a diploid
**zygote**.
- The male reproductive system consists of a pair of **testes** that
produce sperm (or **spermatozoa**), ducts that transport the sperm
to the penis and glands that add secretions to the sperm to make
semen.
- Sperm are produced in the **seminiferous tubules**, are stored in
the **epididymis** and travel via the **vas deferens** or **sperm
duct** to the junction of the bladder and the **urethra** where
various accessory glands add secretions. The fluid is now called
**semen** and is ejaculated into the female system down the
**urethra** that runs down the centre of the penis.
- Sperm consist of a head, a midpiece and a tail.
- **Infertility** is the inability of sperm to fertilize an egg while
**impotence** is the inability to copulate successfully.
- The female reproductive system consists of a pair of **ovaries**
that produce **ova** and **fallopian tubes** where fertilization
occurs and which carry the fertilized ovum to the **uterus**. Growth
of the fetus takes place here. The **cervix** separates the uterus
from the **vagina**, the birth canal, where the sperm are
deposited.
- The **ovarian cycle** refers to the series of changes in the ovary
during which the follicle matures, the ovum is shed and the **corpus
luteum** develops.
- The **oestrous cycle** is the sequence of hormonal changes that
occurs through the ovarian cycle. It is initiated by the secretion
of **follicle stimulating hormone (F.S.H.),** by the **anterior
pituitary gland** which stimulates the **follicle** to develop. The
follicle secretes **oestrogen** which stimulates **mammary gland**
development. **Luteinising hormone (L.H.)** from the anterior
pituitary initiates **ovulation** and stimulates the **corpus
luteum** to develop. The corpus luteum produces **progesterone**
that prepares the lining of the uterus for the fertilized ovum.
- **Signs of oestrus** or heat differ. A bitch has a blood stained
discharge, female cats and rats are restless and show the lordosis
response, while cows mount other cows, bellow and have a discharge
from the vulva.
- After fertilization in the fallopian tube the **zygote** divides
over and over by mitosis to become a ball of cells called a
**morula**. Division continues to form a hollow ball of cells called
the **blastocyst**. This is the stage that **implants** in the
uterus.
- The **placenta, umbilical cord** and **fetal membranes** (known as
the **placenta**) protect and provide the developing fetus with
nutrients and remove waste products.
## Worksheet
Reproductive System
Worksheet
## Test Yourself
1\. Add the following labels to the diagram of the male reproductive
organs below.
:
: testis \| epididymis \| vas deferens \| urethra \| penis \|
scrotal sac \| prostate gland
Diagram of the Male Reproductive System
2\. Match the following descriptions with the choices given in the list
below.
:
: accessory glands \| vas deferens or sperm duct \| penis \|
scrotum \| fallopian tube \| testes \| urethra \| vagina \|
uterus \| ovary \| vulva
: a\) Organ that delivers semen to the female vagina
: b\) Where the sperm are produced
: c\) Passage for sperm from the epididymis to the urethra
: d\) Carries both sperm and urine down the penis
: e\) Glands that produce secretions that make up most of the semen
: f\) Bag of skin surrounding the testes
: g\) Where the foetus develops
: h\) This receives the penis during copulation
: i\) Where fertilisation usually occurs
: j\) Ova travel along this tube to reach the uterus
: k\) Where the ova are produced
: l\) The external opening of the vagina
3\. Which hormone is described in each statement below?
: a\) This hormone stimulates the growth of the follicles in the ovary
: b\) This hormone converts the empty follicle into the corpus luteum
and stimulates it to produce progesterone
: c\) This hormone is produced by the cells of the follicle
: d\) This hormone is produced by the corpus luteum
: e\) This hormone causes the mammary glands to develop
: f\) This hormone prepares the lining of the uterus to receive a
fertilised ovum
4\. State whether the following statements are true or false. If false
write in the correct answer.
: a\) Fertilisation of the egg occurs in the uterus
: b\) The fertilised egg cell contains half the normal number of
chromosomes
: c\) The morula is a hollow ball of cells
: d\) The mixing of the blood of the mother and foetus allows
nutrients and oxygen to transfer easily to the foetus
: e\) The morula implants in the wall of the uterus
: f\) The placenta is the organ that supplies the foetus with oxygen
and nutrients
: g\) Colostrum is the first milk
: h\) Young animals often have to be given calcium supplements because
milk contains very little calcium
/Test Yourself Answers/
## Websites
- <http://www.anatomicaltravel.com/CB_site/Conception_to_birth3.htm>
Anatomical travel. Images of fertilisation and the development of
the (human) embryo through to birth.
- <http://www.uchsc.edu/ltc/fert.swf> Fertilisation. A great animation
of fertilisation, formation of the zygote and first mitotic
division. A bit advanced but still worth watching.
- <http://www.uclan.ac.uk/facs/health/nursing/sonic/scenarios/salfordanim/heart.swf>
Sonic. An animation showing the foetal blood circulation through the
placenta to the changes allowing circulation through the lungs after
birth.
- <http://en.wikipedia.org/wiki/Estrus> Wikipedia. As always, good
interesting information although some terms and concepts are beyond
the requirements of this level.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/Nervous System
Original image from a royalty-free image collection, used under a CC-BY licence.
## Objectives
After completing this section, you should know:
- the role of the nervous system in coordinating an animal's response
to the environment
- that the nervous system gathers, sorts and stores information and
initiates movement
- the basic structure and functions of a neuron
- the structure and function of a synapse and neurotransmitter
chemicals
- the nervous pathway known as a reflex with examples
- that training can develop conditioned reflexes in animals
- that the nervous system can be divided into the central and
peripheral nervous systems
- that the brain is surrounded by membranes called meninges
- the basic parts of the brain and the function of the cerebral
hemispheres, hypothalamus, pituitary, cerebellum and medulla
oblongata
- the structure and function of the spinal cord
- that the peripheral nervous system consists of cranial and spinal
nerves and the autonomic nervous system
- that the autonomic nervous system consists of sympathetic and
parasympathetic parts with different functions
## Coordination
Animals must be able to sense and respond to the environment in which
they live if they are to survive. They need to be able to sense the
temperature of their surroundings, for example, so they can avoid the
hot sun. They must also be able to identify food and escape predators.
The various systems and organs in the body must also be linked so they
work together. For example, once a predator has identified suitable prey
it has to catch it. This involves coordinating the contraction of the
muscles so the predator can run; there must then be an increased blood
supply to the muscles to provide them with oxygen and nutrients. At the
same time the respiration rate must increase to supply the oxygen and
remove the carbon dioxide produced as a result of this increased
activity. Once the prey has been caught and eaten, the digestive system
must be activated to digest it.
The adjustment of an animal's response to changes in the environment and
the complex linking of the various processes in the body that this
response involves are called **co-ordination**. Two systems are involved
in co-ordination in animals. These are the **nervous** and **endocrine
systems**. The first operates via electrical impulses along nerve fibres
and the second by releasing special chemicals or hormones into the
bloodstream from glands.
## Functions of the Nervous System
The nervous system has three basic functions:
: 1\. **Sensory function** - to sense changes (known as stimuli) both
outside and within the body. For example the eyes sense changes in
light and the ear responds to sound waves. Inside the body, stretch
receptors in the stomach indicate when it is full and chemical
receptors in the blood vessels monitor the acidity of the blood.
: 2\. **Integrative function** - processing the information received
from the sense organs. The impulses from these organs are analysed
and stored as memory. The many different impulses from different
sources are sorted, synchronised and co-ordinated and the
appropriate response initiated. The power to integrate, remember and
apply experience gives higher animals much of their superiority.
: 3\. **Motor function** - The third function is the response to the
stimuli that causes muscles to contract or glands to secrete.
All nervous tissue is made up of nerve cells or **neurons.** These
transmit high-speed signals called **nerve impulses**. Nerve impulses
can be thought of as being similar to an electric current.
## The Neuron
Neurons are cells that have been adapted to carry nerve impulses. A
typical neuron has a **cell body** containing a nucleus, one or more
branching filaments called **dendrites** which conduct nerve impulses
towards the cell body and one long fibre, an **axon**, that carries the
impulses away from it. Many axons have a sheath of fatty material called
**myelin** surrounding them. This speeds up the rate at which the nerve
impulses travel along the nerve (see diagram 14.1).
![](Anatomy_and_physiology_of_animals_Motor_neuron.jpg "Anatomy_and_physiology_of_animals_Motor_neuron.jpg")
Diagram 14.1 - A motor neuron
The cell body of neurons is usually located in the brain or spinal cord
while the axon extends the whole distance to the organ that it supplies.
The neuron carrying impulses from the spinal cord to the hind leg or
tail of a horse, for example, can be several feet long. A **nerve** is a
bundle of axons.
A **sensory neuron** is a nerve cell that transmits impulses from a
sense receptor such as those in the eye or ear to the brain or spinal
cord. A **motor neuron** is a nerve cell that transmits impulses from
the brain or spinal cord to a muscle or gland. A **relay neuron**
connects sensory and motor neurons and is found in the brain or spinal
cord (see diagrams 14.1 and 14.2).
![](Anatomy_and_physiology_of_animals_Relation_btw_sensory,_relay_&_motor_neurons.jpg "Anatomy_and_physiology_of_animals_Relation_btw_sensory,_relay_&_motor_neurons.jpg")
Diagram 14.2 - The relationship between sensory, relay and motor neurons
### Connections Between Neurons
The connection between adjacent neurons is called a **synapse**. The two
nerve cells do not actually touch here for there is a microscopic space
between them. The electrical impulse in the neurone before the synapse
stimulates the production of chemicals called **neurotransmitters**
(such as **acetylcholine**), which are secreted into the gap.
The neurotransmitter chemicals diffuse across the gap and when they
contact the membrane of the next nerve cell they stimulate a new nervous
impulse (see diagram 14.3). After the impulse has passed the chemical is
destroyed and the synapse is ready to receive the next nerve impulse.
![](Anatomy_and_physiology_of_animals_Magnification_of_a_synapse.jpg "Anatomy_and_physiology_of_animals_Magnification_of_a_synapse.jpg")
Diagram 14.3 - A nerve and magnification of a synapse
## Reflexes
A **reflex** is a rapid automatic response to a stimulus. When you
accidentally touch a hot object and automatically jerk your hand away,
this is a reflex action. It happens without you having to think about
it. Animals automatically blink when an object approaches the eye and
cats twist their bodies in the air when falling so they land on their
paws. (Please don't test this one at home with your pet cat!).
Swallowing, sneezing, and the constriction of the pupil of the eye in
bright light are also all reflex actions. Other examples of reflex
actions in animals include shivering with cold and opening the
mouth on hearing a sudden loud noise.
The path taken by the nerve impulses in a reflex is called a **reflex
arc**. Most reflex arcs involve only three neurons (see diagram 14.4).
The **stimulus** (a pin in the paw) stimulates the pain receptors of the
skin, which initiate an impulse in a sensory neuron. This travels to the
spinal cord where it passes, by means of a synapse, to a connecting
neuron called the relay neuron situated in the spinal cord. The relay
neuron in turn makes a synapse with one or more motor neurons that
transmit the impulse to the muscles of the limb causing them to contract
and remove the paw from the sharp object. Reflexes do not require
involvement of the brain although you are aware of what is happening and
can, in some instances, prevent them happening. Animals are born with
their reflexes. You can think of them as being wired in.
![](Anatomy_and_physiology_of_animals_A_reflex_arc.jpg "Anatomy_and_physiology_of_animals_A_reflex_arc.jpg")
Diagram 14.4 - A reflex arc
### Conditioned Reflexes
In most reflexes the stimulus and response are related. For example the
presence of food in the mouth causes the salivary glands to release
saliva. However, it is possible to train animals (and humans) to respond
to different and often quite irrelevant stimuli. This is called a
**conditioned reflex**.
A Russian biologist called Pavlov carried out the classic experiment to
demonstrate such a reflex when he conditioned dogs to salivate at the
sound of a bell ringing. Almost every pet owner can identify reflexes
they have conditioned in their animals. Perhaps you have trained your
cat to associate food with the opening of the fridge door or accustomed
your dog to the routines you go through before taking them for a walk.
## Parts of the Nervous System
When we describe the nervous system of vertebrates we usually divide it
into two parts (see diagram 14.5).
: 1\. The **central nervous system** (**CNS**) which consists of the
brain and spinal cord.
: 2\. The **peripheral nervous system** (**PNS**) which consists of
the nerves that connect to the brain and spinal cord (cranial and
spinal nerves) as well as the **autonomic** (or involuntary) nervous
system.
![](Horse_nervous_system_labelled.JPG "Horse_nervous_system_labelled.JPG")
Diagram 14.5 - The nervous system of a horse
### The Central Nervous System
The **central nervous system** consists of the brain and spinal cord. It
acts as a kind of 'telephone exchange' where a vast number of cross
connections are made.
When you look at the brain or spinal cord some regions appear creamy
white (**white matter**) and others appear grey (**grey matter**). White
matter consists of masses of nerve axons and the grey matter consists of
the cell bodies of the nerve cells. In the brain the grey matter is on
the outside and in the spinal cord it is on the inside (see diagram
14.2).
#### The Brain
The major part of the brain lies protected within the sturdy "box" of
skull called the **cranium**. Surrounding the fragile brain tissue (and
spinal cord) are protective membranes called the **meninges** (see
diagram 14.6), and a crystal-clear fluid called **cerebrospinal fluid**,
which protects and nourishes the brain tissue. This fluid also fills
four cavities or **ventricles** that lie within the brain.
Brain tissue is extremely active and, even when an animal is resting, it
uses up to 20% of the oxygen taken into the body by the lungs. The
**carotid artery**, a branch off the dorsal aorta, supplies it with the
oxygen and nutrients it requires. Brain damage occurs if brain tissue is
deprived of oxygen for only 4--8 minutes.
The brain consists of three major regions:
: 1\. the **fore brain** which includes the **cerebral hemispheres**,
**hypothalamus** and **pituitary gland**;
: 2\. the **hind brain** or **brain stem**, which contains the **medulla
oblongata** and **pons**; and
: 3\. the **cerebellum** or "little brain" (see diagram 14.6).
![](Anatomy_and_physiology_of_animals_Longitudinal_section_through_brain_of_a_dog.jpg "Anatomy_and_physiology_of_animals_Longitudinal_section_through_brain_of_a_dog.jpg")
Diagram 14.6 - Longitudinal section through the brain of a dog
##### Mapping the brain
In humans and some animals the functions of the different regions of the
cerebral cortex have been mapped (see diagram 14.7).
Diagram 14.7 - The functions of the regions of the human cerebral cortex
##### The Forebrain
The **cerebral hemispheres** are the masses of brain tissue that sit on
the top of the brain. The surface is folded into ridges and furrows (the
furrows are called **sulci**, singular sulcus). They make this part of
the brain
look rather like a very large walnut kernel. The two hemispheres are
separated by a deep groove although they are connected internally by a
thick bundle of nerve fibres. The outer layer of each hemisphere is
called the **cerebral cortex** and this is where the main functions of
the cerebral hemispheres are carried out.
The cerebral cortex is large and convoluted in mammals compared to other
vertebrates and largest of all in humans because this is where the
so-called "higher centres" concerned with memory, learning, reasoning
and intelligence are situated.
Nerves from the eyes, ears, nose and skin bring sensory impulses to the
cortex where they are interpreted. Appropriate voluntary movements are
initiated here in the light of the memories of past events.
Different regions of the cortex are responsible for particular sensory
and motor functions, e.g. vision, hearing, taste, smell, or moving the
fore-limbs, hind-limbs or tail. For example, when a dog sniffs a scent,
sensory impulses from the organ of smell in the nose pass via the
olfactory (smelling) nerve to the olfactory centres of the cerebral
hemispheres where the impulses are interpreted and co-ordinated.
In humans and some animals the functions of the different regions of the
cerebral cortex have been mapped (see diagram 14.8).
![](Anatomy_and_physiology_of_animals_Functions_of_the_regions_of_the_cerebral_cortex.jpg "Anatomy_and_physiology_of_animals_Functions_of_the_regions_of_the_cerebral_cortex.jpg")
Diagram 14.8 - The functions of the regions of the cerebral cortex
The **hypothalamus** is situated at the base of the brain and is
connected by a "stalk" to the **pituitary gland**, the "master"
hormone-producing gland (see chapter 16). The hypothalamus can be
thought of as the bridge between the nervous and endocrine (hormone
producing) systems. It produces some of the hormones that are released
from the pituitary gland and controls the release of others from it.
It is also an important centre for controlling the internal environment
of the animal and therefore maintaining homeostasis. For example, it
helps regulate the movement of food through the gut and the temperature,
blood pressure and concentration of the blood. It is also responsible
for the feeling of being hungry or thirsty and it controls sleep
patterns and sex drive.
##### The Hindbrain
The **medulla oblongata** is at the base of the brain and is a
continuation of the spinal cord. It carries all signals between the
spinal cord and the brain and contains centres that control vital body
functions like the basic rhythm of breathing, the rate of the heartbeat
and the activities of the gut. The medulla oblongata also co-ordinates
swallowing, vomiting, coughing and sneezing.
##### The Cerebellum
The **cerebellum** (little brain) looks rather like a smaller version of
the cerebral hemispheres attached to the back of the brain. It receives
impulses from the organ of balance (vestibular organ) in the inner ear
and from stretch receptors in the muscles and tendons. By co-ordinating
these it regulates muscle contraction during walking and running and
helps maintain the posture and balance of the animal. When the
cerebellum malfunctions it causes a tremor and uncoordinated movement.
#### The Spinal Cord
The spinal cord is a cable of nerve tissue that passes down the channel
in the vertebrae from the hindbrain to the end of the tail. It becomes
progressively smaller as paired **spinal nerves** pass out of the cord
to parts of the body. Protective membranes or meninges cover the cord
and these enclose cerebrospinal fluid (see diagram 14.9).
![](Anatomy_and_physiology_of_animals_The_spinal_cord.jpg "Anatomy_and_physiology_of_animals_The_spinal_cord.jpg")
Diagram 14.9 - The spinal cord
If you cut across the spinal cord you can see that it consists of white
matter on the outside and grey matter in the shape of an H or butterfly
on the inside.
### The Peripheral Nervous System
The **peripheral nervous system** consists of nerves that are connected
to the brain (**cranial nerves**), and nerves that are connected to the
spinal cord (**spinal nerves**). The **autonomic nervous system** is
also part of the peripheral nervous system.
#### Cranial Nerves
There are twelve pairs of cranial nerves that come from the brain. Each
passes through a hole in the cranium (brain case). The most important of
these are the olfactory, optic, acoustic and vagus nerves.
The **olfactory nerves** - (smell) carry impulses from the olfactory
organ of the nose to the brain.
The **optic nerves** - (sight) carry impulses from the retina of the eye
to the brain.
The **auditory (acoustic) nerves** - (hearing) carry impulses from the
cochlea of the inner ear to the brain.
The **vagus nerve** - controls the muscles that bring about swallowing.
It also controls the muscles of the heart, airways, lungs, stomach and
intestines (see diagram 14.5).
#### Spinal Nerves
**Spinal nerves** connect the spinal cord to sense organs, muscles and
glands in the body. Pairs of spinal nerves leave the spinal cord and
emerge between each pair of adjacent vertebrae (see diagram 14.9).
The **sciatic nerve** is the largest spinal nerve in the body (see
diagram 14.5). It leaves the spinal cord as several nerves that join to
form a flat band of nervous tissue. It passes down the thigh towards the
hind leg where it gives off branches to the various muscles of this
limb.
#### The Autonomic Nervous System
The **autonomic nervous system** controls internal body functions that
are not under conscious control. For example when a prey animal is
chased by a predator the autonomic nervous system automatically
increases the rate of breathing and the heartbeat. It dilates the blood
vessels that carry blood to the muscles, releases glucose from the
liver, and makes other adjustments to provide for the sudden increase in
activity. When the animal has escaped and is safe once again the nervous
system slows down all these processes and resumes all the normal body
activities like the digestion of food.
The nerves of the autonomic nervous system originate in the spinal cord
and pass out between the vertebrae to serve the various organs (see
diagram 14.10). There are two main parts to the autonomic nervous
system---the **sympathetic system** and the **parasympathetic system**.
The **sympathetic system** stimulates the "flight, fright, fight"
response that allows an animal to face up to an attacker or make a rapid
departure. It increases the heart and respiratory rates, as well as the
amount of blood flowing to the skeletal muscles while blood flow to less
critical regions like the gut and skin is reduced. It also causes the
pupils of the eyes to dilate. Note that the effects of the sympathetic
system are similar to the effects of the hormone adrenaline (see Chapter
16).
The **parasympathetic system** does the opposite to the sympathetic
system. It maintains the normal functions of the relaxed body. These are
sometimes known as the "housekeeping" functions. It promotes effective
digestion, stimulates defaecation and urination and maintains a regular
heartbeat and rate of breathing.
![](Anatomy_and_physiology_of_animals_Function_of_the_sympathetic_&_parasympathetic_nervous_systems.jpg "Anatomy_and_physiology_of_animals_Function_of_the_sympathetic_&_parasympathetic_nervous_systems.jpg")
Diagram 14.10 - The function of the sympathetic and parasympathetic
nervous systems
## Summary
- The **neuron** is the basic unit of the nervous system. It consists
of a **cell body** with a nucleus, filaments known as **dendrites**
and a long fibre known as the **axon** often surrounded by a
**myelin sheath**.
- A **nerve** is a bundle of axons.
- **Grey matter** in the brain and spinal cord consists mainly of
nerve cell bodies while **white matter** consists of masses of axons.
- **Nerve Impulses** travel along axons.
- Adjacent neurons connect with each other at **synapses**.
- **Reflexes** are automatic responses to stimuli. The path taken by
nerve impulses involved in reflexes is a **reflex arc**. Most reflex
arcs involve 3 neurons - a **sensory neuron**, a **relay neuron**
and a **motor neuron**. A stimulus, a pin in the paw for example,
initiates an impulse in the sensory neuron that passes via a synapse
to the relay neuron situated in the spinal cord and then via another
synapse to the motor neurone. This transmits the impulse to the
muscle causing it to contract and remove the paw from the pin.
- The nervous system is divided into 2 parts: the **central nervous
system**, consisting of the brain and spinal cord and the
**peripheral nervous system** consisting of nerves connected to the
brain and spinal cord. The **autonomic nervous system** is
considered to be part of the peripheral nervous system.
- The brain consists of three major regions: 1. the **fore brain**
which includes the **cerebral hemispheres** (or **cerebrum**),
**hypothalamus** and **pituitary gland**; 2. the **hindbrain** or
**brain stem** containing the **medulla oblongata** and 3. the
**cerebellum**.
- Protective membranes known as the **meninges** surround the brain
and spinal cord.
- There are 12 pairs of cranial nerves that include the optic,
olfactory, acoustic and **vagus** nerves.
- The **spinal cord** is a cable of nerve tissue surrounded by
meninges passing from the brain to the end of the tail. **Spinal
nerves** emerge by a **ventral** and **dorsal root** between each
vertebra and connect the spinal cord with organs and muscles.
- The **autonomic nervous system** controls internal body functions
not under conscious control. It is divided into 2 parts with 2
different functions: the **sympathetic nervous system** that is
involved in the flight and fight response including increased heart
rate, bronchial dilation, dilation of the pupil and decreased gut
activity. The **parasympathetic nervous system** is associated with
decreased heart rate, pupil constriction and increased gut activity.
## Worksheet
Nervous System Worksheet
## Test Yourself
1\. Add the following labels to this diagram of a motor neuron.
: cell body \| nucleus \| axon \| dendrites \| myelin sheath \| muscle
fibres
![](Anatomy_and_physiology_unlabeled_neuron.jpg "Anatomy_and_physiology_unlabeled_neuron.jpg")
2\. What is a synapse?
3\. What is a reflex?
4\. Rearrange the parts of a reflex arc given below in the order in
which the nerve impulse travels from the sense organ to the muscle.
: sense organ \| relay neuron \| motor neuron \| sensory neuron \|
muscle fibres
5\. Add the following labels to the diagram of the dog's brain shown
below.
: cerebellum \| cerebral hemisphere \| cerebral cortex \| pituitary
gland \| medulla oblongata
![](Anatomy_and_physiology_unlabeled_LS_dog's_brain.jpg "Anatomy_and_physiology_unlabeled_LS_dog's_brain.jpg")
6\. What is the function of the meninges that cover the brain and spinal
cord?
7\. Give 3 effects of the action of the sympathetic nervous system.
/Test Yourself Answers/
## Websites
- <http://en.wikipedia.org/wiki/Neuron> Wikipedia. Lots of good
information here but as usual a warning that there are terms and
concepts that are beyond the scope of this course. Also try 'reflex
action' and 'autonomic nervous system'.
- <http://images.google.co.nz/imgres?imgurl=http://static.howstuffworks.com/gif/brain-neuron.gif&imgrefurl=http://science.howstuffworks.com/brain1.htm&h=296&w=394&sz=17&hl=en&start=5&tbnid=LWLRI9lW_5PZhM:&tbnh=93&tbnw=124&prev=/images%3Fq%3Dneuron%26svnum%3D10%26hl%3Den%26lr%3D%26sa%3DN>
How Stuff Works. This site is for the neuron but try 'neuron types',
'brain parts' and 'balancing act' too.
- <http://web.archive.org/web/20060821134839/http://www.bbc.co.uk/schools/gcsebitesize/flash/bireflexarc.swf>
Reflex Arc. Nice clear and simple animation of a reflex arc.
## Glossary
- Link to Glossary
# Anatomy and Physiology of Animals/The Senses
Original image by miss pupik, CC BY.
## Objectives
After completing this section, you should know:
- that the general senses of touch, pressure, pain etc. are situated
in the dermis of the skin and in the body
- that the special senses include those of smell, taste, sight,
hearing, and balance
- the main structures of the eye and their functions
- the route taken by light through the eye to the retina
- the role of the rods and cones in the retina
- the advantages of binocular vision
- the main structures of the ear and their functions
- the route taken by sound waves through the ear to the cochlea
- the role of the vestibular organ (semicircular canals and otolith
organ) in maintaining balance and posture
## The sense organs
Sense organs allow animals to sense changes in the environment around
them and in their bodies so that they can respond appropriately. They
enable animals to avoid hostile environments, sense the presence of
predators and find food.
Animals can sense a wide range of stimuli that includes touch,
pressure, pain, temperature, chemicals, light, sound, movement and
position of the body. Some animals can sense electric and magnetic
fields. All sense organs respond to stimuli by producing nerve impulses
that travel to the brain via a sensory nerve. The impulses are then
processed and interpreted in the brain as pain, sight, sound, taste etc.
The senses are often divided into two groups:
: 1\. The **general senses** of touch, pressure, pain and temperature
that are distributed fairly evenly through the skin. Some are found
in muscles and within joints.
: 2\. The **special senses** which include the senses of smell, taste,
sight, hearing and balance. The special sense organs may be quite
complex in structure.
## Touch And Pressure
Within the dermis of the skin are numerous modified nerve endings that
are sensitive to touch and pressure. The roots of hairs may also be well
supplied with sensory receptors that inform the animal of contact with an
object (see diagram 15.1). Whiskers are specially modified hairs.
## Pain
Receptors that sense pain are found in almost every tissue of the body.
They tell the animal that tissues are dangerously hot, cold, compressed
or stretched or that there is not enough blood flowing in them. The
animal may then be able to respond and protect itself from further
damage.
![](Anatomy_and_physiology_of_animals_General_senses_in_skin.jpg "Anatomy_and_physiology_of_animals_General_senses_in_skin.jpg")
Diagram 15.1 - The general senses in the skin
## Temperature
Nerve endings in the skin respond to hot and cold stimuli (See diagram
15.1).
## Awareness Of Limb Position
There are sense organs in the muscles, tendons and joints that send
continuous impulses to the brain that tell it where each limb is. This
information allows the animal to place its limbs accurately and know
their exact position without having to watch them.
## Smell
Animals use the sense of smell to locate food, mark their territory,
identify their own offspring and the presence and sexual condition of a
potential mate. The organ of smell (**olfactory organ**) is located in
the nose and responds to chemicals in the air. It consists of modified
nerve cells that have several tiny hairs on the surface. These emerge
from the epithelium on the roof of the nose cavity into the mucus that
lines it. As the animal breathes, chemicals in the air dissolve in the
mucus. When the sense cell responds to a particular molecule, it fires
an impulse that travels along the **olfactory nerve** to the brain where
it is interpreted as an odour (see diagram 15.2).
![](Anatomy_and_physiology_of_animals_Olfactory_organ_the_sense_of_smell.jpg "Anatomy_and_physiology_of_animals_Olfactory_organ_the_sense_of_smell.jpg")
Diagram 15.2 - The olfactory organ - the sense of smell
The olfactory sense in humans is rudimentary compared to that of many
animals. Carnivores that hunt have a very highly developed sensitivity
to scents. For example a polar bear can smell out a dead seal 20 km away
and a bloodhound can distinguish between the trails of different people
although it may sometimes be confused by the criss-crossing trail of
identical twins.
Snakes and lizards detect odours by means of **Jacobson's organ**. This
is situated on the roof of the mouth and consists of pits containing
sensory cells. When snakes flick out their forked tongues they are
smelling the air by carrying the molecules in it to the Jacobson's
organ.
## Taste
The sense of taste allows animals to detect and identify dissolved
chemicals. In reptiles, birds, and mammals the taste receptors (**taste
buds**) are found mainly on the upper surface of the tongue. They
consist of pits containing sensory cells arranged rather like the
segments of an orange (see diagram 15.3). Each receptor cell has a tiny
"hair" that projects into the saliva to sense the chemicals dissolved in
it.
![](Anatomy_and_physiology_of_animals_Taste_buds_on_the_tongue.jpg "Anatomy_and_physiology_of_animals_Taste_buds_on_the_tongue.jpg")
Diagram 15.3. Taste buds on the tongue
The sense of taste is quite restricted. Humans can only distinguish four
different tastes (sweet, sour, bitter and salt) and what we normally
think of as taste is mainly the sense of smell. Food is quite tasteless
when the nose is blocked and cats often refuse to eat when this happens.
## Sight
The eyes are the organs of sight. They consist of spherical **eyeballs**
situated in deep depressions in the skull called the **orbits**. They
are attached to the wall of the orbit by six muscles, which move the
eyeball. Upper and lower **eyelids** cover the eyes during sleep and
protect them from foreign objects or too much light, and spread the
tears over their surface.
The **nictitating membrane** or **haw** is a transparent sheet that
moves sideways across the eye from the inner corner, cleansing and
moistening the cornea without shutting out the light. It is found in
birds, crocodiles, frogs and fish as well as marsupials like the
kangaroo. It is rare in mammals but can be seen in cats and dogs by
gently opening the eye while the animal is asleep. **Eyelashes** also protect the
eyes from the sun and foreign objects.
### Structure of the Eye
Lining the eyelids and covering the front of the eyeball is a thin
epithelium called the **conjunctiva**. Conjunctivitis is inflammation of
this membrane. **Tear glands** that open just under the top eyelid
secrete a salty solution that keeps the exposed part of the eye moist,
washes away dust and contains an enzyme that destroys bacteria.
The wall of the eyeball is composed of three layers (see diagram 15.4).
From the outside these are the **sclera**, the **choroid** and the
**retina**.
![](Anatomy_and_physiology_of_animals_Structure_of_the_eye.jpg "Anatomy_and_physiology_of_animals_Structure_of_the_eye.jpg")
Diagram 15.4 - The structure of the eye
The **sclera** is a tough fibrous layer that protects the eyeball and
gives it rigidity. At the front of the eye the sclera is visible as the
"**white**" of the eye, which is modified as the transparent **cornea**
through which the light rays have to pass to enter the eye. The cornea
helps focus light that enters the eye.
The **choroid** lies beneath the sclera. It contains a network of blood
vessels that supply the eye with oxygen and nutrients. Its inner surface
is highly pigmented and absorbs stray light rays. In nocturnal animals
like the cat and possum, however, part of this layer is reflective and bounces
light back through the retina, a means of conserving light. This is what makes
their eyes shine when caught in car headlights.
At the front of the eye the choroid becomes the **iris**. This is the
coloured part of the eye that controls the amount of light entering the
**pupil** of the eye. In dim light the pupil is wide open so as much
light as possible enters, while in bright light the pupil constricts to
protect the retina from damage by excess light.
The **pupil** in most animals is circular but in many nocturnal animals
it is a slit that can close completely. This helps protect the
extra-sensitive light sensing tissues of animals like the cat and possum
from bright sunlight.
The inner layer lining the inside of the eye is the **retina**. This
contains the light sensing cells called **rods** and **cones** (see
diagram 15.5).
The **rod cells** are long and thin and are sensitive to dim light but
cannot detect colour. They contain large amounts of a pigment that
changes when exposed to light. This pigment comes from vitamin A found
in carrots etc. A deficiency of this vitamin causes night blindness. So
your mother was right when she told you to eat your carrots as they
would help you see in the dark!
The **cone cells** provide colour vision and allow animals to see
details. Most are found in the centre of the retina and they are most
densely concentrated in a small area called the **fovea**. This is the
area of sharpest vision, where the words you are reading at this moment
are focussed on your retina.
![](Anatomy_and_physiology_of_animals_A_rod_and_cone_from_the_retina.jpg "Anatomy_and_physiology_of_animals_A_rod_and_cone_from_the_retina.jpg")
Diagram 15.5 - A rod and cone from the retina
The nerve fibres from the cells of the retina join and leave the eye via
the **optic nerve**. There are no rods or cones here and it is a **blind
spot**. The optic nerve passes through the back of the orbit and enters
the brain.
The **lens** is situated just behind the pupil and the iris. It is a
crystalline structure with no blood vessels and is held in position by a
ligament. This is attached to a muscle, which changes the shape of the
lens so both near and distant objects can be focussed by the eye. This
ability to change the focus of the lens is called **accommodation**. In
many mammals the muscles that bring about accommodation are poorly
developed. Rats, cows and dogs, for example, are thought to be unable to
focus clearly on near objects.
In old age and certain diseases the lens may become cloudy resulting in
blurred vision. This is called a **cataract**. Within the eyeball are
two cavities, the **anterior and posterior chambers**, separated by the
lens. They contain fluids, the **aqueous** and **vitreous humours**
respectively, that maintain the shape of the eyeball and help
press the retina firmly against the choroid so clear images are seen.
### How The Eye Sees
Eyes work quite like a camera. Light rays from an object enter the eye
and are focused on the retina (the "film") at the back of the eye. The
cornea, the lens and the fluid within the eye all help to focus the
light. They do this by bending the light rays so that light from the
object falls on the retina. This bending of light is called
**refraction**. The light stimulates the light sensitive cells of the
retina and nerve impulses are produced that pass down the optic nerve to
the brain (see diagram 15.6).
![](Anatomy_and_physiology_of_animals_How_light_travels_from_the_object_to_the_retina_of_the_eye.jpg "Anatomy_and_physiology_of_animals_How_light_travels_from_the_object_to_the_retina_of_the_eye.jpg")
Diagram 15.6 - How the light travels from the object to the retina of
the eye
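The following is an illustrative aside that is not part of the original text:
the focusing described above can be approximated with the thin lens equation
from optics, treating the cornea, lens and humours as a single lens and taking
an assumed "reduced eye" image distance of about 17 mm from the optics to the
retina. All figures are assumed textbook approximations, not values from this
book.

```latex
% Thin lens relation: object distance d_o, image distance d_i, focal length f.
\[
  \frac{1}{f} \;=\; \frac{1}{d_o} + \frac{1}{d_i}
\]
% Assumed image distance (optics to retina): d_i ≈ 17 mm = 0.017 m.
% Distant object  (d_o -> infinity): required power ≈ 1/0.017          ≈ 59 dioptres
% Object at 25 cm (d_o = 0.25 m):    required power ≈ 1/0.25 + 1/0.017 ≈ 63 dioptres
% The extra ~4 dioptres are supplied by the lens becoming more rounded
% (accommodation).
```

On this rough sketch, accommodation only needs to add a few dioptres of extra
focusing power to bring near objects into focus on the retina.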
### Colour Vision In Animals
As mentioned before, the retina has _two_ different kinds of cells that are
stimulated by light - **rods** and **cones**. In humans and higher primates like baboons and
gorillas the rods function in dim light and do not perceive colour,
while the cones are stimulated by bright light and perceive details and
colour.
Other mammals have very few cones in their retinas and it is believed
that they see no or only a limited range of colour. It is, of course,
difficult to find out exactly what animals do see. It is thought that
deer, rats, and rabbits and nocturnal animals like the cat are
colour-blind, and dogs probably see green and blue. Some fish and most
birds seem to have better colour vision than humans and they use colours,
often very vivid ones, for recognizing each other as well as for
courtship and protection.
### Binocular Vision
Animals like cats that hunt have eyes placed on the front of the head in
such a way that both eyes see the same wide area but from slightly
different angles (see diagram 15.7). This is called **binocular vision**.
Its main advantage is that it enables the animals to estimate the
distance to the prey so they can chase it and pounce accurately.
![](Anatomy_and_physiology_of_animals_Well_developed_binocular_vision.jpg "Anatomy_and_physiology_of_animals_Well_developed_binocular_vision.jpg")
Diagram 15.7 - Well developed binocular vision in predator animals like
the cat
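As a rough illustration that is not in the original text, the distance
estimate from binocular vision reduces, for small angles, to simple
triangulation: the distance is roughly the separation of the eyes divided by
the difference between the two eyes' viewing angles. The figures below are
assumed purely for the example.

```latex
% Small-angle triangulation (assumed illustrative values):
% eye separation b ≈ 6 cm; difference in viewing angle (disparity) θ ≈ 0.01 rad.
\[
  d \;\approx\; \frac{b}{\theta} \;=\; \frac{0.06\ \text{m}}{0.01\ \text{rad}} \;=\; 6\ \text{m}
\]
% The smaller the angular difference between the two views, the farther away
% the object must be; this is the cue a hunting cat uses to judge its pounce.
```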
In contrast plant-eating prey animals like the rabbit and deer need to
have a wide panoramic view so they can see predators approaching. They
therefore have eyes placed on the side of the head, each with its own
field of vision (see diagram 15.8). They have only a very small area of
binocular vision in front of the head but are extremely sensitive to
movement.
![](Anatomy_and_physiology_of_animals_Panaoramic_monocular_vision.jpg "Anatomy_and_physiology_of_animals_Panaoramic_monocular_vision.jpg")
Diagram 15.8 - Panoramic monocular vision in prey animals like the
rabbit
## Hearing
Animals use the sense of hearing for many different purposes. It is used
to sense danger and enemies, to detect prey, to identify prospective
mates and to communicate within social groups. Some animals (e.g. most
bats and dolphins) use sound to "see" by echolocation. By sending out a
cry and interpreting the echo, they sense obstacles or potential prey.
### Structure of the Ear
Most of the ear, the organ of hearing, is hidden from view within the
bony skull. It consists of three main regions: the **outer ear, the
middle ear** and the **inner ear** (see diagram 15.9).
![](Anatomy_and_physiology_of_animals_The_ear.jpg "Anatomy_and_physiology_of_animals_The_ear.jpg")
Diagram 15.9 - The ear
The **outer ear** consists of an **ear canal** leading inwards to a thin
membrane known as the **eardrum** or **tympanic membrane** that stretches
across the canal. Many animals have an external ear flap or **pinna** to
collect and funnel the sound into the ear canal. The pinnae (plural of
pinna) usually face forwards on the head but many animals can swivel
them towards the source of the sound.
In dogs the ear canal is long and bent and often traps wax or provides
an ideal habitat for mites, yeast and bacteria.
The **middle ear** consists of a cavity in the skull that is connected
to the **pharynx** (throat) by a long narrow tube called the
**Eustachian tube**. This links the middle ear to the outside air so
that the air pressure on both sides of the eardrum can be kept the same.
Everyone knows the uncomfortable feeling (and affected hearing) that
occurs when you drive down a steep hill and the unequal air pressures on
the two sides of the eardrum cause it to distort. The discomfort is
relieved when you swallow because the Eustachian tubes open and the
pressure on either side equalises.
Within the cavity of the middle ear are three of the smallest bones in
the body, the **auditory ossicles**. They are known as the hammer, the
anvil and the stirrup because of their resemblance to the shape of these
objects. These tiny bones articulate (move against) each other and
transfer the vibrations of the eardrum to the membrane covering the
opening to the inner ear.
The **inner ear** is a complicated series of fluid-filled tubes embedded
in the bone of the skull. It consists of two main parts. These are the
**cochlea** where sound waves are converted to nerve impulses and the
**vestibular organ** that is associated with the sense of balance and
has no role in hearing (see later).
The **cochlea** looks rather like a coiled up snail shell. Within it
there are specialised cells with fine hairs on their surface that
respond to the movement of the fluid within the cochlea by producing
nervous impulses that travel to the brain along the **auditory nerve**.
### How The Ear Hears
Sound waves can be thought of as vibrations in the air. They are
collected by the ear pinna and pass down the ear canal where they cause
the eardrum to vibrate. (An interesting fact is that when you are
listening to someone speaking your eardrum vibrates at exactly the same
rate as the vocal cords of the person speaking to you).
The vibration of the eardrum sets the three tiny bones in the middle ear
moving against each other so that the vibration is transferred to the
membrane covering the opening to the inner ear. As well as transferring
the vibration, the tiny ear bones (the hammer, anvil and stirrup, named for
their shapes) also amplify it. In the human ear this amplification is about 20 times while
in desert-dwelling animals like the kangaroo rat it is 100 times. This
acute hearing warns them of the approach of predators like owls and
snakes, even in the dark.
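For readers who want to see where a figure of roughly 20 times can come from,
here is an approximate, illustrative calculation that is not in the original
text; the areas and lever ratio are assumed textbook values for the human ear.

```latex
% Assumed approximate human values (for illustration only):
% eardrum area ≈ 55 mm², oval window area ≈ 3.2 mm², ossicular lever ratio ≈ 1.3.
\[
  \text{amplification} \;\approx\;
  \frac{A_{\text{eardrum}}}{A_{\text{oval window}}} \times \text{lever ratio}
  \;\approx\; \frac{55}{3.2} \times 1.3 \;\approx\; 22
\]
% Most of the gain comes from funnelling the eardrum's vibration onto the much
% smaller oval window; the ossicles' lever action adds the rest.
```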
The vibration causes waves in the fluid in the inner ear that pass down
the cochlea. These waves stimulate the tiny hair cells to produce nerve
impulses that travel via the auditory nerve to the cerebral cortex of
the brain where they are interpreted as sound.
To summarise: The route sound waves take as they pass through the ear
is: **External ear \| tympanic membrane \| ear ossicles \| inner ear
\| cochlea \| hair cells**
The hair cells generate a nerve impulse that travels down the auditory
nerve to the brain.
Remember that sound waves do not pass along the Eustachian tube. Its
function is to equalise the air pressure on either side of the tympanic
membrane.
## Balance
The **vestibular organ** of the inner ear helps an animal maintain its
posture and keep balanced by monitoring the movement and position of the
head. It consists of two structures - the **semicircular canals** and
the **otolith organs**.
The **semicircular canals** (see diagram 15.10) respond to movement of
the body. They tell an animal whether it is moving up or down, left or
right. They consist of three canals set in three different planes at
right angles to each other so that movement in any direction can be
registered. The canals contain fluid and sense cells with fine hairs
that project into the fluid. When the head moves the fluid swirls in the
canals and stimulates the hair cells. These send nerve impulses along
the **vestibular nerve** to the **cerebellum**.
Note that the semicircular canals register acceleration and deceleration
as well as changes in direction but do not respond to movement that is
at a constant speed.
The **otolith organs** are sometimes known as gravity receptors. They
tell you if your head is tilted or if you are standing on your head.
They consist of bulges at the base of the semicircular canals that
contain hair cells that are covered by a mass of jelly containing tiny
pieces of chalk called **otoliths** (see diagram 15.10). When the head
is tilted, or moved suddenly, the otoliths pull on the hair cells, which
produce a nerve impulse. This travels down the **vestibular nerve** to
the **cerebellum**. By coordinating the nerve impulses from the
semicircular canals and otolith organs the cerebellum helps the animal
keep its balance.
![](Anatomy_and_physiology_of_animals_Olith_organs.jpg "Anatomy_and_physiology_of_animals_Olith_organs.jpg")
Diagram 15.10 - Otolith Organs
## Summary
- Receptors for touch, pressure, pain and temperature are found in the
skin. Receptors in the muscles, tendons and joints inform the brain
of limb position.
- The **olfactory organ** in the nose responds to chemicals in the air
i.e. smell.
- **Taste buds** on the tongue respond to a limited range of chemicals
dissolved in saliva.
- The eyes are the organs of sight. Spherical **eyeballs** situated in
orbits in the skull have walls composed of 3 layers.
- The tough outer **sclera** protects and holds the shape of the
eyeball. At the front it becomes visible as the white of the eye and
the transparent **cornea** that allows light to enter the eye.
- The middle layer is the **choroid**. In most animals it absorbs
stray light rays but in nocturnal animals it is reflective to
conserve light. At the front of the eye it becomes the **iris** with
muscles to control the size of the **pupil** and hence the amount of
light entering the eye.
- The inner layer is the **retina** containing the light receptor
cells: the **rods** for black and white vision in dim light and the
**cones** for colour and detailed vision. Nerve impulses generated
by these cells leave the eye for the brain via the **optic nerve**.
- The **lens** (with the cornea) helps focus the light rays on the
retina. Muscles alter the shape of the lens to allow near and far
objects to be focussed.
- **Aqueous humour** fills the space immediately behind the cornea and
keeps it in shape and **vitreous humour**, a transparent jelly-like
substance, fills the space behind the lens allowing light rays to
pass through to the retina.
- The ear is the organ of hearing and balance.
- The external **pinna** helps funnel sound waves into the ear and
locate the direction of the sound. The sound waves travel down the
external **ear canal** to the **eardrum** or **tympanic membrane**
causing it to vibrate. This vibration is transferred to the
**auditory ossicles** of the middle ear which themselves transfer it
to the inner ear. Here receptors in the **cochlea** respond by
generating nerve impulses that travel to the brain via the
**auditory** (acoustic) nerve.
- The **Eustachian tube** connects the middle ear with the pharynx to
equalise air pressure on either side of the tympanic membrane.
- The **vestibular organ** of the inner ear is concerned with
maintaining balance and posture. It consists of the **semicircular
canals** and the **otolith organs**.
## Worksheet
Senses Worksheet
## Test Yourself
1\. Where are the organs that sense pain, pressure and temperature
found?
2\. Which sense organ responds to chemicals in the air?
3\. Match the words in the list below with the following descriptions.
: optic nerve \| choroid \| cornea \| aqueous humour \| retina \| cones
\| iris \| vitreous humour \| sclera \| lens
: a\) Focuses light rays on the retina.
: b\) Respond to colour and detail.
: c\) Outer coat of the eyeball.
: d\) Carries nerve impulses from the retina to the brain.
: e\) The chamber behind the lens is filled with this.
: f\) This layer of the eyeball reflects light in nocturnal animals
like the cat.
: g\) This is the transparent window at the front of the eye.
: h\) This constricts in bright light to reduce the amount of light
entering the eye.
: i\) The light rays are focused on here by the lens and cornea.
: j\) The chamber in front of the lens is filled with this.
4\. Add the following labels to the diagram of the ear below.
: pinna \| Eustachian tube \| cochlea \| tympanic membrane \| external
ear canal \| ear ossicles \| semicircular canals
![](Unlabeled_diagram_of_the_dog's_ear.jpg "Unlabeled_diagram_of_the_dog's_ear.jpg")
5\. What is the role of the Eustachian tube?
6\. What do the ear ossicles do?
7\. What is the role of the semicircular canals?
/Senses Test Yourself
Answers/
## Websites
- <http://en.wikipedia.org/wiki/Sense> Wikipedia. The old faithful.
You can explore here to your heart's desire. Try 'eye', 'ear',
'taste' etc. but also 'equilibrioception', and 'echolocation'.
- <http://www.bbc.co.uk/science/humanbody/body/factfiles/smell/smell_ani_f5.swf>
BBC Science and Nature. BBC animation of (human) olfactory organ and
smelling.
- <http://www.bbc.co.uk/science/humanbody/body/factfiles/taste/taste_ani_f5.swf>
BBC Science. BBC animation of (human) taste buds and tasting.
- <http://web.archive.org/web/20071121213719/http://www.bishopstopford.com/faculties/science/arthur/Eye%20Drag%20%26%20Drop.swf>
Eye Diagram. A diagram of the eye to label and test your knowledge.
- <http://www.bbc.co.uk/science/humanbody/body/factfiles/hearing/hearing_animation.shtml>
BBC on Hearing. BBC animation of hearing. Well worth looking at.
- <http://www.wisc-online.com/objects/index_tj.asp?objid=AP1502> Ear
Animation. Another great animation of the ear and hearing.
- <http://www.bbc.co.uk/science/humanbody/body/factfiles/balance/balance_ani_f5.swf>
BBC Balance Animation. An animation of the action of the otolith
organ (called the macula in this animation).
## Glossary
- Link to Glossary
# Anatomy and Physiology of Animals/Endocrine System
Original image by Denis Gustavo, CC BY.
## Objectives
After completing this section, you should know:
- The characteristics of endocrine glands and hormones
- The position of the main endocrine glands in the body
- The relationship between the pituitary gland and the hypothalamus
- The main hormones produced by the two parts of the pituitary gland
and their effects on the body
- The main hormones produced by the pineal, thyroid, parathyroid and
adrenal glands, the pancreas, ovary and testicle in regard to their
effects on the body
- What is meant by homeostasis and feedback control
- The homeostatic mechanisms that allow an animal to control its body
temperature, water balance, blood volume and acid/base balance
## The Endocrine System
In order to survive, animals must constantly adapt to changes in the
environment. The **nervous** and **endocrine systems** both work
together to bring about this adaptation. In general the nervous system
responds rapidly to short-term changes by sending electrical impulses
along nerves and the endocrine system brings about longer-term
adaptations by sending out chemical messengers called hormones into the
blood stream. In general, the endocrine system is a set of structures of
diverse form and origin that are capable of internal secretion, i.e. the
release of biologically active substances (hormones) directly into the
bloodstream.
For example, think about what happens when a male and female cat meet
under your bedroom window at night. The initial response of both cats
may include spitting, fighting and spine tingling yowling - all brought
about by the nervous system. Fear and stress then activates the adrenal
glands to secrete the hormone **adrenaline** which increases the heart
and respiratory rates. If mating occurs, other hormones stimulate the
release of ova from the ovary of the female and a range of different
hormones maintains pregnancy, delivery of the kittens and lactation.
**Evolution of endocrine systems**
The most primitive endocrine systems seem to be those of the
neurosecretory type, in which the nervous system either secretes
neurohormones (hormones that act on, or are secreted by, nervous tissue)
directly into the circulation or stores them in neurohemal organs
(neurons whose endings directly contact blood vessels, allowing
neurohormones to be secreted into the circulation), from which they are
released in large amounts as needed. True endocrine glands probably
evolved later in the evolutionary history of the animal kingdom as
separate, hormone-secreting structures. Some of the cells of these
endocrine glands are derived from nerve cells that migrated during the
process of evolution from the nervous system to various locations in the
body. These independent endocrine glands have been described only in
arthropods (where neurohormones are still the dominant type of endocrine
messenger) and in vertebrates (where they are best developed).
It has become obvious that many of the hormones previously ascribed only
to vertebrates are secreted by invertebrates as well (for example, the
pancreatic hormone insulin). Likewise, many invertebrate hormones have
been discovered in the tissues of vertebrates, including those of
humans. Some of these molecules are even synthesized and employed as
chemical regulators, similar to hormones in higher animals, by
unicellular animals and plants. Thus, the history of endocrinologic
regulators has ancient beginnings, and the major changes that took place
during evolution would seem to centre around the uses to which these
molecules were put.
**Vertebrate endocrine systems**
Vertebrates (subphylum Vertebrata) are separable into at least seven
discrete classes that represent evolutionary groupings of related
animals with common features. The class Agnatha, or the jawless fishes,
is the most primitive group. Class Chondrichthyes and class Osteichthyes
are jawed fishes that had their origins, millions of years ago, with the
Agnatha. The Chondrichthyes are the cartilaginous fishes, such as sharks
and rays, while the Osteichthyes are the bony fishes. Familiar bony
fishes such as goldfish, trout, and bass are members of the most
advanced subgroup of bony fishes, the teleosts. Another group of bony fishes,
the lobe-finned fishes, developed lungs and first invaded land, and from them
evolved the class Amphibia, which includes frogs and toads. The amphibians gave rise to the class
Reptilia, which became more adapted to land and diverged along several
evolutionary lines. Among the groups descending from the primitive
reptiles were turtles, dinosaurs, crocodilians (alligators, crocodiles),
snakes, and lizards. Birds (class Aves) and mammals (class Mammalia)
later evolved from separate groups of reptiles. Amphibians, reptiles,
birds, and mammals, collectively, are referred to as the tetrapod
(four-footed) vertebrates.
The human endocrine system is the product of millions of years of
evolution, and it should not be surprising that the endocrine glands and
associated hormones of the human endocrine system have their
counterparts in the endocrine systems of more primitive vertebrates. By
examining these animals it is possible to document the emergence of the
hypothalamic-pituitary-target organ axis, as well as many other
endocrine glands, during the evolution of fishes that preceded the
origin of terrestrial vertebrates.
**The hypothalamic-pituitary-target organ axis**
The hypothalamic-pituitary-target organ axes of all vertebrates are
similar. The hypothalamic neurosecretory system is poorly developed in
the most primitive of the living Agnatha vertebrates, the hagfishes, but
all of the basic rudiments are present in the closely related lampreys.
In most of the more advanced jawed fishes there are several
well-developed neurosecretory centres (nuclei) in the hypothalamus that
produce neurohormones. These centers become more clearly defined and
increase in the number of distinct nuclei as amphibians and reptiles are
examined, and they are as extensive in birds as they are in mammals.
Some of the same neurohormones that are found in humans have been
identified in nonmammals, and these neurohormones produce similar
effects on cells of the pituitary as described above for mammals.
Two or more neurohormonal peptides with chemical and biologic properties
similar to those of mammalian oxytocin and vasopressin are secreted by
the vertebrate hypothalamus (except in Agnatha fishes, which produce
only one). The oxytocin-like peptide is usually isotocin (most fishes)
or mesotocin (amphibians, reptiles, and birds). The second peptide is
arginine vasotocin, which is found in all nonmammalian vertebrates as
well as in fetal mammals. Chemically, vasotocin is a hybrid of oxytocin
and vasopressin, and it appears to have the biologic properties of both
oxytocin (which stimulates contraction of muscles of the reproductive
tract, thus playing a role in egg-laying or birth) and vasopressin (with
either diuretic or antidiuretic properties). The functions of the
oxytocin-like substances in non-mammals are unknown.
The pituitary glands of all vertebrates produce essentially the same
tropic hormones: thyrotropin (TSH), corticotropin (ACTH), melanotropin
(MSH), prolactin (PRL), growth hormone (GH), and one or two
gonadotropins (usually FSH-like and LH-like hormones). The production
and release of these tropic hormones are controlled by neurohormones
from the hypothalamus. The pituitary cells of teleost fishes, however, are
innervated directly. Thus, these fishes may rely on neurohormones as
well as neurotransmitters for stimulating or inhibiting the release of
tropic hormones.
Among the target organs that constitute the
hypothalamic-pituitary-target organ axis are the thyroid, the adrenal
glands, and the gonads. Their individual roles are discussed below.
**The thyroid axis**
Thyrotropin secreted by the pituitary stimulates the thyroid gland to
release thyroid hormones, which help to regulate development, growth,
metabolism, and reproduction. In humans, these thyroid hormones are
known as triiodothyronine (T3) and thyroxine (T4). The evolution of the
thyroid gland is traceable in the evolutionary development of
invertebrates to vertebrates. The thyroid gland evolved from an
iodide-trapping, glycoprotein-secreting gland of the protochordates (all
nonvertebrate members of the phylum Chordata). The ability of many
invertebrates to concentrate iodide, an important ingredient in thyroid
hormones, occurs generally over the surface of the body. In
protochordates, this capacity to bind iodide to a glycoprotein and
produce thyroid hormones became specialized in the endostyle, a gland
located in the pharyngeal region of the head. When these iodinated
proteins are swallowed and broken down by enzymes, the iodinated amino
acids known as thyroid hormones are released. Larvae of primitive
vertebrate lampreys also have an endostyle like that of the
protochordates. When a lamprey larva undergoes metamorphosis into an
adult lamprey, the endostyle breaks into fragments. The resulting clumps
of endostyle cells differentiate into the separate follicles of the
thyroid gland. Thyroid hormones actually direct metamorphosis in the
larvae of lampreys, bony fishes, and amphibians. Thyroids of fishes
consist of scattered follicles in the pharyngeal region. In tetrapods
and a few fishes, the thyroid becomes encapsulated by a layer of
connective tissue.
**The adrenal axis**
The adrenal axes in mammals and in nonmammals are not constructed along
the same lines. In mammals the adrenal cortex is a separate structure
that surrounds the internal adrenal medulla; the adrenal gland is
located atop the kidneys. Because the cells of the adrenal cortex and
adrenal medulla do not form separate structures in nonmammals as they do
in mammals, they are often referred to in different terms; the cells
that correspond to the adrenal cortex in mammals are called interrenal
cells, and the cells that correspond to the adrenal medulla are called
chromaffin cells. In primitive nonmammals the adrenal glands are
sometimes called interrenal glands.
In fishes the interrenal and chromaffin cells often are embedded in the
kidneys, whereas in amphibians they are distributed diffusely along the
surface of the kidneys. Reptiles and birds have discrete adrenal glands,
but the anatomical relationship is such that often the "cortex" and the
"medulla" are not distinct units. Under the influence of pituitary
adrenocorticotropin hormone, the interrenal cells produce steroids
(usually corticosterone in tetrapods and cortisol in fishes) that
influence sodium balance, water balance, and metabolism.
**The gonadal axis**
Gonadotropins secreted by the pituitary are basically LH-like and/or
FSH-like in their actions on vertebrate gonads. In general, the FSH-like
hormones promote development of eggs and sperm and the LH-like hormones
cause ovulation and sperm release; both types of gonadotropins stimulate
the secretion of the steroid hormones (androgens, estrogens, and, in
some cases, progesterone) from the gonads. These steroids produce
effects similar to those described for humans. For example, progesterone
is essential for normal gestation in many fishes, amphibians, and
reptiles in which the young develop in the reproductive tract of the
mother and are delivered live. Androgens (sometimes testosterone, but
often other steroids are more important) and estrogens (usually
estradiol) influence male and female characteristics and behaviour.
**Control of pigmentation**
Melanotropin (melanocyte-stimulating hormone, or MSH) secreted by the
pituitary regulates the star-shaped cells that contain large amounts of
the dark pigment melanin (melanophores), especially in the skin of
amphibians as well as in some fishes and reptiles. Apparently, light
reflected from the surface stimulates photoreceptors, which send
information to the brain and in turn to the hypothalamus. Pituitary
melanotropin then causes the pigment in the melanophores to disperse and
the skin to darken, sometimes quite dramatically. By releasing more or
less melanotropin, an animal is able to adapt its colouring to its
background.
**Growth hormone and prolactin**
The functions of growth hormone and prolactin secreted by the pituitary
overlap considerably, although prolactin usually regulates water and
salt balance, whereas growth hormone primarily influences protein
metabolism and hence growth. Prolactin allows migratory fishes such as
salmon to adapt from salt water to fresh water. In amphibians, prolactin
has been described as a larval growth hormone, and it can also prevent
metamorphosis of the larva into the adult. The water-seeking behaviour
(so-called water drive) of adult amphibians often observed prior to
breeding in ponds is also controlled by prolactin. The production of a
protein-rich secretion by the skin of the discus fish (called "discus
milk") that is used to nourish young offspring is caused by a
prolactin-like hormone. Similarly, prolactin stimulates secretions from
the crop sac of pigeons ("pigeon" or "crop" milk), which are fed to
newly hatched young. This action is reminiscent of prolactin's actions
on the mammary gland of nursing mammals. Prolactin also appears to be
involved in the differentiation and function of many sex accessory
structures in nonmammals, and in the stimulation of the mammalian
prostate gland. For example, prolactin stimulates cloacal glands
responsible for special reproductive secretions. Prolactin also
influences external sexual characteristics such as nuptial pads (for
clasping the female) and the height of the tail in male salamanders.
**Other vertebrate endocrine glands**
**The pancreas**
The pancreas in nonmammals is an endocrine gland that secretes insulin,
glucagon, and somatostatin. Pancreatic polypeptide has been identified
in birds and may occur in other groups as well. Insulin lowers blood
sugar (hypoglycemia) in most vertebrates, although mammalian insulin is
rather ineffective in reptiles and birds. Glucagon is a hyperglycemic
hormone (it increases the level of sugar in the blood).
In primitive fishes the cells responsible for secreting the pancreatic
hormones are scattered within the wall of the intestine. There is a
trend toward progressive clumping of cells in more evolutionarily
advanced fishes, and in a few species the endocrine tissue forms only
one or a few large islets. As a rule, most fishes lack a discrete
pancreas, but all tetrapods have a fully formed exocrine and endocrine
pancreas. The endocrine cells of all tetrapods are organized into
distinct islets as described for humans, although the abundance of the
different cell types often varies. For example, in reptiles and birds
there is a predominance of glucagon-secreting cells and relatively few
insulin-secreting cells.
**Calcium-regulating hormones**
Fishes have no parathyroid glands: these glands first appear in
amphibians. Although the embryological origin of parathyroid glands of
tetrapods is well known, their evolutionary origin is not. Parathyroid
hormone raises blood calcium levels (hypercalcemia) in tetrapods. The
absence in most fishes of cellular bone, which is the principal target
for parathyroid hormone in tetrapods, is reflected by the absence of
parathyroid glands.
Fishes, amphibians, reptiles, and birds have paired pharyngeal
ultimobranchial glands that secrete the hypocalcemic hormone calcitonin.
The corpuscles of Stannius, unique glandular islets found only in the
kidneys of bony fishes, secrete a peptide called hypocalcin. Fish
calcitonins differ somewhat from the mammalian peptide hormone of the
same name, and fish calcitonins have proved to be more potent and have a
longer-lasting action in humans than human calcitonin itself.
Consequently, synthetic fish calcitonin has been used to treat humans
suffering from various disorders of bone, including Paget's disease. The
secretory cells of the ultimobranchial glands are derived from cells
that migrated from the embryonic nervous system. During the development
of a mammalian fetus, the ultimobranchial gland becomes incorporated
into the developing thyroid gland as the "C cells" or "parafollicular
cells."
**Gastrointestinal hormones**
Little research has been done on gastrointestinal hormones in
nonmammals, but there is good evidence for a gastrin-like mechanism that
controls the secretion of stomach acids. Peptides similar to
cholecystokinin are also present and can stimulate contractions of the
gall bladder. The gall bladders of primitive fishes contract when
treated with mammalian cholecystokinin.
**Other mammalian-like endocrine systems**
**The renin-angiotensin system**
The renin-angiotensin system in mammals is represented in nonmammals by
the juxtaglomerular cells that secrete renin associated with the kidney.
The macula densa that functions as a detector of sodium levels within
the kidney tubules of tetrapods, however, has not been found in fishes.
**The pineal complex**
In fishes, amphibians, and reptiles, the pineal complex is better
developed than in mammals. The nonmammalian pineal functions as both a
photoreceptor organ and an endocrine source for melatonin. Effects of
light on reproduction in fishes and tetrapods are mediated at least in
part through the pineal, and it has been implicated in a number of daily
and seasonal biorhythmic phenomena.
**Prostaglandins**
Many tissues of nonmammals produce prostaglandins that play important
roles in reproduction similar to those discussed for humans and other
mammals.
**The liver**
As in mammals, the liver of several nonmammalian species has been shown
to produce somatomedin-like growth factors in response to stimulation by
growth hormone. Similarly, there is evidence that prolactin stimulates
the production of a related growth factor, which synergizes (cooperates)
with prolactin on targets such as the pigeon crop sac.
**Unique endocrine glands in fishes**
In addition to the corpuscles of Stannius and the ultimobranchial
glands, most fishes have a unique neurosecretory neurohemal organ, the
urophysis, which is associated with the spinal cord at the base of the
tail. Although the functions of this caudal (rear) neurosecretory system
are not now understood, it is known to produce two peptides, urotensin I
and urotensin II. Urotensin I is chemically related to a family of
peptides that includes somatostatin; urotensin II is a member of the
family of peptides that includes mammalian corticotropin-releasing
hormone (CRH). There are no homologous structures to either the
corpuscles of Stannius or the urophysis in amphibians, reptiles, or
birds.
**Invertebrate endocrine systems**
Advances in the study of invertebrate endocrine systems have lagged
behind those in vertebrate endocrinology, largely due to the problems
associated with adapting investigative techniques that are appropriate
for large vertebrate animals to small invertebrates. It also is
difficult to maintain and study appropriately some invertebrates under
laboratory conditions. Nevertheless, knowledge about these systems is
accumulating rapidly.
All phyla in the animal kingdom that have a nervous system also possess
neurosecretory neurons. The results of studies on the distribution of
neurosecretory neurons and ordinary epithelial endocrine cells imply
that the neurohormones were the first hormonal regulators in animals.
Neurohemal organs appear first in the more advanced invertebrates (such
as mollusks and annelid worms), and endocrine epithelial glands occur
only in the most advanced phyla (primarily Arthropoda and Chordata).
Similarly, the peptide and steroid hormones found in vertebrates are
also present in the nervous and endocrine systems of many invertebrate
phyla. These hormones may perform similar functions in diverse animal
groups. With more emphasis being placed on research in invertebrate
systems, new neuropeptides are being discovered initially in these
animals, and subsequently in vertebrates.
The endocrine systems of some animal phyla have been studied in detail,
but the endocrine systems of only a few species are well known. The
following discussion summarizes the endocrine systems of five
invertebrate phyla and the two invertebrate subphyla of the phylum
Chordata, a phylum that also includes Vertebrata, a subphylum to which
the backboned animals belong.
## Endocrine Glands And Hormones
Hormones are chemicals that are secreted by **endocrine glands**. Unlike
exocrine glands (see chapter 5), endocrine glands have no ducts, but
release their secretions directly into the blood system, which carries
them throughout the body. However, hormones only affect the specific
**target organs** that recognize them. For example, although it is
carried to virtually every cell in the body, **follicle stimulating
hormone** (FSH), released from the **anterior pituitary gland**, only
acts on the follicle cells of the ovaries causing them to develop.
A nerve impulse travels rapidly and produces an almost instantaneous
response but one that lasts only briefly. In contrast, hormones act more
slowly and their effects may be long lasting. Target cells respond to
minute quantities of hormones and the concentration in the blood is
always extremely low. However, target cells are sensitive to subtle
changes in hormone concentration and the endocrine system regulates
processes by changing the rate of hormone secretion.
The main endocrine glands in the body are the **pituitary, pineal,
thyroid, parathyroid**, and **adrenal glands**, the **pancreas,
ovaries** and **testes**. Their positions in the body are shown in
diagram 16.1.
![](Anatomy_and_physiology_of_animals_Main_endocrine_organs_of_the_body.jpg "Anatomy_and_physiology_of_animals_Main_endocrine_organs_of_the_body.jpg")
Diagram 16.1 - The main endocrine organs of the body
## The Pituitary Gland And Hypothalamus
The **pituitary gland** is a pea-sized structure that is attached by a
stalk to the underside of the cerebrum of the brain (see diagram 16.2).
It is often called the "master" endocrine gland because it controls many
of the other endocrine glands in the body. However, we now know that the
pituitary gland is itself controlled by the **hypothalamus**. This small
but vital region of the brain lies just above the pituitary and provides
the link between the nervous and endocrine systems. It controls the
**autonomic nervous system**, produces a range of hormones and regulates
the secretion of many others from the pituitary gland (see Chapter 7 for
more information on the hypothalamus).
The pituitary gland is divided into two parts with different functions -
the **anterior** and **posterior pituitary** (see diagram 16.3).
![](Anatomy_and_physiology_of_animals_Position_of_the_pituitary_gland_and_hypothalamus.jpg "Anatomy_and_physiology_of_animals_Position_of_the_pituitary_gland_and_hypothalamus.jpg")
Diagram 16.2 - The position of the pituitary gland and hypothalamus
![](Anterior_and_posterior_pituitary.jpg "Anterior_and_posterior_pituitary.jpg")
Diagram 16.3 - The anterior and posterior pituitary
The **anterior pituitary gland** secretes hormones that regulate a wide
range of activities in the body. These include:
: 1\. **Growth hormone** that stimulates body growth.
: 2\. **Prolactin** that initiates milk production.
: 3\. **Follicle stimulating hormone (FSH**) that stimulates the
development of the **follicles** of the ovaries. These then secrete
**oestrogen** (see chapter 6).
: 4\. **Melanocyte stimulating hormone (MSH**) that causes darkening
    of the skin by stimulating the production of melanin.
: 5\. **Luteinizing hormone (LH**) that stimulates ovulation and the
    production of progesterone and testosterone.
The **posterior pituitary gland** secretes:

: 1\. **Antidiuretic hormone (ADH)**, which regulates water loss and
    increases blood pressure.
: 2\. **Oxytocin**, which stimulates milk \"let down\".
## The Pineal Gland
The **pineal gland** is found deep within the brain (see diagram 16.4).
It is sometimes known as the 'third eye" as it responds to light and day
length. It produces the hormone **melatonin**, which influences the
development of sexual maturity and the seasonality of breeding and
hibernation. **Bright light inhibits melatonin secretion.** The low levels of
melatonin produced in bright light are associated with wakefulness and
increased fertility, while the high levels produced during long periods of dim
light make an animal tired and lethargic and reduce its fertility.
![](Anatomy_and_physiology_of_animals_Pineal_gland.jpg "Anatomy_and_physiology_of_animals_Pineal_gland.jpg")
Diagram 16.4 - The pineal gland
## The Thyroid Gland
The **thyroid gland** is situated in the neck, just in front of the
windpipe or trachea (see diagram 16.5). It produces the hormone
**thyroxine**, which influences the rate of growth and development of
young animals. In mature animals it increases the rate of chemical
reactions in the body.
Thyroxine consists of 60% **iodine** and too little in the diet can
cause **goitre**, an enlargement of the thyroid gland. Many inland soils
in New Zealand contain almost no iodine so goitre can be common in stock
when iodine supplements are not given. To add to the problem, chemicals
called **goitrogens**, which occur naturally in plants of the **cabbage
family** such as kale, can also cause goitre even when there
is adequate iodine available.
![](Anatomy_and_physiology_of_animals_Thyroid_&_parathyroid_glands.jpg "Anatomy_and_physiology_of_animals_Thyroid_&_parathyroid_glands.jpg")
Diagram 16.5 - The thyroid and parathyroid glands
## The Parathyroid Glands
The **parathyroid glands** are also found in the neck just behind the
thyroid glands (see diagram 16.5). They produce the hormone
**parathormone** that regulates the amount of **calcium** in the blood
and influences the excretion of **phosphates** in the urine.
## The Adrenal Gland
The **adrenal glands** are situated on the cranial surface of the
kidneys (see diagram 16.6). There are two parts to this endocrine gland,
an outer **cortex** and an inner **medulla**.
![](Anatomy_and_physiology_of_animals_Adrenal_glands.jpg "Anatomy_and_physiology_of_animals_Adrenal_glands.jpg")
Diagram 16.6 - The adrenal glands
The **adrenal cortex** produces several hormones. These include:
: 1\. **Aldosterone** that regulates the concentration of **sodium and
potassium** in the blood by controlling the amounts that are
secreted or reabsorbed in the kidney tubules.
: 2\. **Cortisone** and **hydrocortisone** (cortisol) that have
complex effects on glucose, protein and fat metabolism. In general
they increase metabolism. They are also often administered to
animals to counteract allergies and for treating arthritic and
rheumatic conditions. However, prolonged use should be avoided if
possible as they can increase weight and reduce the ability to heal.
: 3\. **Male and female sex hormones** similar to those secreted by
the ovaries and testes.
The hormones secreted by the adrenal cortex also play a part in
"**general adaptation syndrome**" which occurs in situations of
prolonged stress.
The **adrenal medulla** secretes **adrenalin** (also called
**epinephrine**). Adrenalin is responsible for the so-called \"flight,
fright, fight\" response that prepares the animal for emergencies. Faced
with a perilous situation the animal needs to either fight or make a
rapid escape. To do either requires instant energy, particularly in the
skeletal muscles. Adrenaline increases the amount of blood reaching them
by causing their blood vessels to dilate and the heart to beat faster.
An increased rate of breathing increases the amount of oxygen in the
blood and glucose is released from the liver to provide the fuel for
energy production. Sweating increases to keep the muscles cool and the
pupils of the eye dilate so the animal has a wide field of view.
Functions like digestion and urine production that are not critical to
immediate survival slow down as blood vessels to these parts constrict.
Note that the effects of adrenalin are similar to those of the
sympathetic nervous system.
## The Pancreas
In most animals the **pancreas** is an oblong, pinkish organ that lies
in the first bend of the small intestine (see diagram 16.7). In rodents
and rabbits, however, it is spread thinly through the mesentery and is
sometimes difficult to see.
![](Anatomy_and_physiology_of_animals_The_pancreas.jpg "Anatomy_and_physiology_of_animals_The_pancreas.jpg")
Diagram 16.7 - The pancreas
Most of the pancreas acts as an **exocrine gland** producing digestive
enzymes that are secreted into the small intestine. The endocrine part
of the organ consists of small clusters of cells (called **Islets of
Langerhans**) that secrete the hormone **insulin**. This hormone
regulates the amount of **glucose** in the blood by increasing the rate
at which glucose is converted to glycogen in the liver and the movement
of glucose from the blood into cells.
In **diabetes mellitus** the pancreas produces insufficient insulin and
glucose levels in the blood can increase to a dangerous level. A major
symptom of this condition is glucose in the urine.
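To make the idea of hormonal regulation by feedback concrete, below is a
deliberately simplified Python sketch of a negative-feedback loop of the kind
insulin provides. It is purely illustrative: the constants, names and update
rule are assumptions for the example, not physiological values or anything
stated in the original text.

```python
# Minimal negative-feedback sketch (illustrative only; all values are arbitrary).
SET_POINT = 5.0       # target blood glucose level (arbitrary units)
SECRETION_GAIN = 0.5  # how strongly insulin secretion responds to excess glucose
CLEARANCE_RATE = 0.2  # how much glucose each unit of insulin removes per step


def simulate(glucose, steps=10):
    """Step a crude glucose/insulin feedback loop and print its trajectory."""
    for step in range(steps):
        # Pancreas: secrete insulin only when glucose is above the set point.
        insulin = max(0.0, SECRETION_GAIN * (glucose - SET_POINT))
        # Body cells and liver: insulin promotes glucose uptake and storage.
        glucose -= CLEARANCE_RATE * insulin
        print(f"step {step}: glucose = {glucose:.2f}, insulin = {insulin:.2f}")


simulate(glucose=9.0)  # start above the set point, e.g. after a meal
```

Because secretion rises only when glucose is above the set point, and falling
glucose in turn reduces secretion, the level settles back toward the set point
rather than swinging out of control; that self-correcting behaviour is the
essence of negative feedback.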
## The Ovaries
The ovary is a part of the reproductive system of all female vertebrates. Although
not vital to individual survival, the ovary is vital to perpetuation of
the species. The function of the ovary is to produce the female germ
cells or ova, and in some species to elaborate hormones that assist in
regulating the reproductive cycle.
The ovaries develop as bilateral structures in all vertebrates, but
adult asymmetry is found in certain species of all vertebrates from the
elasmobranchs to the mammals.
The ovary of all vertebrates functions in essentially the same manner.
However, ovarian histology of the various groups differs considerably.
Even such a fundamental element as the ovum exhibits differences in
various groups.
The mammalian ovary is attached to the dorsal body wall. The free
surface of the ovary is covered by a modified peritoneum called the
germinal epithelium. Just beneath the germinal epithelium is a layer of
fibrous connective tissue. Most of the rest of the ovary is made up of a
more cellular and more loosely arranged connective tissue (stroma) in
which are embedded the germinal, endocrine, vascular, and nervous
elements.
The most obvious ovarian structures are the follicles and the corpora
lutea. The smallest, or primary, follicle consists of an oocyte
surrounded by a layer of follicle (nurse) cells. Follicular growth
results from an increase in oocyte size, multiplication of the follicle
cells, and differentiation of the perifollicular stroma to form a
fibrocellular envelope called the theca interna. Finally, a fluid-filled
antrum develops in the granulosa layer, resulting in a vesicular
follicle.
The cells of the theca interna hypertrophy during follicular growth and
many capillaries invade the layer, thus forming the endocrine element
that is thought to secrete estrogen. The other known endocrine structure
is the corpus luteum, which is primarily the product of hypertrophy of
the granulosa cells remaining after the follicular wall ruptures to
release the ovum. Ingrowths of connective tissue from the theca interna
deliver capillaries to vascularize the hypertrophied follicle cells of
this new corpus luteum; progesterone is secreted here.
## The Testes
Sperm need temperatures between 2 and 10 degrees Centigrade lower than
the body temperature to develop. This is why the testes
are located in a bag of skin called the scrotal sacs (or scrotum) that
hangs below the body and where the evaporation of secretions from
special glands can further reduce the temperature. In many animals
(including humans) the testes descend into the scrotal sacs at birth but
in some animals they do not descend until sexual maturity and in others
they only descend temporarily during the breeding season. A mature
animal in which one or both testes have not descended is called a
cryptorchid and is usually infertile.
The problem of keeping sperm at a low enough temperature is even greater
in birds that have a higher body temperature than mammals. For this
reason birds' sperm are usually produced at night when the body
temperature is lower and the sperm themselves are more resistant to
heat.
The testes consist of a mass of coiled tubes (the seminiferous or sperm
producing tubules) in which the sperm are formed by meiosis (see diagram
13.4). Cells lying between the seminiferous tubules produce the male sex
hormone testosterone.
When the sperm are mature they accumulate in the collecting ducts and
then pass to the epididymis before moving to the sperm duct or vas
deferens. The two sperm ducts join the urethra just below the bladder;
the urethra then passes through the penis and transports both sperm and urine.
Ejaculation discharges the semen from the erect penis. It is brought
about by the contraction of the epididymis, vas deferens, prostate gland
and urethra.
## Summary
- **Hormones** are chemicals that are released into the blood by
**endocrine glands** i.e. Glands with no ducts. Hormones act on
specific **target organs** that recognize them.
- The main endocrine glands in the body are the **hypothalamus,
pituitary, pineal, thyroid, parathyroid** and **adrenal glands,**
the **pancreas, ovaries** and **testes**.
- The **hypothalamus** is situated under the **cerebrum** of the
brain. It produces or controls many of the hormones released by the
pituitary gland lying adjacent to it.
- The **pituitary gland** is divided into two parts: the **anterior
pituitary** and the **posterior pituitary**.
- The **anterior pituitary** produces:
:\* **Growth hormone** that stimulates body growth
:\* **Prolactin** that initiates milk production
:\* **Follicle stimulating hormone** (**FSH**) that stimulates the
development of **ova**
:\* **Luteinising hormone (LH**) that stimulates the development of the
**corpus luteum**
:\* Plus several other hormones
- The **posterior pituitary** releases:
:\* **Antidiuretic hormone** (ADH) that regulates **water loss** and
raises **blood pressure**
:\* **Oxytocin** that stimulates milk "let down".
- The **pineal gland** in the brain produces **melatonin** that
influences **sexual development** and **breeding cycles**.
- The **thyroid gland**, located in the neck, produces **thyroxine**, which
influences the **rate of growth** and **development** of young
animals. Thyroxine consists of 60% **iodine**. Lack of iodine leads
to **goitre**.
- The **parathyroid glands** situated adjacent to the thyroid glands
in the neck produce **parathormone** that regulates blood
**calcium** levels and the excretion of **phosphates**.
- The **adrenal gland** located adjacent to the kidneys is divided
into the outer **cortex** and the inner **medulla**.
- The **adrenal cortex** produces:
:\* **Aldosterone** that regulates the blood concentration of **sodium
and potassium**
:\* **Cortisone** and **hydro-cortisone** that affect **glucose,
protein** and **fat** metabolism
:\* Male and female **sex hormones**
- The **adrenal medulla** produces **adrenalin** responsible for the
**fight, flight, fright** response that prepares animals for
emergencies.
- The **pancreas** that lies in the first bend of the small intestine
produces **insulin** that regulates blood **glucose** levels.
- The **ovaries**, located in the lower abdomen, produce two important
  sex hormones:
:\* The **follicle cells** of the developing ova produce **estrogen**,
which controls the development of the **mammary glands** and prepares
the uterus for pregnancy.
:\* The **corpus luteum** that develops in the empty **follicle** after
ovulation produces **progesterone**. This hormone further prepares the
**uterus** for pregnancy and maintains the pregnancy.
- The **testes** produce **testosterone** that stimulates the
development of the **male reproductive system** and **sexual
characteristics**.
## Homeostasis and Feedback Control
Animals can only survive if the environment within their bodies and
their cells is kept constant and independent of the changing conditions
in the external environment. As mentioned in module 1.6, the process by
which this stability is maintained is called homeostasis. The body
achieves this stability by constantly monitoring the internal conditions
and if they deviate from the norm initiating processes that bring them
back to it. This mechanism is called feedback control. For example, to
maintain a constant body temperature the hypothalamus monitors the blood
temperature and initiates processes that increase or decrease heat
production by the body and loss from the skin so the optimum temperature
is always maintained. The processes involved in the control of body
temperature, water balance, blood loss and acid/base balance are
summarized below.
## Summary of Homeostatic Mechanisms
### 1. Temperature control
The biochemical and physiological processes in the cell are sensitive to
temperature. The optimum body temperature is about 37 C \[99 F\] for
mammals, and about 40 C \[104 F\] for birds. Biochemical processes in
the cells, particularly in muscles and the liver, produce heat. The heat
is distributed through the body by the blood and is lost mainly through
the skin surface. The production of this heat and its loss through the
skin is controlled by the hypothalamus in the brain which acts rather
like a thermostat on an electric heater.
\(a\) When the body temperature rises above the optimum, a decrease in
temperature is achieved by:
- Sweating and panting to increase heat loss by evaporation.
- Expansion of the blood vessels near the skin surface so heat is lost
to the air.
- Reducing muscle exertion to the minimum.
\(b\) When the body temperature falls below the optimum, an increase in
temperature can be achieved by:
- Moving to a heat source e.g. in the sun, out of the wind.
- Increasing muscular activity
- Shivering
- Making the hair stand on end by contraction of the hair erector
muscles or fluffing of the feathers so there is an insulating layer
of air around the body
- Constricting the blood vessels near the skin surface so heat loss to
the air is decreased
### 2. Water balance
The concentration of the body fluids remains relatively constant
irrespective of the diet or the quantity of water taken into the body by
the animal. Water is lost from the body by many routes (see module 1.6)
but the kidney is the main organ that influences the quantity that is
lost. Again it is the hypothalamus that monitors the concentration of
the blood and initiates the release of hormones from the posterior
pituitary gland. These act on the kidney tubules to influence the amount
of water (and sodium ions) absorbed from the fluid flowing along them.
\(a\) When the body fluids become too concentrated and the osmotic
pressure too high, water retention in the kidney tubules can be achieved
by:
- An increased production of anti-diuretic hormone (ADH) from the
posterior pituitary gland, which causes more water to be reabsorbed
from the kidney tubules.
- A decreased blood pressure in the glomerulus of the kidney results
in less fluid filtering through into the kidney tubules so less
urine is produced.
\(b\) When the body fluids become too dilute and the osmotic pressure
too low, water loss in the urine can be achieved by:
- A decrease in the secretion of ADH, so less water is reabsorbed from
the kidney tubules and more diluted urine is produced.
- An increase in the blood pressure in the glomerulus so more fluid
filters into the kidney tubule and more urine is produced.
- An increase in sweating or panting that also increases the amount of
water lost.
Another hormone, aldosterone, secreted by the cortex of the adrenal
gland, also affects water balance indirectly. It does this by increasing
the absorption of sodium ions (Na+) from the kidney tubules. This
increases water retention since it increases the osmotic pressure of the
fluids around the tubules and water therefore flows out of them by
osmosis.
### 3. Maintenance of blood volume after moderate blood loss
Loss of blood or body fluids leads to decreased blood volume and hence
decreased blood pressure. The result is that the blood system fails to
deliver enough oxygen and nutrients to the cells, which stop functioning
properly and may die. Cells of the brain are particularly vulnerable.
This condition is known as shock.
If blood loss is not extreme, various mechanisms come into play to
compensate and ensure permanent tissue damage does not occur. These
mechanisms include:
- Increased thirst and drinking increases blood volume.
- Blood vessels in the skin and kidneys constrict to reduce the total
volume of the blood system and hence retain blood pressure.
- Heart rate increases. This also increases blood pressure.
- Antidiuretic hormone (ADH) is released by the posterior pituitary
gland. This increases water re-absorption in the collecting ducts of
the kidney tubules so concentrated urine is produced and water loss
is reduced. This helps maintain blood volume.
- Loss of fluid causes an increase in osmotic pressure of the blood.
Proteins, mainly albumin, released into the blood by the liver
further increase the osmotic pressure causing fluid from the tissues
to be drawn into the blood by osmosis. This increases blood volume.
- Aldosterone, secreted by the adrenal cortex, increases the
absorption of sodium ions (Na+) and water from the kidney tubules.
This increases urine concentration and helps retain blood volume.
If blood or fluid loss is extreme and the blood volume falls by more
than 15-25%, the above mechanisms are unable to compensate and the
condition of the animal progressively deteriorates. The animal will die
unless a vet administers fluid or blood.
### 4. Acid/ base balance
Biochemical reactions within the body are very sensitive to even small
changes in acidity or alkalinity (i.e. pH) and any departure from the
narrow limits disrupts the functioning of the cells. It is therefore
important that the blood contains balanced quantities of acids and
bases.
The normal pH of blood is in the range 7.35 to 7.45 and there are a
number of mechanisms that operate to maintain the pH in this range.
Breathing is one of these mechanisms.
Much of the carbon dioxide produced by respiration in cells is carried
in the blood as carbonic acid. As the amount of carbon dioxide in the
blood increases the blood becomes more acidic and the pH decreases. This
is called acidosis and when severe can cause coma and death. On the
other hand, alkalosis (blood that is too alkaline) causes over
stimulation of the nervous system and when severe can lead to
convulsions and death.
\(a\) When vigorous activity generating large quantities of carbon
dioxide causes the blood to become too acidic, it can be counteracted in
two ways:
- By the rapid removal of carbon dioxide from the blood by deep,
panting breaths
- By the secretion of hydrogen ions (H+) into the urine by the kidney
  tubules.
\(b\) When over breathing or hyperventilation results in low levels of
carbon dioxide in the blood and the blood is too alkaline, various
mechanisms come into play to bring the pH back to within the normal
range. These include:
- A slower rate of breathing
- A reduction in the amount of hydrogen ions (H+) secreted into the
urine.
### Summary
Homeostasis is the maintenance of constant conditions within a cell or
animal's body despite changes in the external environment.
The body temperature of mammals and birds is maintained at an optimum
level by a variety of heat regulation mechanisms. These include:
- Seeking out warm areas,
- Adjusting activity levels,
- Dilating or constricting blood vessels on the body surface,
- Contraction of the erector muscles so hairs and feathers stand up to
form an insulating layer,
- Shivering,
- Sweating and panting in dogs.
Animals maintain water balance by:
- adjusting the level of antidiuretic hormone (ADH),
- adjusting the level of aldosterone,
- adjusting blood flow to the kidneys
- adjusting the amount of water lost through sweating or panting.
Animals maintain blood volume after moderate blood loss by:
- Drinking,
- Constriction of blood vessels in the skin and kidneys,
- increasing heart rate,
- secretion of anti-diuretic hormone
- secretion of aldosterone
- drawing fluid from the tissues into the blood by increasing the
osmotic pressure of the blood.
Animals maintain the acid/base balance or pH of the blood by:
- Adjusting the rate of breathing and hence the amount of CO2 removed
from the blood.
- Adjusting the secretion of hydrogen ions into the urine.
## Worksheet
Endocrine System
Worksheet
## Test Yourself
1\. What is Homeostasis?
2\. Give 2 examples of homeostasis
3\. List 3 ways in which animals keep their body temperature constant
when the weather is hot
4\. How does the kidney compensate when an animal is deprived of water
to drink?
5\. After moderate blood loss, several mechanisms come into play to
increase blood pressure and make up blood volume. 3 of these mechanisms
are:
6\. Describe how panting helps to reduce the acidity of the blood
/Test Yourself Answers/
## Websites
- <http://www.zerobio.com/drag_oa/endo.htm> A drag and drop hormone
and endocrine organ matching exercise.
- <http://en.wikipedia.org/wiki/Endocrine_system> Wikipedia. Much,
much more than you ever need to know about hormones and the
endocrine system but with a bit of discipline you can glean lots of
useful information from this site.
## Glossary
- Link to
Glossary
# Anatomy and Physiology of Animals/The Author
![](Ruth_-_small.jpg "Ruth_-_small.jpg"){width="300"}
Ruth Lawson is a zoologist who gained her first degree at Imperial
College, London University and her D.Phil from York University, UK.
After post graduate research on the tropical parasitic worm that causes
schistosomiasis, she emigrated to New Zealand where she spent 10 years
studying how hydatid disease spreads and can be controlled. With the
birth of her daughter, Kate, she started to teach at the Otago
Polytechnic, in Dunedin. Although human and animal anatomy and
physiology has been her main teaching focus, she retains a strong
interest in, and teaches courses on, parasitology, public health, animal
nutrition and pig husbandry. Ruth lives on the Otago Peninsula
overlooking the beautiful Otago Harbour where she races her Topper
sailing dinghy. She also enjoys tramping, skiing and gardening and has
meditated for many years.
# Anatomy and Physiology of Animals/Acknowledgements
Many thanks to Terry Marler (B.V.S.C.) for his guiding vision, wisdom,
experience and patience. His advice throughout the writing of this
WikiBook has been invaluable. I would also like to thank Bronwyn
Hegarty, for gently shepherding the project through the many hurdles
encountered and Leigh Blackall, also in the Education Development Unit
at the Otago Polytechnic, for helpful discussions and advice. Many, many
thanks also to Sunshine Blackall for her skills in formatting the
diagrams and designing the artwork accompanying each chapter. The high
quality of this work would not have been possible without the financial
assistance of the Otago Polytechnic CAPEX fund. Thanks are also due to
Jeanette O\'Fee, my Head of School, for encouragement throughout the
project and to Keith Allnatt and Jan Bedford for proofreading and
reviewing. Finally, I would like to thank Peter and Kate, who have patiently
suffered my \"unavailability\" as I tapped my evenings and weekends away
on the computer. *Ruth Lawson*
# Anatomy and Physiology of Animals/Table of contents
**Table of Contents for Print Version**
## Chapter 1 Chemicals
Page Number
14 Objectives
14 Elements and atoms
15 Compounds and molecules
16 Chemical reactions
16 Ionisation
16 Organic and inorganic compounds
17 Carbohydrates
18 Fats
19 Proteins
20 Summary
21 Test Yourself
21 Websites
22 Answers
## Chapter 2 Classification
23 Objectives
24 Naming and classifying animals
24 Naming animals
24 Classification of living organisms
25 The animal kingdom
26 The classification of vertebrates
27 Summary
28 Test yourself
28 Websites
29 Answers
## Chapter 3 The Cell
30 Objectives
30 The cell
32 The plasma membrane
32 How substances move across the plasma membrane
32 The cytoplasm
32 Diffusion
33 Osmosis
35 Active transport
37 Golgi apparatus
38 Lysosomes
38 Microfilaments and microtubules
38 The nucleus
38 Chromosomes
39 Cell division
40 Summary
41 Test yourself
42 Websites
43 Answers
## Chapter 4 Body Organisation
44 Objectives
44 The organisation of animal bodies
45 Epithelial tissues
47 Connective tissues
49 Muscle tissues
50 Nervous tissues
50 Vertebrate bodies
50 Body cavities
51 Organs
51 Generalised plan of the mammalian body
52 Body systems
53 Homeostasis
53 Directional terms
54 Summary
55 Test yourself
56 Websites
57 Answers
## Chapter 5 The Skin
58 Objectives
59 The skin
60 Skin structures made of keratin
60 Claws and nails
60 Horns and antlers
61 Hair
62 Feathers
63 Skin glands
64 The skin and sun
64 The skin and temperature regulation
66 Summary
66 Test yourself
67 Websites
67 Answers
## Chapter 6 The Skeleton
69 Objectives
70 The vertebral column
71 The skull
71 The ribs
72 The forelimb
73 The hindlimb
74 The girdles
74 Categories of bones
74 Bird skeletons
75 The structure of long bones
76 Compact bone
77 Spongy bone
77 Bone growth
77 Broken bones
78 Joints
78 Common names of joints
79 Locomotion
79 Summary
80 Test yourself
81 Websites
82 Answers
## Chapter 7 Muscles
83 Objectives
83 Muscles
83 Smooth muscle
84 Cardiac muscle
84 Skeletal muscle
84 Structure of a muscle
85 Summary
85 Test yourself
86 Websites
87 Answers
## Chapter 8 Cardiovascular System
**Blood**
89 Objectives
89 Plasma
90 Red blood cells
91 White blood cells
92 Platelets
92 Transport of oxygen
92 Carbon monoxide poisoning
92 Transport of carbon dioxide
92 Transport of other substances
93 Blood clotting
93 Serum and plasma
93 Anticoagulants
93 Haemolysis
94 Blood groups
94 Blood volume
94 Summary
95 Test Yourself
95 Websites
96 Answers
**The Heart**
96 Objectives
97 The heart
97 Valves
98 The heartbeat
98 Cardiac muscle
99 Control of the heartbeat
99 The coronary vessels
**Blood circulation**
Objectives
102 Blood circulation
103 Arteries
103 The pulse
103 Capillaries
104 The formation of tissue fluid and lymph
104 Veins
105 Regulation of blood flow
105 Oedema and fluid loss
105 The spleen
106 Important blood vessels
106 Blood pressure
107 Summary
107 Test yourself
108 Websites
108 Answers
## Chapter 9 Respiratory System
109 Objectives
109 Respiratory system
110 Diffusion and transportation of oxygen
111 Diffusion and transportation of carbon dioxide
111 The air passages
111 The lungs and pleural cavities
112 Collapsed lung
112 Breathing
112 Inspiration
112 Expiration
113 Lung volumes
113 Composition of air
113 The acidity of the blood and breathing
114 Breathing in birds
114 Summary
114 Test yourself
115 Websites
116 Answers
## Chapter 10 Lymphatic System
117 Objectives
117 Lymph and the lymphatic system
119 Other organs of the lymphatic system
119 Summary
120 Test yourself
121 Websites
121 Answers
## Chapter 11 The Gut and Digestion
122 Objectives
122 The gut and digestion
122 Herbivores
122 Carnivores
122 Omnivores
122 Treatment of food
124 The gut
125 Mouth
125 Teeth
126 Types of teeth
126 Dental formula
127 Oesophagus
128 Stomach
128 Small intestine
129 The rumen
129 Large intestine
130 Functional caecum
130 Gut of birds
131 Digestion
131 The liver
133 Summary
134 Test yourself
134 Websites
135 Answers
## Chapter 12 Urinary system
137 Objectives
137 Homeostasis
138 Water in the body
138 Maintaining water balance
139 Excretion
139 The kidneys and urinary system
140 Kidney tubules or nephrons
140 Processes occurring in the nephron
141 The production of concentrated urine
142 Diabetes and the kidney
143 Other functions of the kidney
143 Normal urine
143 Abnormal ingredients of urine
144 Excretion in birds
144 Summary
144 Test yourself
146 Websites
146 Answers
## Chapter 13 Reproductive System
148 Objectives
148 Reproductive system
149 Fertilisation
149 Sexual reproduction mammals
149 The male reproductive organs
150 The testes
151 Semen
151 Accessory glands
151 The penis
151 Sperm
152 The female reproductive organs
153 Ovaries
153 The ovarian cycle
153 The ovum
154 The oestrous cycle
154 Signs of oestrous or heat
155 Breeding seasons or breeding cycles
155 Fertilisation
156 Development of the morula and blastocyst
156 Implantation
156 Pregnancy
157 Hormones during pregnancy
157 Pregnancy testing
157 Gestation period
158 Signs of imminent birth
158 Labour
158 Adaptations of the foetus
158 Milk production
159 Summary
160 Test Yourself
161 Websites
162 Answers
## Chapter 14 Nervous System
164 Objectives
165 Coordination
165 The nervous system
165 The neuron
166 Connections between neurons
167 Reflexes
167 Conditioned reflexes
167 Parts of the nervous system
168 The central nervous system
168 The brain
169 The forebrain
170 The cerebellum
170 The spinal cord
170 The peripheral nervous system
171 Spinal nerves
171 The autonomic nervous system
172 Summary
173 Test Yourself
174 Websites
174 Answers
## Chapter 15 The Senses
176 Objectives
176 The sense organs
177 Touch and pressure
177 Pain
177 Temperature
177 Awareness of limb position
177 Smell
178 Taste
178 Sight
179 The structure of the eye
180 How the eye sees
181 Colour vision in animals
181 Binocular vision
182 Hearing
183 How the ear hears
183 Balance
184 Summary
185 Test Yourself
185 Websites
186 Answers
## Chapter 16 Endocrine System
188 Objectives
188 The endocrine system
189 Endocrine organs and hormones
189 The pituitary gland and hypothalamus
190 The pineal gland
190 The thyroid gland
191 The parathyroid gland
191 The adrenal gland
192 The pancreas
192 The ovaries
192 The testes
192 Summary
194 Test Yourself
194 Websites
194 Answers
**Homeostasis and Feedback Control**
194 Summary of homeostatic mechanisms
194 Temperature control
194 Water balance
195 Maintenance of blood volume after blood loss
196 Acid/base balance
197 Summary
## Glossary
# Introduction to Online Convex Optimization - Second Edition
# List of Symbols {#list-of-symbols .unnumbered}
### General {#general .unnumbered}
----------------------------------- --------------------------------------------------
$\stackrel{\text{\tiny def}}{=}$ definition
$\mathop{\mathrm{\arg\min}}\{ \}$ the argument minimizing the expression in braces
$[n]$ the set of integers $\{1,2,\ldots,n\}$
----------------------------------- --------------------------------------------------
### Geometry and Calculus {#geometry-and-calculus .unnumbered}
---------------------------- ----------------------------------------------------------------------------------------------------
${\mathbb R}^d$ $d$ dimensional Euclidean space
$\Delta_d$ $d$ dimensional simplex, $\{ \sum_i \ensuremath{\mathbf x}_i=1, \ensuremath{\mathbf x}_i \geq 0\}$
$\ensuremath{\mathbb {S}}$ $d$ dimensional sphere, $\{ \|\ensuremath{\mathbf x}\| =1\}$
$\mathbb{B}$ $d$ dimensional ball, $\{ \|\ensuremath{\mathbf x}\| \leq 1 \}$
${\mathbb R}$ real numbers
$\mathbb{C}$ complex numbers
$|A|$ determinant of matrix $A$
---------------------------- ----------------------------------------------------------------------------------------------------
### Learning Theory {#learning-theory .unnumbered}
-------------------------------- ---------------------------------------------------------
${\mathcal X},{\mathcal Y}$ feature/label sets
${\mathcal D}$ distribution over examples $(\ensuremath{\mathbf x},y)$
${\mathcal H}$ hypothesis class in ${\mathcal X}\mapsto Y$
$h$ single hypothesis $h \in {\mathcal H}$
$m$ training set size
$\mathop{\mbox{\rm error}}(h)$ generalization error of hypothesis $h \in {\mathcal H}$
-------------------------------- ---------------------------------------------------------
### Optimization {#optimization .unnumbered}
-------------------------------- ---------------------------------------------------------------------------------------------------------------
$\ensuremath{\mathbf x}$ vectors in the decision set
$\ensuremath{\mathcal K}$ decision set
$\nabla^k f$ the $k$'th differential of $f$; note $\nabla^k f \in {\mathbb R}^{d^k}$
$\nabla^{-2} f$ the inverse Hessian of $f$
$\nabla f$ the gradient of $f$
$\nabla_t$ the gradient of $f$ at point $\ensuremath{\mathbf x}_t$
$\ensuremath{\mathbf x}^\star$ the global or local optima of objective $f$
$h_t$ objective value distance to optimality, $h_t = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star)$
$d_t$ Euclidean distance to optimality $d_t = \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^\star\|$
$G$ upper bound on norm of subgradients
$D$ upper bound on Euclidean diameter
$D_p,G_p$ upper bound on the $p$-norm of the subgradients/diameter
-------------------------------- ---------------------------------------------------------------------------------------------------------------
### Regularization {#regularization .unnumbered}
-------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$R$ strongly convex and smooth regularization function
$B_R(\ensuremath{\mathbf x}|| \ensuremath{\mathbf y})$ $R$-Bregman-divergence $R(\ensuremath{\mathbf x}) - R(\ensuremath{\mathbf y}) - \nabla R(\ensuremath{\mathbf y})^\top (\ensuremath{\mathbf x}-\ensuremath{\mathbf y})$
$G_R$ upper bound on norm of (sub)gradients
$D_R^2$ squared $R$ diameter $\max_{\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}} \{ R(\ensuremath{\mathbf x}) - R(\ensuremath{\mathbf y}) \}$
$\| \ensuremath{\mathbf x}\|_A^2$ squared matrix norm $\ensuremath{\mathbf x}^\top A \ensuremath{\mathbf x}$
$\| \ensuremath{\mathbf x}\|_\ensuremath{\mathbf y}^2$ local norm according to local regularization $\ensuremath{\mathbf x}^\top \nabla^2 R(\ensuremath{\mathbf y}) \ensuremath{\mathbf x}$
$\| \ensuremath{\mathbf x}\|^*$ dual norm to $\| \ensuremath{\mathbf x}\|$
-------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# Introduction {#chap:intro}
This book considers *optimization as a process*. In many practical
applications, the environment is so complex that it is not feasible to
lay out a comprehensive theoretical model and use classical algorithmic
theory and mathematical optimization. It is necessary, as well as
beneficial, to take a robust approach, by applying an optimization
method that learns as more aspects of the problem are observed. This
view of optimization as a process has become prominent in various
fields, which has led to spectacular successes in modeling and systems
that are now part of our daily lives.
The growing body of literature in machine learning, statistics, decision
science, and mathematical optimization blurs the classical distinctions
between deterministic modeling, stochastic modeling, and optimization
methodology. We continue this trend in this book, studying a prominent
optimization framework whose precise location in the mathematical
sciences is unclear: the framework of *online convex optimization*
(OCO), which was first defined in the machine learning literature (see
section [1.4](#sec:bib-of-sec-1){reference-type="ref"
reference="sec:bib-of-sec-1"}, later in this chapter). The metric of
success is borrowed from game theory, and the framework is closely tied
to statistical learning theory and convex optimization.
We embrace these fruitful connections and, on purpose, do not try to use
any particular jargon in the discussion. Rather, this book will start
with actual problems that can be modeled and solved via OCO. We will
proceed to present rigorous definitions, backgrounds, and algorithms.
Throughout, we provide connections to the literature in other fields. It
is our hope that you, the reader, will contribute to our understanding
of these connections from your domain of expertise, and expand the
growing amount of literature on this fascinating subject.
## The Online Convex Optimization Setting {#section:formaldef}
In OCO, an online player iteratively makes decisions. At the time of
each decision, the outcome or outcomes associated with it are unknown to
the player.
After committing to a decision, the decision maker suffers a loss: every
possible decision incurs a (possibly different) loss. These losses are
unknown to the decision maker beforehand. The losses can be
adversarially chosen, and even depend on the action taken by the
decision maker.
Already at this point, several restrictions are necessary in order for
this framework to make any sense at all:
- The losses determined by an adversary should not be allowed to be
unbounded. Otherwise, the adversary could keep decreasing the scale
of the loss at each step, and never allow the algorithm to recover
from the loss of the first step. Thus, we assume that the losses lie
in some bounded region.
- The decision set must be somehow bounded and/or structured, though
not necessarily finite.
To see why this is necessary, consider decision making with an
infinite set of possible decisions. An adversary can assign high
loss to all the strategies chosen by the player indefinitely, while
setting apart some strategies with zero loss. This precludes any
meaningful performance metric.
Surprisingly, interesting statements and algorithms can be derived with
not much more than these two restrictions. The online convex
optimization (OCO) framework models the decision set as a convex set in
Euclidean space denoted as $\ensuremath{\mathcal K}
\subseteq {\mathbb R}^n$. The costs are modeled as bounded convex
functions over $\ensuremath{\mathcal K}$.
The OCO framework can be seen as a structured repeated game. The
protocol of this learning framework is as follows.
At iteration $t$, the online player chooses
$\ensuremath{\mathbf x}_t \in \ensuremath{\mathcal K}$ . After the
player has committed to this choice, a convex cost function
$f_t \in {\mathcal F}: \ensuremath{\mathcal K}\mapsto {\mathbb R}$ is
revealed. Here, ${\mathcal F}$ is the bounded family of cost functions
available to the adversary. The cost incurred by the online player is
$f_t(\ensuremath{\mathbf x}_t)$, the value of the cost function for the
choice $\ensuremath{\mathbf x}_t$. Let $T$ denote the total number of
game iterations.
What would make an algorithm a good OCO algorithm? As the framework is
game-theoretic and adversarial in nature, the appropriate performance
metric also comes from game theory: define the *regret* of the decision
maker to be the difference between the total cost she has incurred and
that of the best fixed decision in hindsight. In OCO, we are usually
interested in an upper bound on the worst-case regret of an algorithm.
Let ${\mathcal A}$ be an algorithm for OCO, which maps a certain game
history to a decision in the decision set:
$$\ensuremath{\mathbf x}_t^{\mathcal A}= {\mathcal A}(f_1,...,f_{t-1}) \in \ensuremath{\mathcal K}.$$
We formally define the regret of ${\mathcal A}$ after $T$ iterations as:
$$\label{eqn:regret-defn}
\ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) = \sup_{\{f_1,...,f_T\} \subseteq {\mathcal F}} \left\{ \sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t^{\mathcal A}) -\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\ensuremath{\mathbf x}) \right\} .$$
If the algorithm is clear from the context, we henceforth omit the
superscript and denote the algorithm's decision at time $t$ simply as
$\ensuremath{\mathbf x}_t$. Intuitively, an algorithm performs well if
its regret is sublinear as a function of $T$ (i.e.
$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) = o(T)$), since this
implies that on average, the algorithm performs as well as the best
fixed strategy in hindsight.
The running time of an algorithm for OCO is defined to be the worst-case
expected time to produce $\ensuremath{\mathbf x}_t$, for an iteration
$t \in [T]$ in a $T$-iteration repeated game. Typically, the running
time will depend on $n$ (the dimensionality of the decision set
$\mathcal{K}$), $T$ (the total number of game iterations), and the
parameters of the cost functions and underlying convex set.
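To make the protocol above concrete, here is a minimal sketch in Python of the repeated game and the regret computation. The interface (a `player` callable, a list of loss functions, and a finite grid approximating the decision set) is an assumption of this illustration, not part of the formal definition.

```python
def play_oco(player, losses, decision_set):
    """Run the OCO protocol for T rounds and report the regret (a sketch).

    player       -- callable mapping the history of revealed losses to a
                    decision x_t in the decision set (hypothetical interface)
    losses       -- list of T convex loss functions f_1, ..., f_T
    decision_set -- finite grid of candidate points, used only to
                    approximate the best fixed decision in hindsight
    """
    history, total = [], 0.0
    for f in losses:
        x_t = player(history)      # commit to x_t before f_t is revealed
        total += f(x_t)            # suffer the loss f_t(x_t)
        history.append(f)          # only now is f_t revealed to the player
    best_fixed = min(sum(f(x) for f in losses) for x in decision_set)
    return total - best_fixed      # regret relative to the grid
```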
## Examples of Problems That Can Be Modeled via Online Convex Optimization {#subsec:OCOexamples}
Perhaps the main reason that OCO has become a leading online learning
framework in recent years is its powerful modeling capability: problems
from diverse domains such as online routing, ad selection for search
engines, and spam filtering can all be modeled as special cases. In this
section, we briefly survey a few special cases and how they fit into the
OCO framework.
### Prediction from expert advice
Perhaps the most well known problem in prediction theory is the *experts
problem*. The decision maker has to choose among the advice of $n$ given
experts. After making her choice, a loss between zero and $1$ is
incurred. This scenario is repeated iteratively, and at each iteration,
the costs of the various experts are arbitrary (and possibly even
adversarial, trying to mislead the decision maker). The goal of the
decision maker is to do as well as the best expert in hindsight.
The OCO setting captures this as a special case: the set of decisions is
the set of all distributions over $n$ elements (experts); that is, the
$n$-dimensional simplex
$\ensuremath{\mathcal K}= \Delta_n = \{ \ensuremath{\mathbf x}\in {\mathbb R}^n \ , \ \sum_i \ensuremath{\mathbf x}_i = 1 \ , \ \ensuremath{\mathbf x}_i \geq 0\}$.
Let the cost of the $i$th expert at iteration $t$ be
$\ensuremath{\mathbf g_{t}}(i)$, and let $\ensuremath{\mathbf g_{t}}$ be
the cost vector of all $n$ experts. Then the cost function is the
expected cost of choosing an expert according to distribution
$\ensuremath{\mathbf x}$, and it is given by the linear function
$f_t(\ensuremath{\mathbf x}) = \ensuremath{\mathbf g_{t}}^\top \ensuremath{\mathbf x}$.
Thus, prediction from expert advice is a special case of OCO, in which
the decision set is the simplex and the cost functions are linear and
bounded, in the $\ell_\infty$ norm, to be at most $1$. The bound on the
cost functions is derived from the bound on the elements of the cost
vector $\ensuremath{\mathbf g_{t}}$.
The fundamental importance of the experts problem in machine learning
warrants special attention, and we shall return to it and analyze it in
detail at the end of this chapter.
### Online spam filtering
Consider an online spam-filtering system. Repeatedly, emails arrive in
the system and are classified as spam or valid. Obviously, such a system
has to cope with adversarially generated data and dynamically change
with the varying input---a hallmark of the OCO model.
The linear variant of this model is captured by representing the emails
as vectors according to the "bag-of-words" representation. Each email is
represented as a vector $\mathbf{a}\in {\mathbb R}^d$, where $d$ is the
number of words in the dictionary. The entries of this vector are all
zero, except for those coordinates that correspond to words appearing in
the email, which are assigned the value one.
To predict whether an email is spam, we learn a filter, for example a
vector $\ensuremath{\mathbf x}\in {\mathbb R}^d$. Usually a bound on the
Euclidean norm of this vector is decided upon a priori, and is a
parameter of great importance in practice.
Classification of an email $\mathbf{a}\in {\mathbb R}^d$ by a filter
$\ensuremath{\mathbf x}\in {\mathbb R}^d$ is given by the sign of the
inner product between these two vectors, i.e.,
$\hat{b} = \mathop{\mbox{\rm sign}}( \ensuremath{\mathbf x}^\top \mathbf{a})$
(with, for example, $+1$ meaning valid and $-1$ meaning spam).
In the OCO model of online spam filtering, the decision set is taken to
be the set of all such norm-bounded linear filters, i.e., the Euclidean
ball of a certain radius. The cost functions are determined according to
a stream of incoming emails arriving into the system, and their labels
(which may be known by the system, partially known, or not known at
all). Let $(\mathbf{a},b)$ be an email/label pair. Then the
corresponding cost function over filters is given by
$f(\ensuremath{\mathbf x}) = \ell( \hat{b},b)$. Here $\hat{b}$ is the
classification given by the filter $\ensuremath{\mathbf x}$, $b$ is the
true label, and $\ell$ is a convex loss function, for example, the
scaled square loss $\ell (\hat{b},b) = \frac{1}{4}(\hat{b} - b)^2$.
At this point the reader may wonder: why use a square loss rather than
any other function? The most natural choice is perhaps a loss of one
if $b \neq \hat{b}$ and zero otherwise.
To answer this, notice first that if both $b$ and $\hat{b}$ are binary
and in $\{-1,1\}$, then the square loss is indeed one or zero. However,
moving to a continuous function allows us much more flexibility in the
decision making process. We can allow, for example, the algorithm to
return a number in the interval $[-1,1]$ depending on its confidence.
Another reason has to do with the algorithmic efficiency of finding a
good solution. This will be the subject of future chapters.
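As a small illustration of this loss, the sketch below classifies a single email with a linear filter and evaluates the scaled square loss. The helper name `spam_loss` and the handling of a zero inner product are assumptions of the sketch, not part of the model.

```python
import numpy as np

def spam_loss(x, a, b):
    """Scaled square loss of a linear spam filter on one email (a sketch).

    x -- the filter vector (the online decision)
    a -- bag-of-words vector of the email
    b -- true label: +1 for valid, -1 for spam
    """
    b_hat = np.sign(x @ a)            # predicted label in {-1, 0, +1}
    return 0.25 * (b_hat - b) ** 2    # 0 if correct, 1 if wrong (for binary b_hat)
```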
### Online shortest paths
In the online shortest path problem, the decision maker is given a
directed graph $G=(V,E)$ and a source-sink pair $u,v \in V$. At each
iteration $t \in [T]$, the decision maker chooses a path
$p_t \in {\mathcal P}_{u,v}$, where
${\mathcal P}_{u,v} \subseteq E^{|V|}$ is the set of all $u$-$v$-paths
in the graph. The adversary independently chooses weights (lengths) on
the edges of the graph, given by a function from the edges to the real
numbers $\mathbf{w}_t: E \mapsto {\mathbb R}$, which can be represented
as a vector $\mathbf{w}_t \in {\mathbb R}^m$, where $m=|E|$. The
decision maker suffers and observes a loss, which is the weighted length
of the chosen path $\sum_{e \in p_t} \mathbf{w}_t(e)$.
The discrete description of this problem as an experts problem, where we
have an expert for each path, presents an efficiency challenge. There
are potentially exponentially many paths in terms of the graph
representation size.
Alternatively, the online shortest path problem can be cast in the
online convex optimization framework as follows. Recall the standard
description of the set of all distributions over paths (flows) in a
graph as a convex set in ${\mathbb R}^{m}$, with $O(m+|V|)$ constraints
(figure [\[flow polytope\]](#flow polytope){reference-type="ref"
reference="flow polytope"}). Denote this flow polytope by
$\ensuremath{\mathcal K}$. The expected cost of a given flow
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$ (distribution over
paths) is then a linear function, given by
$f_t(\ensuremath{\mathbf x}) = \mathbf{w}_t^\top \ensuremath{\mathbf x}$,
where, as defined above, $\mathbf{w}_t(e)$ is the length of the edge
$e \in E$. This inherently succinct formulation leads to computationally
efficient algorithms.
$$\begin{aligned}
& \sum_{ e = (u,w) , w\in V} \ensuremath{\mathbf x}_{e} = 1 = \sum_{ e = (w,v), w \in V } \ensuremath{\mathbf x}_{e} & \mbox{ flow value is one} \\
& \forall w \in V \setminus \{u,v\} \ \ \sum_{e = (w,x) \in E } \ensuremath{\mathbf x}_{e} = \sum_{e = (x,w) \in E } \ensuremath{\mathbf x}_{e} & \mbox{ flow conservation} \\
& \forall e \in E \ \ 0 \leq \ensuremath{\mathbf x}_{e} \leq 1 & \mbox{ capacity constraints}
\end{aligned}$$
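As a quick sanity check of these constraints, the following sketch tests whether a candidate vector lies in the flow polytope of a small directed graph. The function `in_flow_polytope` and the dictionary encoding of the flow are hypothetical choices made only for illustration.

```python
def in_flow_polytope(x, edges, u, v, tol=1e-9):
    """Check the flow-polytope constraints for a unit u-v flow (a sketch).

    x     -- dict mapping each directed edge (a, b) to its flow value
    edges -- list of directed edges of the graph
    u, v  -- source and sink vertices
    """
    nodes = {a for a, b in edges} | {b for a, b in edges}
    out_flow = {n: sum(x[e] for e in edges if e[0] == n) for n in nodes}
    in_flow = {n: sum(x[e] for e in edges if e[1] == n) for n in nodes}
    if any(not (-tol <= x[e] <= 1 + tol) for e in edges):        # capacity
        return False
    if abs(out_flow[u] - 1) > tol or abs(in_flow[v] - 1) > tol:  # unit flow value
        return False
    return all(abs(out_flow[n] - in_flow[n]) <= tol              # conservation
               for n in nodes if n not in (u, v))
```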
### Portfolio selection {#section:portfolios}
In this section we consider a portfolio selection model that does not
make any statistical assumptions about the stock market (as opposed to
the standard geometric Brownian motion model for stock prices), and is
called the "universal portfolio selection" model.
At each iteration $t\in[T]$, the decision maker chooses a distribution
of her wealth over $n$ assets $\ensuremath{\mathbf x_{t}}\in \Delta_n$.
The adversary independently chooses market returns for the assets, i.e.,
a vector $\ensuremath{\mathbf r_{t}}\in {\mathbb R}^n$ with strictly
positive entries such that each coordinate
$\ensuremath{\mathbf r_{t}}(i)$ is the price ratio for the $i$'th asset
between the iterations $t$ and $t+1$. The ratio between the wealth of
the investor at iterations $t+1$ and $t$ is
$\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x_{t}}$, and hence
the gain in this setting is defined to be the logarithm of this change
ratio in wealth
$\log (\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x_{t}})$.
Notice that since $\ensuremath{\mathbf x_{t}}$ is the distribution of
the investor's wealth, even if
$\ensuremath{\mathbf x_{t+1}} =\ensuremath{\mathbf x_{t}}$, the investor
may still need to trade to adjust for price changes.
The goal of regret minimization, which in this case corresponds to
minimizing the difference
$\max_{\ensuremath{\mathbf x}^\star \in \Delta_n} {\textstyle \sum}_{t=1}^T \log(\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x}^\star) - {\textstyle \sum}_{t=1}^T \log(\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x}_t)$,
has an intuitive interpretation. The first term is the logarithm of the
wealth accumulated by the best possible in-hindsight distribution
$\ensuremath{\mathbf x}^\star$. Since this distribution is fixed, it
corresponds to a strategy of rebalancing the position after every
trading period, and hence, is called a *constant rebalanced portfolio*.
The second term is the logarithm of the wealth accumulated by the online
decision maker. Hence regret minimization corresponds to maximizing the
ratio of the investor's wealth to the wealth of the best benchmark from
a pool of investing strategies.
A *universal* portfolio selection algorithm is defined to be one that,
in this setting, attains regret converging to zero. Such an algorithm,
albeit requiring exponential time, was first described by Cover (see
bibliographic notes at the end of this chapter). The online convex
optimization framework has given rise to much more efficient algorithms
based on Newton's method. We shall return to study these in detail in
chapter [4](#chap:second order-methods){reference-type="ref"
reference="chap:second order-methods"}.
### Matrix completion and recommendation systems
The prevalence of large-scale media delivery systems such as the Netflix
online video library, the Spotify music service and many others, gives rise
to very large-scale recommendation systems. One of the most popular and
successful models for automated recommendation is the matrix completion
model.
In this mathematical model, recommendations are thought of as composing
a matrix. The customers are represented by the rows, the different media
are the columns, and at the entry corresponding to a particular
user/media pair we have a value scoring the preference of the user for
that particular media.
For example, for the case of binary recommendations for music, we have a
matrix $X \in \{0,1\}^{n \times m}$ where $n$ is the number of persons
considered, $m$ is the number of songs in our library, and $0/1$
signifies dislike/like respectively: $$X_{ij} = {
\left\{
\begin{array}{ll}
{0}, & {\mbox{person $i$ dislikes song $j$}} \\
{1}, & {\mbox{person $i$ likes song $j$}}
\end{array}
\right. } .$$
In the online setting, for each iteration the decision maker outputs a
preference matrix $X_t \in \ensuremath{\mathcal K}$, where
$\ensuremath{\mathcal K}\subseteq \{0,1\}^{n \times m}$ is a subset of
all possible zero/one matrices. An adversary then chooses a user/song
pair $(i_t,j_t)$ along with a "real" preference for this pair
$y_t \in \{0,1\}$. Thus, the loss experienced by the decision maker can
be described by the convex loss function,
$$f_t(X) = ( X_{i_t,j_t} - y_t)^2 .$$
The natural comparator in this scenario is a low-rank matrix, which
corresponds to the intuitive assumption that preference is determined by
few unknown factors. Regret with respect to this comparator means
performing, on the average, as few preference-prediction errors as the
best low-rank matrix.
We return to this problem and explore efficient algorithms for it in
chapter [7](#chap:FW){reference-type="ref" reference="chap:FW"}.
## A Gentle Start: Learning from Expert Advice {#sec:experts}
Consider the following fundamental iterative decision making problem:
At each time step $t=1,2,\ldots,T$, the decision maker faces a choice
between two actions $A$ or $B$ (i.e., buy or sell a certain stock). The
decision maker has assistance in the form of $N$ "experts" that offer
their advice. After a choice between the two actions has been made, the
decision maker receives feedback in the form of a loss associated with
each decision. For simplicity one of the actions receives a loss of zero
(i.e., the "correct" decision) and the other a loss of one.
We make the following elementary observations:
1. A decision maker that chooses an action uniformly at random each
iteration, trivially attains a loss of $\frac{T}{2}$ and is
"correct" $50\%$ of the time.
2. In terms of the number of mistakes, no algorithm can do better in
the worst case! In a later exercise, we will devise a randomized
setting in which the expected number of mistakes of any algorithm is
at least $\frac{T}{2}$.
We are thus motivated to consider a *relative performance metric*: can
the decision maker make as few mistakes as the best expert in hindsight?
The next theorem shows that the answer in the worst case is negative for
a deterministic decision maker.
::: theorem
**Theorem 1.1**. *Let $L \leq \frac{T} {2}$ denote the number of
mistakes made by the best expert in hindsight. Then there does not exist
a deterministic algorithm that can guarantee less than $2L$ mistakes.*
:::
::: proof
*Proof.* Assume that there are only two experts and one always chooses
option $A$ while the other always chooses option $B$. Consider the
setting in which an adversary always chooses the opposite of our
prediction (she can do so, since our algorithm is deterministic). Then,
the total number of mistakes the algorithm makes is $T$. However, the
best expert makes no more than $\frac{T}{2}$ mistakes (at every
iteration exactly one of the two experts is mistaken). Therefore, there
is no algorithm that can always guarantee less than $2L$ mistakes. ◻
:::
This observation motivates the design of random decision making
algorithms, and indeed, the OCO framework gracefully models decisions on
a continuous probability space. Henceforth we prove Lemmas
[1.3](#lem:wm){reference-type="ref" reference="lem:wm"} and
[1.4](#lem:rwm){reference-type="ref" reference="lem:rwm"} that show the
following:
::: theorem
**Theorem 1.2**. *Let $\varepsilon\in (0,\frac{1}{2} )$. Suppose the
best expert makes $L$ mistakes. Then:*
1. *There is an efficient deterministic algorithm that can guarantee
less than $2(1+\varepsilon)L + \frac{2\log N}{\varepsilon}$
mistakes;*
2. *There is an efficient randomized algorithm for which the expected
number of mistakes is at most
$(1+\varepsilon)L + \frac{\log N}{\varepsilon}$.*
:::
### The weighted majority algorithm
The weighted majority (WM) algorithm is intuitive to describe: each
expert $i$ is assigned a weight $W_t(i)$ at every iteration $t$.
Initially, we set $W_1(i) = 1$ for all experts $i \in [N]$. For all
$t \in [T]$ let $S_t(A),S_t(B) \subseteq [N]$ be the set of experts that
choose $A$ (and respectively $B$) at time $t$. Define,
$$W_t(A) = \smashoperator[r]{\sum_{i \in S_t(A)}} W_t(i) \qquad \qquad W_t(B) = \smashoperator[r]{\sum_{i \in S_t(B)}} W_t(i)$$
and predict according to $$a_t =
\begin{cases}
A & \text{if $W_t(A) \ge W_t(B)$}\\
B & \text{otherwise.}
\end{cases}$$ Next, update the weights $W_t(i)$ as follows:
$$W_{t+1}(i) =
\begin{cases}
W_t(i) & \text{if expert $i$ was correct}\\
W_t(i) (1-\varepsilon) & \text{if expert $i$ was wrong}
\end{cases}
,$$ where $\varepsilon$ is a parameter of the algorithm that will affect
its performance. This concludes the description of the WM algorithm. We
proceed to bound the number of mistakes it makes.
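A direct translation of this description into code may help fix the details. The sketch below is one possible implementation, assuming the experts' advice and the correct outcomes are encoded as 0/1 arrays; this encoding is an illustration choice.

```python
import numpy as np

def weighted_majority(advice, outcomes, eps):
    """Deterministic weighted majority (a sketch of the WM rule above).

    advice   -- T x N array, advice[t, i] in {0, 1} for actions A=0, B=1
    outcomes -- length-T array of correct actions in {0, 1}
    eps      -- the penalty parameter epsilon in (0, 1/2)
    Returns the number of mistakes made by the algorithm.
    """
    T, N = advice.shape
    w = np.ones(N)
    mistakes = 0
    for t in range(T):
        weight_B = w[advice[t] == 1].sum()
        prediction = 1 if weight_B > w.sum() / 2 else 0   # ties go to A
        mistakes += int(prediction != outcomes[t])
        wrong = advice[t] != outcomes[t]
        w[wrong] *= (1 - eps)                             # penalize wrong experts
    return mistakes
```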
::: {#lem:wm .lemma}
**Lemma 1.3**. *Denote by $M_t$ the number of mistakes the algorithm
makes until time $t$, and by $M_t(i)$ the number of mistakes made by
expert $i$ until time $t$. Then, for any expert $i \in [N]$ we have
$$M_T \le 2(1+\varepsilon)M_T(i) + \frac{2\log N}{\varepsilon} .$$*
:::
We can optimize $\varepsilon$ to minimize the above bound. The
expression on the right hand side is of the form $f(x)=ax+b/x$, which
reaches its minimum at $x=\sqrt{b/a}$. Therefore the bound is minimized
at $\varepsilon^\star = \sqrt{\log N/M_T(i)}$. Using this optimal value
of $\varepsilon$, we get that for the best expert $i^\star$
$$M_T \le 2M_T(i^\star) + O\left(\sqrt {M_T(i^\star)\log N}\right).$$ Of
course, this value of $\varepsilon^\star$ cannot be used in advance
since we do not know which expert is the best one ahead of time (and
therefore we do not know the value of $M_T(i^\star)$). However, we shall
see later on that the same asymptotic bound can be obtained even without
this prior knowledge.
Let us now prove Lemma [1.3](#lem:wm){reference-type="ref"
reference="lem:wm"}.
::: proof
*Proof.* Let $\Phi_t= \sum_{i=1}^N W_t(i)$ for all $t \in [T]$, and note
that $\Phi_1=N$.
Notice that $\Phi_{t+1} \le \Phi_t$. However, on iterations in which the
WM algorithm erred, we have
$$\Phi_{t+1} \le \Phi_t(1-\frac{\varepsilon}{2}) ,$$ the reason being
that experts with at least half of total weight were wrong (else WM
would not have erred), and therefore
$$\Phi_{t+1} \le \frac{1}{2} \Phi_t(1-\varepsilon) + \frac {1} {2} \Phi_t =\Phi_t(1-\frac {\varepsilon}{2}) .$$
From both observations,
$$\Phi_{t} \le \Phi_1 (1-\frac{\varepsilon}{2})^{M_t} = N (1-\frac{\varepsilon}{2})^{M_t} .$$
On the other hand, by definition we have for any expert $i$ that
$$W_T(i) = (1-\varepsilon)^{M_T(i)} .$$ Since the value of $W_T(i)$ is
always less than the sum of all weights $\Phi_T$, we conclude that
$$(1-\varepsilon)^{M_T(i)} = W_T(i) \le \Phi_T \le N(1-\frac{\varepsilon}{2})^{M_T}.$$
Taking the logarithm of both sides we get
$$M_T(i)\log(1-\varepsilon) \le \log{N} + M_T\log{(1-\frac{\varepsilon}{2})} .$$
Next, we use the approximations
$$-x-x^2 \le \log{(1-x)} \le -x \qquad \quad 0 < x < \frac{1}{2},$$
which follow from the Taylor series of the logarithm function, to obtain
that
$$-M_T(i)(\varepsilon+\varepsilon^2) \le \log{N} - M_T\frac {\varepsilon}{2} ,$$
and the lemma follows. ◻
:::
### Randomized weighted majority
In the randomized version of the WM algorithm, denoted RWM, we choose
expert $i$ w.p. $p_t(i) = W_t(i) / \sum_{j=1}^N W_t(j)$ at time $t$.
::: {#lem:rwm .lemma}
**Lemma 1.4**. *Let $M_t$ denote the number of mistakes made by RWM
until iteration $t$. Then, for any expert $i \in [N]$ we have
$$\mathop{\mbox{\bf E}}[ M_T] \le (1+\varepsilon)M_T(i) + \frac{\log N}{\varepsilon} .$$*
:::
The proof of this lemma is very similar to the previous one, where the
factor of two is saved by the use of randomness:
::: proof
*Proof.* As before, let $\Phi_t= \sum_{i=1}^N W_t(i)$ for all
$t \in [T]$, and note that $\Phi_1=N$. Let $\tilde{m}_t = M_t - M_{t-1}$
be the indicator variable that equals one if the RWM algorithm makes a
mistake on iteration $t$. Let $m_t(i)$ equal one if the $i$'th expert
makes a mistake on iteration $t$ and zero otherwise. Inspecting the sum
of the weights: $$\begin{aligned}
\Phi_{t+1} & = \sum_i W_t(i) (1 - \varepsilon m_t(i)) \\
& = \Phi_t (1 - \varepsilon\sum_i p_t(i) m_t(i)) & \mbox{ $p_t(i) = \frac{W_t(i)}{\sum_j W_t(j) }$} \\
& = \Phi_t ( 1 - \varepsilon\mathop{\mbox{\bf E}}[\tilde{m}_t ]) \\
& \leq \Phi_t e^{-\varepsilon\mathop{\mbox{\bf E}}[\tilde{m}_t] }. & \mbox{ $1 + x \leq e^x $}
\end{aligned}$$ On the other hand, by definition we have for any expert
$i$ that $$W_T(i) = (1-\varepsilon)^{M_T(i)}$$ Since the value of
$W_T(i)$ is always less than the sum of all weights $\Phi_T$, we
conclude that
$$(1-\varepsilon)^{M_T(i)} = W_T(i) \le \Phi_T \le N e^{-\varepsilon\mathop{\mbox{\bf E}}[M_T]}.$$
Taking the logarithm of both sides we get
$$M_T(i)\log(1-\varepsilon) \le \log{N} - \varepsilon\mathop{\mbox{\bf E}}[ M_T]$$
Next, we use the approximation
$$-x-x^2 \le \log{(1-x)} \le -x \qquad \quad 0 < x < \frac{1}{2},$$ to
obtain
$$-M_T(i)(\varepsilon+\varepsilon^2) \le \log{N} - \varepsilon\mathop{\mbox{\bf E}}[M_T] ,$$
and the lemma follows. ◻
:::
### Hedge
The RWM algorithm is in fact more general: instead of considering a
discrete number of mistakes, we can consider measuring the performance
of an expert by a non-negative real number $\ell_t(i)$, which we refer
to as the *loss* of the expert $i$ at iteration $t$. The randomized
weighted majority algorithm guarantees that a decision maker following
its advice will incur an average expected loss approaching that of the
best expert in hindsight.
Historically, this was observed by a different and closely related
algorithm called Hedge, whose total loss bound will be of interest to us
later on in the book.
::: algorithm
::: algorithmic
Initialize: $\forall i\in [N], \ W_1(i) = 1$ Pick $i_t \sim_R W_t$,
i.e., $i_t = i$ with probability
$\ensuremath{\mathbf x}_t(i) = \frac{W_t(i) } {\sum_j W_t(j) }$ Incur
loss $\ell_t(i_t)$. Update weights
$W_{t+1}(i) = W_{t}(i) e^{-\varepsilon\ell_t(i)}$
:::
:::
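For concreteness, here is a minimal Python sketch of the Hedge update. The interface, in which all losses are supplied as a matrix and only the total expected loss is returned, is chosen purely for illustration.

```python
import numpy as np

def hedge(losses, eps):
    """Hedge with multiplicative weights (a sketch of the algorithm above).

    losses -- T x N array, losses[t, i] is the loss of expert i at round t
              (non-negative, as the theorem below assumes)
    eps    -- learning-rate parameter epsilon > 0
    Returns the algorithm's total expected loss, sum_t x_t^T ell_t.
    """
    T, N = losses.shape
    w = np.ones(N)
    total = 0.0
    for t in range(T):
        x = w / w.sum()                    # play the normalized weights
        total += x @ losses[t]             # expected loss this round
        w *= np.exp(-eps * losses[t])      # multiplicative update
    return total
```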
Henceforth, denote in vector notation the expected loss of the algorithm
by
$$\mathop{\mbox{\bf E}}[ \ell_t(i_t) ] = \sum_{i=1}^N \ensuremath{\mathbf x}_t(i) \ell_t(i) = \ensuremath{\mathbf x}_t^\top \ell_t$$
::: {#lem:hedge .theorem}
**Theorem 1.5**. *Let $\ell_t^2$ denote the $N$-dimensional vector of
square losses, i.e., $\ell_t^2(i) = \ell_t(i)^2$, let $\varepsilon> 0$,
and assume all losses to be non-negative. The Hedge algorithm satisfies
for any expert $i^\star \in [N]$:
$$\sum_{t=1}^T \ensuremath{\mathbf x}_t^\top \ell_t \le \sum_{t=1}^T \ell_t(i^\star) + \varepsilon\sum_{t=1}^T \ensuremath{\mathbf x}_t^\top \ell_t^2 + \frac{\log N}{\varepsilon}$$*
:::
::: proof
*Proof.* As before, let $\Phi_t= \sum_{i=1}^N W_t(i)$ for all
$t \in [T]$, and note that $\Phi_1=N$.
Inspecting the sum of weights: $$\begin{aligned}
\Phi_{t+1} & = \sum_i W_t(i) e^{- \varepsilon\ell_t(i)} \\
& = \Phi_t \sum_i \ensuremath{\mathbf x}_t(i) e^{- \varepsilon\ell_t(i)} & \mbox{ $\ensuremath{\mathbf x}_t(i) = \frac{W_t(i)}{\sum_j W_t(j) }$} \\
& \leq \Phi_t \sum_i \ensuremath{\mathbf x}_t(i) ( 1 - \varepsilon\ell_t(i) + \varepsilon^2 \ell_t(i)^2 ) & \mbox{ for $x \geq 0$, } \\
& & \mbox{ $e^{-x} \leq 1 - x + x^2 $} \\
& = \Phi_t ( 1 - \varepsilon\ensuremath{\mathbf x}_t^\top \ell_t + \varepsilon^2 \ensuremath{\mathbf x}_t^\top \ell_t^2 ) \\
& \leq \Phi_t e^{-\varepsilon\ensuremath{\mathbf x}_t^\top \ell_t + \varepsilon^2 \ensuremath{\mathbf x}_t^\top \ell_t^2 }. & \mbox{ $1 + x \leq e^x $}
\end{aligned}$$ On the other hand, by definition, for expert $i^\star$
we have that
$$W_{T+1}(i^\star) = e^{ -\varepsilon\sum_{t=1}^{T} \ell_t(i^\star) }$$
Since the value of $W_{T+1}(i^\star)$ is always less than the sum of all
weights $\Phi_{T+1}$, we conclude that
$$W_{T+1}(i^\star) \le \Phi_{T+1} \le N e^{-\varepsilon\sum_t \ensuremath{\mathbf x}_t^\top \ell_{t} + \varepsilon^2 \sum_{t} \ensuremath{\mathbf x}_t^\top \ell_t^2 }.$$
Taking the logarithm of both sides we get
$$-\varepsilon\sum_{t=1}^T \ell_t(i^\star) \le \log{N} - \varepsilon\sum_{t=1}^T \ensuremath{\mathbf x}_t^\top \ell_t + \varepsilon^2 \sum_{t=1}^T \ensuremath{\mathbf x}_t^\top \ell_t^2$$
and the theorem follows by simplifying. ◻
:::
## Bibliographic Remarks {#sec:bib-of-sec-1}
The OCO model was first defined by @Zinkevich03 and has since become
widely influential in the learning community, with significant
extensions (see the thesis and surveys
[@HazanThesis; @HazanSurvey; @shalev2011online]).
The problem of prediction from expert advice and the Weighted Majority
algorithm were devised in [@WarmuthLittlestone89; @LitWar94]. This
seminal work was one of the first uses of the multiplicative updates
method---a ubiquitous meta-algorithm in computation and learning, see
the survey [@AHK-MW] for more details. The Hedge algorithm was
introduced by @FreundSch1997.
The Universal Portfolios model was put forth in [@cover], and is one of
the first examples of a worst-case online learning model. Cover gave an
optimal-regret algorithm for universal portfolio selection that runs in
exponential time. A polynomial time algorithm was given in
[@KalaiVempalaPortfolios], which was further sped up in
[@AgarwalHKS06; @HAK07]. Numerous extensions to the model also appeared
in the literature, including addition of transaction costs
[@BlumKalaiPortfolios] and relation to the Geometric Brownian Motion
model for stock prices [@HazanKNips09].
In their influential paper, @AweKle08 put forth the application of
online convex optimization to online routing. A great deal of work has
been devoted since then to improve the initial bounds, and generalize it
into a complete framework for decision making with limited feedback.
This framework is an extension of OCO, called Bandit Convex Optimization
(BCO). We defer further bibliographic remarks to chapter
[6](#chap:bandits){reference-type="ref" reference="chap:bandits"} which
is devoted to the BCO framework.
## Exercises
# Basic Concepts in Convex Optimization {#chap:opt}
In this chapter we give a gentle introduction to convex optimization and
present some basic algorithms for solving convex mathematical programs.
Although offline convex optimization is not our main topic, it is useful
to recall the basic definitions and results before we move on to OCO.
This will help in assessing the advantages and limitations of OCO.
Furthermore, we describe some tools that will be our bread-and-butter
later on.
The material in this chapter is far from being new. A broad and
significantly more detailed literature exists, and the reader is
deferred to the bibliography at the end of this chapter for references.
We give here only the most elementary analysis, and focus on the
techniques that will be of use to us later on.
## Basic Definitions and Setup {#sec:optdefs}
The goal in this chapter is to minimize a continuous and convex function
over a convex subset of Euclidean space. Henceforth, let
$\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$ be a bounded convex and
closed set in Euclidean space. We denote by $D$ an upper bound on the
diameter of $\ensuremath{\mathcal K}$:
$$\forall \ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}, \ \|\ensuremath{\mathbf x}-\ensuremath{\mathbf y}\| \leq D.$$
A set $\ensuremath{\mathcal K}$ is convex if for any
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$,
all the points on the line segment connecting $\ensuremath{\mathbf x}$
and $\ensuremath{\mathbf y}$ also belong to $\ensuremath{\mathcal K}$,
i.e.,
$$\forall \alpha \in [0,1] , \ \alpha \ensuremath{\mathbf x}+ (1-\alpha)\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}.$$
A function $f: \ensuremath{\mathcal K}\mapsto {\mathbb R}$ is convex if
for any
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$
$$\forall \alpha \in [0,1] , \ f( (1 - \alpha) \ensuremath{\mathbf x}+ \alpha \ensuremath{\mathbf y}) \leq (1- \alpha) f(\ensuremath{\mathbf x}) + \alpha f(\ensuremath{\mathbf y}).$$
This inequality, and generalizations thereof, is also known as Jensen's
inequality. Equivalently, if $f$ is differentiable, that is, its
gradient $\nabla f(\ensuremath{\mathbf x})$ exists for all
$\ensuremath{\mathbf x}\in\ensuremath{\mathcal K}$, then it is convex if
and only if
$\forall \ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$
$$f(\ensuremath{\mathbf y}) \geq f(\ensuremath{\mathbf x}) + \nabla f(\ensuremath{\mathbf x})^\top (\ensuremath{\mathbf y}-\ensuremath{\mathbf x}).$$
For convex and non-differentiable functions $f$, the subgradient at
$\ensuremath{\mathbf x}$ is *defined* to be any member of the set of
vectors $\{ \nabla f(\ensuremath{\mathbf x}) \}$ that satisfies the
above for all $\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$.
We denote by $G > 0$ an upper bound on the norm of the subgradients of
$f$ over $\ensuremath{\mathcal K}$, i.e.,
$\|\nabla f(\ensuremath{\mathbf x})\| \leq G$ for all
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$. Such an upper bound
implies that the function $f$ is Lipschitz continuous with parameter
$G$, that is, for all
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$
$$|f(\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf y})| \leq G \|\ensuremath{\mathbf x}-\ensuremath{\mathbf y}\|.$$
The optimization and machine learning literature studies special types
of convex functions that admit useful properties, which in turn allow
for more efficient optimization. Notably, we say that a function is
$\alpha$-strongly convex if
$$f( \ensuremath{\mathbf y}) \geq f(\ensuremath{\mathbf x}) + \nabla f(\ensuremath{\mathbf x})^\top (\ensuremath{\mathbf y}-\ensuremath{\mathbf x}) + \frac{\alpha}{2} \|\ensuremath{\mathbf y}-\ensuremath{\mathbf x}\|^2.$$
A function is $\beta$-smooth if
$$f( \ensuremath{\mathbf y}) \leq f(\ensuremath{\mathbf x}) + \nabla f(\ensuremath{\mathbf x})^\top (\ensuremath{\mathbf y}-\ensuremath{\mathbf x}) + \frac{\beta}{2} \|\ensuremath{\mathbf y}-\ensuremath{\mathbf x}\|^2.$$
The latter condition is equivalent to a Lipschitz condition over the
gradients, i.e.,
$$\| \nabla f(\ensuremath{\mathbf x}) - \nabla f(\ensuremath{\mathbf y}) \| \leq {\beta} \|\ensuremath{\mathbf x}-\ensuremath{\mathbf y}\|.$$
If the function is twice differentiable and admits a second derivative,
known as a Hessian for a function of several variables, the above
conditions are equivalent to the following condition on the Hessian,
denoted $\nabla^2 f(\ensuremath{\mathbf x})$:
$$\alpha I \preccurlyeq \nabla^2 f(\ensuremath{\mathbf x}) \preccurlyeq \beta I,$$
where $A\preccurlyeq B$ if the matrix $B-A$ is positive semidefinite.
When the function $f$ is both $\alpha$-strongly convex and
$\beta$-smooth, we say that it is $\gamma$-well-conditioned where
$\gamma$ is the ratio between strong convexity and smoothness, also
called the *condition number* of $f$
$$\gamma = \frac{\alpha}{\beta} \leq 1$$
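For example, a quadratic $f(\mathbf{x}) = \frac{1}{2} \mathbf{x}^\top A \mathbf{x}$ with a symmetric positive definite matrix $A$ has Hessian $\nabla^2 f(\mathbf{x}) = A$ everywhere; it is therefore $\lambda_{\min}(A)$-strongly convex and $\lambda_{\max}(A)$-smooth, with condition number $\gamma = \lambda_{\min}(A)/\lambda_{\max}(A)$.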
### Projections onto convex sets {#sec:projections}
In the following algorithms we shall make use of a projection operation
onto a convex set, which is defined as the closest point in terms of
Euclidean distance inside the convex set to a given point. Formally,
$$\mathop{\Pi}_\ensuremath{\mathcal K}(\ensuremath{\mathbf y}) \stackrel{\text{\tiny def}}{=}\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \| \ensuremath{\mathbf x}- \ensuremath{\mathbf y}\|.$$
When clear from the context, we shall remove the
$\ensuremath{\mathcal K}$ subscript. It is left as an exercise to the
reader to prove that the projection of a given point over a closed,
bounded and convex set exists and is unique.
The computational complexity of projections is a subtle issue that
depends much on the characterization of $\ensuremath{\mathcal K}$
itself. Most generally, $\ensuremath{\mathcal K}$ can be represented by
a membership oracle---an efficient procedure that is capable of deciding
whether a given $\ensuremath{\mathbf x}$ belongs to
$\ensuremath{\mathcal K}$ or not. In this case, projections can be
computed in polynomial time. In certain special cases, projections can
be computed very efficiently in near-linear time. The computational cost
of projections, as well as optimization algorithms that avoid them
altogether, is discussed in chapter [7](#chap:FW){reference-type="ref"
reference="chap:FW"}.
A crucial property of projections that we shall make extensive use of is
the Pythagorean theorem, which we state here for completeness:
::: center
![Pythagorean theorem](images/fig_pyth.png){width="3.5in"}
:::
::: {#thm:pythagoras .theorem}
**Theorem 2.1** (Pythagoras, circa 500 BC). *Let
$\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$ be a convex set,
$\ensuremath{\mathbf y}\in {\mathbb R}^d$ and
$\ensuremath{\mathbf x}= \mathop{\Pi}_\ensuremath{\mathcal K}(\ensuremath{\mathbf y})$.
Then for any $\ensuremath{\mathbf z}\in \ensuremath{\mathcal K}$ we have
$$\| \ensuremath{\mathbf y}- \ensuremath{\mathbf z}\| \geq \| \ensuremath{\mathbf x}- \ensuremath{\mathbf z}\|.$$*
:::
We note that there exists a more general version of the Pythagorean
theorem. The above theorem and the definition of projections are true
and valid not only for Euclidean norms, but for projections according to
other distances that are not norms. In particular, an analogue of the
Pythagorean theorem remains valid with respect to Bregman divergences
(see chapter [5](#chap:regularization){reference-type="ref"
reference="chap:regularization"}).
### Introduction to optimality conditions {#subsec:optimality-conditions}
The standard curriculum of high school mathematics contains the basic
facts concerning when a function (usually in one dimension) attains a
local optimum or saddle point. The generalization of these conditions to
more than one dimension is called the KKT (Karush-Kuhn-Tucker)
conditions, and the reader is referred to the bibliographic material at
the end of this chapter for an in-depth rigorous discussion of
optimality conditions in general mathematical programming.
For our purposes, we describe only briefly and intuitively the main
facts that we will require henceforth. Naturally, we restrict ourselves
to convex programming, and thus a local minimum of a convex function is
also a global minimum (see exercises at the end of this chapter). In
general there can be many points in which a function is minimized, and
thus we refer to the *set* of minima of a given objective function,
denoted as
$\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in {\mathbb R}^n} \{ f(\ensuremath{\mathbf x})\}$
.
The generalization of the fact that a minimum of a convex differentiable
function on ${\mathbb R}$ is a point in which its derivative is equal to
zero, is given by the multi-dimensional analogue that its gradient is
zero:
$$\nabla f(\ensuremath{\mathbf x}) = 0 \ \ \Longleftrightarrow \ \ \ensuremath{\mathbf x}\in \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in {\mathbb R}^n} \{ f(\ensuremath{\mathbf x}) \}.$$
We will require a slightly more general, but equally intuitive, fact for
constrained optimization: at a minimum point of a constrained convex
function, the inner product between the negative gradient and direction
towards the interior of $\ensuremath{\mathcal K}$ is non-positive. This
is depicted in figure [2.1](#fig:optimality){reference-type="ref"
reference="fig:optimality"}, which shows that
$-\nabla f(\ensuremath{\mathbf x}^\star)$ defines a supporting
hyperplane to $\ensuremath{\mathcal K}$. The intuition is that if the
inner product were positive, one could improve the objective by moving
in the direction of the projected negative gradient. This fact is stated
formally in the following theorem.
::: {#thm:optim-conditions .theorem}
**Theorem 2.2** (Karush-Kuhn-Tucker). *Let
$\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$ be a convex set,
$\ensuremath{\mathbf x}^\star \in \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} f(\ensuremath{\mathbf x})$.
Then for any $\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$ we have
$$\nabla f(\ensuremath{\mathbf x}^\star) ^\top ( \ensuremath{\mathbf y}- \ensuremath{\mathbf x}^\star ) \geq 0.$$*
:::
::: center
![Optimality conditions: negative subgradient pointing outwards
](images/fig_kt.jpg){#fig:optimality width="4in"}
:::
## Gradient Descent
Gradient descent (GD) is the simplest and oldest of optimization
methods. It is an *iterative method*---the optimization procedure
proceeds in iterations, each improving the objective value. The basic
method amounts to iteratively moving the current point in the direction
of the gradient, which is a linear time operation if the gradient is
given explicitly (indeed, for many functions computing the gradient at a
certain point is a simple linear-time operation).
The basic template algorithm, for unconstrained optimization, is given
in [\[alg:basic\]](#alg:basic){reference-type="ref"
reference="alg:basic"}, and a depiction of the iterates it produced in
figure [2.2](#fig:gradient_descent){reference-type="ref"
reference="fig:gradient_descent"}.
::: center
![Iterates of the GD algorithm ](images/gd.png){#fig:gradient_descent
width="2.3in"}
:::
::: algorithm
::: algorithmic
Input: time horizon $T$, initial point $x_0$, step sizes $\{\eta_t\}$
$\ensuremath{\mathbf x}_{t+1} = \ensuremath{\mathbf x}_t - \eta_t \nabla_t$
$\bar{\mathbf{x}}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}_t} \{ f(\ensuremath{\mathbf x}_t) \}$
:::
:::
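As a concrete, purely illustrative instance of this template, the following Python sketch runs the update with a constant step size on a simple quadratic; the function names and the choice of objective are assumptions made only for the example.

```python
import numpy as np

def gradient_descent(f, grad, x0, eta, T):
    """Unconstrained GD template: x_{t+1} = x_t - eta(t) * grad(x_t).

    Returns the best iterate visited, mirroring the return step of the pseudocode.
    """
    x = np.asarray(x0, dtype=float)
    best = x
    for t in range(T):
        x = x - eta(t) * grad(x)
        if f(x) < f(best):
            best = x
    return best

# illustrative objective f(x) = 1/2 x^T A x, which is 1-smooth and 0.1-strongly convex
A = np.diag([1.0, 0.1])
f = lambda x: 0.5 * x @ (A @ x)
x_bar = gradient_descent(f, lambda x: A @ x, x0=np.ones(2), eta=lambda t: 1.0, T=100)
print(f(x_bar))   # decays geometrically toward the optimal value 0
```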
For a convex function there always exists a choice of step sizes that
will cause GD to converge to the optimal solution. The rates of
convergence, however, differ greatly and depend on the smoothness and
strong convexity properties of the objective function. The following
table summarises the convergence rates of GD variants for convex
functions with different convexity parameters. The rates described omit
the (usually small) constants in the bounds---we focus on asymptotic
rates.
::: center
::: {#table:offline}
------------------ ---------------------- ---------------------- ------------------------ ---------------------------
general $\alpha$-strongly $\beta$-smooth $\gamma$-well
convex conditioned
Gradient descent $\frac{1}{\sqrt{T}}$ $\frac{1}{\alpha T}$ $\frac{\beta}{ T}$ $e^{- \gamma T }$
Accelerated GD --- --- $\frac{{\beta}}{ T^2}$ $e^{- \sqrt{\gamma} \ T}$
------------------ ---------------------- ---------------------- ------------------------ ---------------------------
: Rates of convergence of first order (gradient-based) methods as a
function of the number of iterations and the smoothness and
strong-convexity of the objective. Dependence on other parameters and
constants, namely the Lipchitz constant, diameter of constraint set
and initial distance to the objective is omitted. Acceleration for
non-smooth functions is not possible in general.
:::
:::
[]{#table:offline label="table:offline"}
In this section we address only the first row of Table
[\[table:offline\]](#table:offline){reference-type="ref" reference="table:offline"}.
For accelerated methods and their analysis see references at the
bibliographic section.
### The Polyak stepsize
Luckily, there exists a simple choice of step sizes that yields the
optimal convergence rate, called the Polyak stepsize. It has the
significant advantage of not depending on the strong convexity and/or
smoothness parameters of the objective function.
However, it does depend on the distance in function value to optimality
and gradient norm. While the latter can be efficiently estimated, the
distance to optimality is not always available if
$f(\ensuremath{\mathbf x}^*)$ is not known ahead of time. This can be
remedied, as referred to in the bibliography.
We henceforth denote:
1. Distance to optimality in value:
$h_t = h(\ensuremath{\mathbf x}_t) = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^*)$
2. Euclidean distance to optimality:
$d_t = \| \ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^*\|$
3. Current gradient norm:
$\| \nabla_t\| = \|\nabla f(\ensuremath{\mathbf x}_t)\|$
With these notations we can describe the algorithm precisely in
Algorithm [\[alg:basicpolyak\]](#alg:basicpolyak){reference-type="ref"
reference="alg:basicpolyak"}:
::: algorithm
::: algorithmic
Input: time horizon $T$, $x_0$ Set
$\eta_t = \frac{h_t}{\|\nabla_t\|^2}$
$\ensuremath{\mathbf x}_{t+1} = \ensuremath{\mathbf x}_t - \eta_t \nabla_t$
Return
$\bar{\mathbf{x}}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}_t} \{ f(\ensuremath{\mathbf x}_t) \}$
:::
:::
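A minimal Python sketch of the Polyak rule follows, assuming the optimal value $f(\mathbf{x}^\star)$ is known (here it is zero for the illustrative quadratic); all names and the test objective are assumptions made for the example.

```python
import numpy as np

def gd_polyak(f, grad, x0, T, f_star):
    """GD with the Polyak step size eta_t = h_t / ||nabla_t||^2."""
    x = np.asarray(x0, dtype=float)
    best = x
    for t in range(T):
        g = grad(x)
        if not np.any(g):                    # zero gradient: x is already optimal
            return x
        eta = (f(x) - f_star) / (g @ g)      # requires knowing f(x*)
        x = x - eta * g
        if f(x) < f(best):
            best = x
    return best

# illustrative well-conditioned quadratic with known optimal value f(x*) = 0
A = np.diag([1.0, 0.1])
f = lambda x: 0.5 * x @ (A @ x)
x_bar = gd_polyak(f, lambda x: A @ x, x0=np.ones(2), T=50, f_star=0.0)
print(f(x_bar))
```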
To prove precise convergence bounds, assume $\|\nabla_t\| \leq G$, and
define: $$\begin{aligned}
B_T &=& \min\left\{
\frac{G d_0}{\sqrt{ T}},
\frac {2 \beta d_0^2}{ T },
\frac{3 G^2}{ \alpha T } ,
\beta d_0^2\left(1-\frac{\gamma}{4}\right)^T
\right\}
\end{aligned}$$
We can now state the main guarantee of GD with the Polyak stepsize:
::: {#thm:simple .theorem}
**Theorem 2.3**. *(GD with the Polyak Step Size) Algorithm
[\[alg:basicpolyak\]](#alg:basicpolyak){reference-type="ref"
reference="alg:basicpolyak"} guarantees the following after $T$ steps:
$$\begin{aligned}
f(\bar{\mathbf{x}}) - f(\ensuremath{\mathbf x}^\star) \leq \min_{ 0 \leq t \leq T} \{ h_t \} \leq B_T
\end{aligned}$$*
:::
### Measuring distance to optimality
When analyzing convergence of gradient methods, it is useful to use
potential functions in lieu of function distance to optimality, such as
gradient norm and/or Euclidean distance. The following relationships
hold between these quantities.
::: {#lem:elementary_properties .lemma}
**Lemma 2.4**. *The following properties hold for
$\alpha$-strongly-convex functions and/or $\beta$-smooth functions over
Euclidean space ${\mathbb R}^d$.*
1. *$\frac{\alpha}{2} d_t^2 \leq h_t$*
2. *$h_t \leq \frac{\beta}{2} d_t^2$*
3. *$\frac{1}{2 \beta} \|\nabla_t\|^2 \leq h_t$*
4. *$h_t \leq \frac{1}{2 \alpha} \|\nabla_t\|^2$*
:::
::: proof
*Proof.*
1. $h_t \geq \frac{\alpha}{2} d_t^2$:
By strong convexity, we have $$\begin{aligned}
h_t & = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^{\star}) \\
& \geq \nabla f(\ensuremath{\mathbf x}^{\star})^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}) + \frac{\alpha}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}\|^2 \\
& = \frac{\alpha}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}\|^2
\end{aligned}$$ where the last equality follows since the gradient
at the global optimum is zero.
2. $h_t \leq \frac{\beta}{2} d_t^2$:
By smoothness, $$\begin{aligned}
h_t & = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^{\star}) \\
& \leq \nabla f(\ensuremath{\mathbf x}^{\star})^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}) + \frac{\beta}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}\|^2 \\
& = \frac{\beta}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^{\star}\|^2
\end{aligned}$$ where the last equality follows since the gradient
at the global optimum is zero.
3. $h_t \geq \frac{1}{2\beta} \|\nabla_t\|^2$: Using smoothness, and
let
$\ensuremath{\mathbf x}_{t+1} = \ensuremath{\mathbf x}_t - \eta \nabla_t$
for $\eta = \frac{1}{\beta}$, $$\begin{aligned}
h_t = & f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^{\star}) \\
& \geq f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}_{t+1}) \\
& \geq \nabla f(\ensuremath{\mathbf x}_t)^\top (\ensuremath{\mathbf x}_{t} - \ensuremath{\mathbf x}_{t+1}) - \frac{\beta}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_{t+1} \|^2 \\
& = \eta \|\nabla_t\|^2 - \frac{\beta}{2} \eta^2 \|\nabla_t\|^2 \\
& = \frac{1}{2\beta} \|\nabla_t\|^2 .
\end{aligned}$$
4. $h_t \leq \frac{1}{2\alpha} \|\nabla_t\|^2$:
We have for any pair
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in {\mathbb R}^d$:
$$\begin{aligned}
f(\ensuremath{\mathbf y}) & \ge f(\ensuremath{\mathbf x}) + \nabla f(\ensuremath{\mathbf x})^\top (\ensuremath{\mathbf y}- \ensuremath{\mathbf x}) + \frac{\alpha}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf y}\|^2 \\
&\ge \min_{\ensuremath{\mathbf z}\in {\mathbb R}^d } \left\{ f(\ensuremath{\mathbf x}) + \nabla f(\ensuremath{\mathbf x})^\top (\ensuremath{\mathbf z}- \ensuremath{\mathbf x}) + \frac{\alpha}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf z}\|^2 \right\} \\
& = f(\ensuremath{\mathbf x}) - \frac{1}{2 \alpha} \| \nabla f(\ensuremath{\mathbf x})\|^2. \\
& \text{ by taking $\ensuremath{\mathbf z}= \ensuremath{\mathbf x}- \frac{1}{ \alpha} \nabla f(\ensuremath{\mathbf x}) $ }
\end{aligned}$$ In particular, taking
$\ensuremath{\mathbf x}= \ensuremath{\mathbf x}_t \ , \ \ensuremath{\mathbf y}= \ensuremath{\mathbf x}^\star$,
we get $$\label{eqn:gradlowerbound}
h_t = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) \leq \frac{1}{2 \alpha} \|\nabla_t\|^2 .$$
◻
:::
### Analysis of the Polyak stepsize
We are now ready to prove
Theorem [2.3](#thm:simple){reference-type="ref" reference="thm:simple"},
which directly follows from the following lemma.
::: {#lemma:shalom2 .lemma}
**Lemma 2.5**. *Suppose that a sequence $\ensuremath{\mathbf x}_0,
\ldots, \ensuremath{\mathbf x}_t$ satisfies: $$\label{eqn:shalom3}
d_{t+1}^2 \leq d_t^2 - \frac{ h_t^2}{\|\nabla_t\|^2}$$ then for
$\bar{\mathbf{x}}$ as defined in the algorithm, we have:
$$f(\bar{\mathbf{x}}) - f(\ensuremath{\mathbf x}^\star) \leq \frac{1}{T} \sum_t h_t \leq B_{T}\, .$$*
:::
::: proof
*Proof.* The proof analyzes different cases:
1. For convex functions with gradient bounded by $G$, $$\begin{aligned}
d_{t+1}^2 - d_t^2 & \leq - \frac{ h_t^2}{\|\nabla_t\|^2} \leq -
\frac{ h_t^2}{G^2}
\end{aligned}$$ Summing up over $T$ iterations, and using
Cauchy-Schwarz on the $T$-dimensional vectors of
$\frac{1}{T} \mathbf{1}$ and $(h_1,...,h_T)$, we have
$$\begin{aligned}
\frac{1}{T} \sum_t h_t
& \leq& \frac{1}{\sqrt{T}} \sqrt{\sum_t h_t^2} \\
& \leq& \frac{ G}{\sqrt{ T}} \sqrt{\sum_t (d_{t}^2 - d_{t+1}^2)} \leq
\frac{ G d_0 }{\sqrt{ T}} \, .
\end{aligned}$$
2. For smooth functions whose gradient is bounded by $G$,
Lemma [2.4](#lem:elementary_properties){reference-type="ref"
reference="lem:elementary_properties"} implies:
$$d_{t+1}^2 - d_t^2 \leq - \frac{ h_t^2}{\|\nabla_t\|^2} \leq -
\frac{ h_t}{2 \beta} \, .$$ This implies
$$\frac{1}{T} \sum_t h_t \leq \frac{2 \beta d_0^2}{ T}\, .$$
3. For strongly convex functions,
Lemma [2.4](#lem:elementary_properties){reference-type="ref"
reference="lem:elementary_properties"} implies: $$d_{t+1}^2 - d_t^2
\leq - \frac{h_t^2}{\|\nabla_t\|^2}
\leq - \frac{h_t^2}{G^2}
\leq - \frac{\alpha^2 d_t^4 }{4 G^2} \, .$$ In other words,
$d_{t+1}^2 \leq d_t^2 ( 1- \frac{\alpha^2 d_t^2}{4 G^2} ) \, .$
Defining $a_t := \frac{\alpha^2 d_t^2}{4 G^2}$, we have:
$$a_{t+1} \leq a_t (1-a_t) \, .$$ This implies that
$a_t \leq \frac{1}{t+1}$, which can be seen by induction. The proof
is completed as follows: $$\begin{aligned}
\frac{1}{ T/2 } \sum_{t= T/2 }^T h_t^2 &
\leq& \frac{2G^2}{ T }\sum_{t= T/2 }^T ( d_t^2 -
d_{t+1}^2) \\
&=&\frac{2 G^2}{ T } ( d _{ T/2 }^2 - d_T^2) \\
&=&\frac{8 G^4}{ \alpha^2 T} ( a
_{ T/2 } - a_T) \\
& \leq &\frac{9 G^4}{ \alpha^2 T ^2}
\, .
\end{aligned}$$ Thus, there exists a $t$ for which
$h_t^2 \leq \frac{ 9 G^4}{ \alpha^2 T^2}$. Taking the square root
completes the claim.
4. For both strongly convex and smooth functions:
$$d_{t+1}^2 - d_t^2 \leq - \frac{h_t^2}{\|\nabla_t\|^2} \leq
- \frac{ h_t}{2 \beta} \leq
- \frac{\alpha}{4\beta} d_t^2$$ Thus,
$$h_{T} \leq \beta d_{T}^2 \leq \beta d_0^2
\left(1-\frac{\alpha}{4 \beta}\right)^T = \beta d_0^2
\left(1-\frac{\gamma}{4}\right)^T \, .$$
This completes the proof of all cases. ◻
:::
## Constrained Gradient/Subgradient Descent
The vast majority of the problems considered in this text include
constraints. Consider the examples given in section
[1.2](#subsec:OCOexamples){reference-type="ref"
reference="subsec:OCOexamples"}: a path is a point in the flow polytope,
a portfolio is a point in the simplex and so on. In the language of
optimization, we require $\ensuremath{\mathbf x}$ not only to minimize a
certain objective function, but also to belong to a convex set
$\ensuremath{\mathcal K}$.
In this section we describe and analyze constrained gradient descent.
Algorithmically, the change from the previous section is small: after
updating the current point in the direction of the gradient, one may
need to project back to the decision set. However, the analysis is
somewhat more involved, and instructive for the later parts of this
text.
### Basic gradient descent---linear convergence
Algorithmic box [\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} describes a template for gradient descent over
a constrained set. It is a template since the sequence of step sizes
$\{\eta_t\}$ is left as an input parameter, and the several variants of
the algorithm differ on its choice.
::: algorithm
::: algorithmic
Input: $f$, $T$, initial point
$\ensuremath{\mathbf x}_1 \in \ensuremath{\mathcal K}$, sequence of step
sizes $\{\eta_t\}$ Let
$\ensuremath{\mathbf y}_{t+1} = \ensuremath{\mathbf x}_{t}-\eta_t {\nabla f}(\ensuremath{\mathbf x}_t) , \ \ensuremath{\mathbf x}_{t+1}= \mathop{\Pi}_{\ensuremath{\mathcal K}} \left( \ensuremath{\mathbf y}_{t+1} \right)$
${\ensuremath{\mathbf x}}_{T+1}$
:::
:::
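A minimal Python sketch of this template follows, using projection onto a Euclidean ball and a constant step size; the constraint set, objective, and step-size choice are illustrative assumptions rather than part of the text.

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius}."""
    n = np.linalg.norm(y)
    return y if n <= radius else (radius / n) * y

def projected_gd(grad, project, x1, eta, T):
    """Constrained GD: y_{t+1} = x_t - eta(t) grad(x_t);  x_{t+1} = Pi_K(y_{t+1})."""
    x = np.asarray(x1, dtype=float)
    for t in range(T):
        x = project(x - eta(t) * grad(x))
    return x

# illustrative problem: minimize f(x) = 1/2 ||x - c||^2 over the unit ball, with c outside K
c = np.array([2.0, 0.0])
x_T = projected_gd(lambda x: x - c, project_ball, x1=np.zeros(2), eta=lambda t: 1.0, T=20)
print(x_T)   # converges to the constrained minimizer [1, 0] on the boundary
```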
As opposed to the unconstrained setting, here we require a precise
setting of the learning rate to obtain the optimal convergence rate.
::: {#thm:basicGD .theorem}
**Theorem 2.6**. *For constrained minimization of
$\gamma$-well-conditioned functions and $\eta_t = \frac{1}{\beta}$,
Algorithm [\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} converges as
$$h_{t+1} \leq h_1 \cdot e^{-\frac{\gamma t}{ 4}}$$*
:::
::: proof
*Proof.* By strong convexity we have for every
$\ensuremath{\mathbf x},\ensuremath{\mathbf x}_t \in \ensuremath{\mathcal K}$
(where we denote $\nabla_t = \nabla f(\ensuremath{\mathbf x}_t)$ as
before): $$\label{eqn:tempGD}
\nabla_t^\top (\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t) \leq f(\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf x}_t) - \frac{\alpha}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2.$$
Next, appealing to the algorithm's definition and the choice
$\eta_t = \frac{1}{\beta}$, we have $$\begin{aligned}
\ensuremath{\mathbf x}_{t+1}
& = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \nabla_t^\top ( \ensuremath{\mathbf x}-\ensuremath{\mathbf x}_t) + \frac{\beta}{2 } \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2 \right\} \label{eqn:alg_defn_GD} .
\end{aligned}$$ To see this, notice that $$\begin{aligned}
%\label{eqn:quad-solution}
& \mathop{\Pi}_\ensuremath{\mathcal K}( \ensuremath{\mathbf x}_t - \eta_t \nabla_t ) \\
& = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \|\ensuremath{\mathbf x}- (\ensuremath{\mathbf x}_t - \eta_t \nabla_t) \|^2 \right\} & \mbox{ definition of projection} \\
& = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \nabla_t^\top ( \ensuremath{\mathbf x}-\ensuremath{\mathbf x}_t) + \frac{1}{2 \eta_t} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2 \right\} . & \mbox{ see exercise 6}
\end{aligned}$$ Thus, we have $$\begin{aligned}
h_{t+1} - h_{t} & = f(\ensuremath{\mathbf x}_{t+1}) - f(\ensuremath{\mathbf x}_t) \\
& \leq \nabla_t^\top ( \ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}_t) + \frac{\beta}{2} \|\ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}_{t} \|^2 & \mbox{ smoothness} \\
& \leq \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \nabla_t^\top ( \ensuremath{\mathbf x}-\ensuremath{\mathbf x}_t) + \frac{\beta}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2 \right\} & \mbox{ by \eqref{eqn:alg_defn_GD} } \\
& \leq \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ f(\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf x}_t) + \frac{\beta - \alpha}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2 \right\}. & \mbox{by } \eqref{eqn:tempGD} \\
\end{aligned}$$ The minimum can only grow if we take it over a subset of
$\ensuremath{\mathcal K}$. Thus we can restrict our attention to all
points that are convex combination of $\ensuremath{\mathbf x}_t$ and
$\ensuremath{\mathbf x}^\star$, which we denote by the interval
$[\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}^\star] = \{ (1 - \mu )\ensuremath{\mathbf x}_t + \mu \ensuremath{\mathbf x}^\star , \mu \in [0,1]\}$,
and write $$\begin{aligned}
\label{eqn:shalom22}
h_{t+1} - h_{t} & \leq \min_{\ensuremath{\mathbf x}\in [\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}^\star] } \left\{ f(\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf x}_t) + \frac{\beta - \alpha}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_t\|^2 \right\} \notag \\
& = f( (1-\mu) \ensuremath{\mathbf x}_t + \mu \ensuremath{\mathbf x}^\star ) - f(\ensuremath{\mathbf x}_t) + \frac{\beta - \alpha}{2}\mu^2 \|\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_t\|^2 \notag \\ %& \mbox{ $ \x = (1-\mu) \x_t + \mu \x^\star $ } \\
& \le (1-\mu) f( \ensuremath{\mathbf x}_t) + \mu f(\ensuremath{\mathbf x}^\star ) - f(\ensuremath{\mathbf x}_t) + \frac{\beta - \alpha}{2}\mu^2 \|\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_t\|^2 & \mbox{convexity} \notag \\
& = - \mu h_t + \frac{\beta - \alpha}{2} \mu^2 \|\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_t\|^2 .
\end{aligned}$$ The equality follows by writing
$\ensuremath{\mathbf x}$ as
$\ensuremath{\mathbf x}= (1-\mu) \ensuremath{\mathbf x}_t + \mu \ensuremath{\mathbf x}^\star$.
Using strong convexity, we have for any $\ensuremath{\mathbf x}_t$ and
the minimizer $\ensuremath{\mathbf x}^\star$: $$\begin{aligned}
h_t & = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star ) \\
&\ge \nabla f(\ensuremath{\mathbf x}^\star)^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^\star ) + \frac{\alpha}{2} \|\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_t\|^2 & \text{ $\alpha$-strong convexity} \\
& \ge \frac{\alpha}{2} \| \ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_t \|^2. & \text{ optimality Thm \ref{thm:optim-conditions} }
\end{aligned}$$ Thus, plugging this into equation
[\[eqn:shalom22\]](#eqn:shalom22){reference-type="eqref"
reference="eqn:shalom22"}, we get $$\begin{aligned}
h_{t+1} - h_t & \leq ( - \mu + \frac{\beta - \alpha}{\alpha} \mu^2 ) h_t \\
& \leq - \frac{\alpha}{4 (\beta- \alpha)} h_t. & \mbox{ optimal choice of $\mu$}
\end{aligned}$$ Thus,
$$h_{t+1} \leq h_t (1 - \frac{\alpha}{4(\beta - \alpha)}) \leq h_t( 1 - \frac{\alpha}{4 \beta} ) \leq h_t e^{ -\gamma/4}.$$
This gives the theorem statement by induction. ◻
:::
## Reductions to Non-smooth and Non-strongly Convex Functions {#sec:gd-reductions}
The previous section dealt with $\gamma$-well-conditioned functions,
which may seem like a significant restriction over vanilla convexity.
Indeed, many interesting convex functions are not strongly convex nor
smooth, and as we have seen, the convergence rate of gradient descent
greatly differs for these functions. We have completed the picture for
unconstrained optimization, and in this section we complete it for a
bounded set.
The literature on first order methods is abundant with specialized
analyses that explore the convergence rate of gradient descent for more
general functions. In this manuscript we take a different approach:
instead of analyzing variants of GD from scratch, we use reductions to
derive near-optimal convergence rates for smooth functions that are not
strongly convex, or strongly convex functions that are not smooth, or
general convex functions without any further restrictions.
While attaining sub-optimal convergence bounds (by logarithmic factors),
the advantage of this approach is two-fold: first, the reduction method
is very simple to state and analyze, and its analysis is significantly
shorter than analyzing GD from scratch. Second, the reduction method is
generic, and thus extends to the analysis of accelerated gradient
descent (or any other first order method) along the same lines. We turn
to these reductions next.
### Reduction to smooth, non strongly convex functions
Our first reduction applies the GD algorithm to functions that are
$\beta$-smooth but not strongly convex.
The idea is to add a controlled amount of strong convexity to the
function $f$, and then apply the algorithm
[\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} to optimize the new function. The solution is
distorted by the added strong convexity, but a tradeoff guarantees a
meaningful convergence rate.
::: algorithm
::: algorithmic
Input: $f$, $T$, $\ensuremath{\mathbf x}_1 \in \ensuremath{\mathcal K}$,
parameter $\tilde{\alpha}$. Let
$g(\ensuremath{\mathbf x}) = f(\ensuremath{\mathbf x}) + \frac{\tilde{\alpha}}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_1 \|^2$
Apply Algorithm [\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} with parameters
$g,T, \{\eta_t = \frac{1}{\beta}\},\ensuremath{\mathbf x}_1$, return
$\ensuremath{\mathbf x}_T$.
:::
:::
::: {#thm:smoothGDreduction .lemma}
**Lemma 2.7**. *For $\beta$-smooth convex functions, Algorithm
[\[alg:non-strongly convex-GD\]](#alg:non-strongly convex-GD){reference-type="ref"
reference="alg:non-strongly convex-GD"} with parameter
$\tilde{\alpha} = \frac{ \beta \log t }{D^2 t}$ converges as
$$h_{t+1} = O \left( \frac{\beta \log t } {t} \right)$$*
:::
::: proof
*Proof.* The function $g$ is $\tilde{\alpha}$-strongly convex and
$(\beta+ \tilde{\alpha})$-smooth (see exercises). Thus, it is
$\gamma = \frac{\tilde{\alpha}}{\tilde{\alpha} + \beta}$-well-conditioned.
Notice that $$\begin{aligned}
h_t & = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) \\
& = g(\ensuremath{\mathbf x}_t) - g(\ensuremath{\mathbf x}^\star) + \frac{\tilde{\alpha}}{2} (\|\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_1\|^2 - \|\ensuremath{\mathbf x}_t -\ensuremath{\mathbf x}_1\|^2) \\
& \le h_t^g + \tilde{\alpha}D^2. & \mbox{ def of $D$, \S \ref{sec:optdefs}}
\end{aligned}$$ Here, we denote
$h^g_t = g(\ensuremath{\mathbf x}_t) - g(\ensuremath{\mathbf x}^\star)$.
Since $g(\ensuremath{\mathbf x})$ is
$\frac{\tilde{\alpha}}{\tilde{\alpha} + \beta}$-well-conditioned,
$$\begin{aligned}
h_{t+1} & \le h_{t+1}^g + \tilde{\alpha} D^2 \\
& \leq h_1^g e^{-\frac{\tilde{\alpha} t}{ 4( \tilde{\alpha}+\beta)}} + \tilde{\alpha} D^2 & \mbox{ Theorem \ref{thm:basicGD}} \\
& = O ( \frac{ \beta \log t}{ t} ), & \mbox { choosing $\tilde{\alpha} = \frac{ \beta \log t }{D^2 t}$}
\end{aligned}$$ where we ignore constants and terms depending on $D$ and
$h_1^g$. ◻
:::
Stronger convergence rates of $O(\frac{\beta}{t})$ can be obtained by
analyzing GD from scratch, and these are known to be tight. Thus, our
reduction is suboptimal by a factor of $O(\log T)$, which we tolerate
for the reasons stated at the beginning of this section.
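The following is a minimal, unconstrained Python sketch of this reduction (the projection step of the underlying constrained algorithm is omitted); the objective, diameter bound, and parameter names are illustrative assumptions.

```python
import numpy as np

def reduction_to_well_conditioned(grad_f, x1, beta, D, T):
    """GD on g(x) = f(x) + (alpha_tilde / 2) ||x - x1||^2 with alpha_tilde as in Lemma 2.7."""
    alpha_tilde = beta * np.log(max(T, 2)) / (D ** 2 * T)
    x1 = np.asarray(x1, dtype=float)
    x = x1.copy()
    eta = 1.0 / beta                                   # step size as in the pseudocode
    for _ in range(T):
        grad_g = grad_f(x) + alpha_tilde * (x - x1)    # gradient of the regularized objective
        x = x - eta * grad_g
    return x

# illustrative objective f(x) = 1/2 (w.x)^2: beta-smooth but not strongly convex
w = np.array([1.0, -2.0])
beta = float(w @ w)
grad_f = lambda x: (w @ x) * w
x_T = reduction_to_well_conditioned(grad_f, x1=np.array([1.0, 1.0]), beta=beta, D=2.0, T=200)
print(abs(float(w @ x_T)))    # close to zero: x_T is near a minimizer of f
```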
### Reduction to strongly convex, non-smooth functions
Our reduction from non-smooth functions to $\gamma$-well-conditioned
functions is similar in spirit to the one of the previous subsection.
However, whereas the previous reduction gave rates that are off by a
factor of $\log T$, in this section we will also be off by a factor of
$d$, the dimension of the decision variable $\ensuremath{\mathbf x}$, as
compared to the standard analyses in convex optimization. For tight
bounds, the reader is referred to the excellent reference books and
surveys listed in the bibliography section
[2.6](#sec:bib_of_optimization){reference-type="ref"
reference="sec:bib_of_optimization"}.
::: algorithm
::: algorithmic
Input: $f,\mathbf{x}_1,T,\delta$ Let
$\hat{f}_\delta (\ensuremath{\mathbf x}) = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}} \left[ f ( \ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \right]$
Apply Algorithm [\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} on
$\hat{f}_\delta,\ensuremath{\mathbf x}_1,T,\{\eta_t = {\delta}\}$,
return $\ensuremath{\mathbf x}_T$
:::
:::
We apply the GD algorithm to a smoothed variant of the objective
function. In contrast to the previous reduction, smoothing cannot be
obtained by simple addition of a smooth (or any other) function.
Instead, we need a smoothing operation. The one we describe is
particularly simple and amounts to taking a local integral of the
function. More sophisticated, but less general, smoothing operators
exist that are based on the Moreau-Yoshida regularization, see
bibliographic section for more details.
Let $f$ be $G$-Lipschitz continuous and $\alpha$-strongly convex. Define
for any $\delta > 0$,
$$S_\delta[f] : {\mathbb R}^d \mapsto {\mathbb R}\ \ , \ \ S_\delta[f](\ensuremath{\mathbf x}) = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}} \left[ f ( \ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \right] ,$$
where
$\mathbb{B}= \{ \ensuremath{\mathbf x}\in \mathbb{R} ^d : \|\ensuremath{\mathbf x}\| \leq 1 \}$
is the Euclidean ball and $\ensuremath{\mathbf v}\sim \mathbb{B}$
denotes a random variable drawn from the uniform distribution over
$\mathbb{B}$. When the function $f$ is clear from the context, we
henceforth use the simpler notation $\hat{f}_\delta = S_\delta[f]$.
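Since $S_\delta[f]$ is defined as an expectation over the unit ball, its value can be approximated by sampling. The following Python sketch is only a Monte Carlo illustration of the definition; the sampling scheme, the test function, and all names are assumptions made for the example.

```python
import numpy as np

def sample_unit_ball(d, rng):
    """Draw a point uniformly from the Euclidean unit ball in R^d."""
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)               # uniform on the unit sphere
    r = rng.uniform() ** (1.0 / d)       # radius giving a uniform point in the ball
    return r * v

def smoothed_value(f, x, delta, n_samples=10000, seed=0):
    """Monte Carlo estimate of S_delta[f](x) = E_{v ~ B}[ f(x + delta v) ]."""
    rng = np.random.default_rng(seed)
    vals = [f(x + delta * sample_unit_ball(len(x), rng)) for _ in range(n_samples)]
    return float(np.mean(vals))

f = lambda x: np.abs(x).sum()            # a non-smooth convex function
x = np.zeros(3)
print(f(x), smoothed_value(f, x, delta=0.1))  # the gap is at most G * delta for f's Lipschitz constant G
```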
We will prove that the function
${\ensuremath{\hat{f}}}_\delta = S_\delta[f]$ is a smooth approximation
to $f: {\mathbb R}^d \mapsto {\mathbb R}$, i.e., it is both smooth and
close in value to $f$, as given in the following lemma.
::: {#lem:SmoothingLemma .lemma}
**Lemma 2.8**. *$\hat{f}_\delta$ has the following properties:*
1. *If $f$ is $\alpha$-strongly convex, then so is
${\ensuremath{\hat{f}}}_\delta$*
2. *$\hat{f}_\delta$ is $\frac{d G}{\delta}$-smooth*
3. *$|\hat{f} _\delta (\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf x}) | \le \delta G$
for all $\ensuremath{\mathbf x}\in \mathcal{K}$ .*
:::
Before proving this lemma, let us first complete the reduction. Using
Lemma [2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"} and the convergence for
$\gamma$-well-conditioned functions the following approximation bound is
obtained.
::: lemma
**Lemma 2.9**. *For $\delta = \frac{dG}{\alpha} \frac{\log{t}}{t}$
Algorithm [\[alg:reduction2\]](#alg:reduction2){reference-type="ref"
reference="alg:reduction2"} converges as
$$h_t=O\left( \frac{G^2 d \log{t}}{\alpha t}\right).$$*
:::
Before proving this lemma, notice that the gradient descent method is
applied with gradients of the smoothed function
${\ensuremath{\hat{f}}}_\delta$ rather than gradients of the original
objective $f$. In this section we ignore the computational cost of
computing such gradients given only access to gradients of $f$, which
may be significant. Techniques for estimating these gradients are
further explored in chapter [6](#chap:bandits){reference-type="ref"
reference="chap:bandits"}.
::: proof
*Proof.* Note that by Lemma
[2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"} the function
${\ensuremath{\hat{f}}}_\delta$ is $\gamma$-well-conditioned for
$\gamma = \frac{\alpha \delta}{d G}.$
$$\begin{aligned}
h_{t+1} & = f(\ensuremath{\mathbf x}_{t+1})-f(\ensuremath{\mathbf x}^\star) \\
&\le \hat{f}_\delta(\ensuremath{\mathbf x}_{t+1})-\hat{f}_\delta(\ensuremath{\mathbf x}^\star) + 2\delta G &\mbox{Lemma \ref{lem:SmoothingLemma}} \\
&\le h_1 e^{-\frac{\gamma t}{4}}+2\delta G & \mbox{Theorem \ref{thm:basicGD}}\\
&= h_1 e^{-\frac{\alpha t \delta}{4 dG}}+2\delta G& \mbox{$\gamma = \frac{\alpha \delta}{d G}$ by Lemma \ref{lem:SmoothingLemma}}\\
&= O \left( \frac{dG^2 \log t }{\alpha t} \right). &\mbox{$\delta = \frac{dG}{\alpha} \frac{\log{t}}{t}$}\\
\end{aligned}$$ ◻
:::
We proceed to prove that ${\ensuremath{\hat{f}}}_\delta$ is indeed a
good approximation to the original function.
::: proof
*Proof of Lemma [2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}.* First, since $\hat{f}_\delta$ is an
average of $\alpha$-strongly convex functions, it is also
$\alpha$-strongly convex. In order to prove smoothness, we will use
Stokes' theorem from calculus: For all
$\ensuremath{\mathbf x}\in {\mathbb R}^d$ and for a vector random
variable $\ensuremath{\mathbf v}$ which is uniformly distributed over
the Euclidean sphere
$\ensuremath{\mathbb {S}}= \{ \ensuremath{\mathbf y}\in \mathbb{R} ^d : \|\ensuremath{\mathbf y}\| = 1 \}$,
$$\label{lem:stokes_application}
\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} [ f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}] = \frac{\delta}{d} \nabla\hat{f}_\delta (\ensuremath{\mathbf x}).$$
Recall that a function $f$ is $\beta$-smooth if and only if for all
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$,
it holds that
$\| \nabla f(\ensuremath{\mathbf x}) -\nabla f(\ensuremath{\mathbf y}) \| \le \beta \|\ensuremath{\mathbf x}-\ensuremath{\mathbf y}\|$.
Now, $$\begin{aligned}
& \| \nabla \hat{f}_\delta (\ensuremath{\mathbf x}) -\nabla \hat{f}_\delta (\ensuremath{\mathbf y}) \| = \\
& = \frac{d}{\delta} \| \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} \left[ f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}\right] -\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} \left[ f(\ensuremath{\mathbf y}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}\right]\| & \mbox{by \eqref{lem:stokes_application}} \\
& = \frac{d}{\delta} \| \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} \left[ f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}- f(\ensuremath{\mathbf y}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}\right] \| &\mbox{linearity of expectation} \\
& \le \frac{d}{\delta} \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} \| f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}- f(\ensuremath{\mathbf y}+ \delta \ensuremath{\mathbf v}) \ensuremath{\mathbf v}\| & \mbox{Jensen's inequality}\\
& \le \frac{dG}{\delta} \| \ensuremath{\mathbf x}- \ensuremath{\mathbf y}\| \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \ensuremath{\mathbb {S}}} \left[ \|\ensuremath{\mathbf v}\| \right] & \mbox{Lipschitz continuity}\\
&= \frac{dG}{\delta} \| \ensuremath{\mathbf x}- \ensuremath{\mathbf y}\|. & \mbox{$ \ensuremath{\mathbf v}\in \ensuremath{\mathbb {S}}$}
\end{aligned}$$ This proves the second property of Lemma
[2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}. We proceed to show the third property,
namely that ${\ensuremath{\hat{f}}}_\delta$ is a good approximation to
$f$.
$$\begin{aligned}
& |\hat{f}_\delta (\ensuremath{\mathbf x})-f(\ensuremath{\mathbf x})|
= \left|\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}} \left[ f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v})\right] - f(\ensuremath{\mathbf x}) \right| &\mbox{definition of $\hat{f}_\delta$}\\
& \leq \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}} \left[ |f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) - f(\ensuremath{\mathbf x}) |\right] &\mbox{ Jensen's inequality} \\
& \le \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}}\left[ G\| \delta \ensuremath{\mathbf v}\| \right] & \mbox{$f$ is $G$-Lipschitz}\\
& \leq G \delta. & \mbox{ $\ensuremath{\mathbf v}\in \mathbb{B}$}
\end{aligned}$$ ◻
:::
We note that GD variants for $\alpha$-strongly convex functions, even
without the smoothing approach used in our reduction, are known to
converge quickly and without dependence on the dimension. We state the
known algorithm and result here without proof (see bibliography for
references).
::: {#thm:strongly convex-GD-bubeck .theorem}
**Theorem 2.10**. *Let $f$ be $\alpha$-strongly convex, and let
$\ensuremath{\mathbf x}_1,...,\ensuremath{\mathbf x}_t$ be the iterates
of Algorithm [\[alg:BasicGD\]](#alg:BasicGD){reference-type="ref"
reference="alg:BasicGD"} applied to $f$ with
$\eta_t = \frac{2}{\alpha (t+1)}$. Then
$$f\left( \frac{1}{t} \sum_{s=1}^t \frac{2 s }{t+1} \ensuremath{\mathbf x}_s \right) - f(\ensuremath{\mathbf x}^\star) \leq \frac{2 G^2}{\alpha (t+1)} .$$*
:::
### Reduction to general convex functions
One can apply both reductions simultaneously to obtain a rate of
$\tilde{O}(\frac{d}{\sqrt{t}})$. While near-optimal in terms of the
number of iterations, the weakness of this bound lies in its dependence
on the dimension. In the next chapter we shall show a rate of
$O(\frac{1}{\sqrt{t}})$ as a direct consequence of a more general online
convex optimization algorithm.
## Example: Support Vector Machine Training {#sec:svmexample}
To illustrate the usefulness of the gradient descent method, let us
describe an optimization problem that has gained much attention in
machine learning and can be solved efficiently using the methods we have
just analyzed.
A very basic and successful learning paradigm is the linear
classification model. In this model, the learner is presented with
positive and negative examples of a concept. Each example, denoted by
$\mathbf{a}_i$, is represented in Euclidean space by a $d$-dimensional
feature vector. For example, a common representation for emails in the
spam-classification problem is a binary vector in Euclidean space, where
the dimension of the space is the number of words in the language. The
$i$'th email is a vector $\mathbf{a}_i$ whose entries are given as ones
for coordinates corresponding to words that appear in the email, and
zero otherwise. In addition, each example has a label
$b_i \in \{-1,+1\}$, corresponding to whether the email has been labeled
spam/not spam. The goal is to find a hyperplane separating the two
classes of vectors: those with positive labels and those with negative
labels. If such a hyperplane, which completely separates the training
set according to the labels, does not exist, then the goal is to find a
hyperplane that achieves a separation of the training set with the
smallest number of mistakes.
Mathematically speaking, given a set of $n$ examples to train on, we
seek $\ensuremath{\mathbf x}\in {\mathbb R}^d$ that minimizes the number
of incorrectly classified examples, i.e.
$$\label{eqn:linear-classification}
\min_{\ensuremath{\mathbf x}\in {\mathbb R}^d} \sum_{i \in [n]} \delta( \mathop{\mbox{\rm sign}}(\ensuremath{\mathbf x}^\top \mathbf{a}_i ) \neq b_i)$$
where $\mathop{\mbox{\rm sign}}(x) \in \{-1,+1\}$ is the sign function,
and $\delta(z) \in \{0,1\}$ is the indicator function that takes the
value $1$ if the condition $z$ is satisfied and zero otherwise.
This optimization problem, which is at the heart of the linear
classification formulation, is NP-hard, and in fact NP-hard to even
approximate non-trivially. However, in the special case that a linear
classifier (a hyperplane $\ensuremath{\mathbf x}$) that classifies all
of the examples correctly exists, the problem is solvable in polynomial
time via linear programming.
Various relaxations have been proposed to solve the more general case,
when no perfect linear classifier exists. One of the most successful in
practice is the Support Vector Machine (SVM) formulation.
The soft margin SVM relaxation replaces the $0/1$ loss in
[\[eqn:linear-classification\]](#eqn:linear-classification){reference-type="eqref"
reference="eqn:linear-classification"} with a convex loss function,
called the hinge-loss, given by
$$\ell_{\mathbf{a},b}(\ensuremath{\mathbf x}) = \text{hinge}(b \cdot \ensuremath{\mathbf x}^\top \mathbf{a}) = \max\{0, 1 - b \cdot \ensuremath{\mathbf x}^\top \mathbf{a}\}.$$
In figure [2.3](#fig:hinge){reference-type="ref" reference="fig:hinge"}
we depict how the hinge loss is a convex relaxation for the non-convex
$0/1$ loss.
::: center
![The hinge loss function versus the 0/1 loss function
](images/hinge.pdf){#fig:hinge width="2.3in"}
:::
Further, the SVM formulation adds to the loss minimization objective a
term that regularizes the size of the elements in
$\ensuremath{\mathbf x}$. The reason and meaning of this additional term
shall be addressed in later sections. For now, let us consider the SVM
convex program: $$\label{eqn:soft-margin}
\min_{\ensuremath{\mathbf x}\in {\mathbb R}^d} \left \{ \lambda \frac{1}{n} \sum_{i \in [n]} \ell_{\mathbf{a}_i,b_i}(\ensuremath{\mathbf x}) + \frac{1}{2} \| \ensuremath{\mathbf x}\|^2 \right \}$$
::: algorithm
::: algorithmic
Input: training set of $n$ examples $\{(\mathbf{a}_i,b_i) \}$, $T$,
learning rates $\{\eta_t\}$, initial $\ensuremath{\mathbf x}_1 = 0$. Let
${\nabla_t} = \lambda \frac{1}{n} \sum_{i=1}^n \nabla \ell_{\mathbf{a}_i,b_i} (\ensuremath{\mathbf x}_t) + \ensuremath{\mathbf x}_t$
where $$\nabla \ell_{\mathbf{a}_i,b_i}(\ensuremath{\mathbf x}) =
\begin{cases}
0, & b_i \ensuremath{\mathbf x}^\top \mathbf{a}_i > 1 \\
- b_i \mathbf{a}_i, & \text{otherwise}
\end{cases}$$
${\ensuremath{\mathbf x}}_{t+1} = \ensuremath{\mathbf x}_{t}-\eta_t {\nabla_t}$
for $\eta_t = \frac{2}{t+1}$
$\bar{\ensuremath{\mathbf x}}_T = \frac{1}{T} \sum_{t=1}^T \frac{2 t }{T+1} \ensuremath{\mathbf x}_t$
:::
:::
This is an unconstrained non-smooth and strongly convex program. It
follows from Theorems [2.3](#thm:simple){reference-type="ref"
reference="thm:simple"} and
[2.10](#thm:strongly convex-GD-bubeck){reference-type="ref"
reference="thm:strongly convex-GD-bubeck"} that
${O}(\frac{1}{\varepsilon})$ iterations suffice to attain an
$\varepsilon$-approximate solution. We spell out the details of applying
the subgradient descent algorithm to this formulation in Algorithm
[\[alg:BasicGDSVM\]](#alg:BasicGDSVM){reference-type="ref"
reference="alg:BasicGDSVM"}.
Notice that the learning rates are left unspecified, even though they
can be explicitly set as in Theorem
[2.10](#thm:strongly convex-GD-bubeck){reference-type="ref"
reference="thm:strongly convex-GD-bubeck"}, or using the Polyak rate.
The Polyak rate requires knowing the function value at optimality,
although this can be relaxed (see bibliography).
A caveat of using gradient descent for SVM is the requirement to compute
the full gradient, which may require a full pass over the data for each
iteration. We will see a significantly more efficient algorithm in the
next chapter!
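To make the subgradient method just described concrete, here is a minimal Python sketch on synthetic data, with $\eta_t = 2/(t+1)$ and the weighted average of Theorem 2.10; the data, the value of $\lambda$, and the horizon are illustrative choices, not part of the text.

```python
import numpy as np

def svm_subgradient_descent(A, b, lam=1.0, T=1000):
    """Subgradient descent on  lam * (1/n) sum_i hinge(b_i x.a_i) + 1/2 ||x||^2.

    Returns the weighted average iterate  (1/T) sum_t (2t/(T+1)) x_t.
    """
    n, d = A.shape
    x = np.zeros(d)
    x_bar = np.zeros(d)
    for t in range(1, T + 1):
        x_bar += (2.0 * t / (T * (T + 1))) * x           # accumulate the weighted average
        margins = b * (A @ x)
        active = margins < 1.0                           # examples with non-zero hinge subgradient
        grad = -lam / n * (A[active].T @ b[active]) + x  # full subgradient of the objective
        x = x - (2.0 / (t + 1)) * grad                   # eta_t = 2/(alpha (t+1)) with alpha = 1
    return x_bar

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
b = np.sign(A @ w_true)                                  # synthetic, linearly separable labels
x = svm_subgradient_descent(A, b, lam=10.0, T=2000)
print(np.mean(np.sign(A @ x) == b))                      # training accuracy of the learned classifier
```

Note that each iteration touches all $n$ examples, which is exactly the caveat raised above.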
## Bibliographic Remarks {#sec:bib_of_optimization}
The reader is referred to dedicated books on convex optimization for
much more in-depth treatment of the topics surveyed in this background
chapter. For background in convex analysis see the texts
[@borwein2006convex; @rockafellar1997convex]. The classic textbook of
@boyd.convex gives a broad introduction to convex optimization with
numerous applications, see also [@BoydNotes]. For detailed rigorous
convergence proofs and in depth analysis of first order methods, see
lecture notes by @NesterovBook and books by @NY83
[@Nemirovski04lectures], as well as more recent lecture notes and texts
[@bubeckOPT; @hazan2019lecture]. Theorem
[2.10](#thm:strongly convex-GD-bubeck){reference-type="ref"
reference="thm:strongly convex-GD-bubeck"} is taken from [@bubeckOPT]
Theorem 3.9.
The logarithmic overhead in the reductions of section
[2.4](#sec:gd-reductions){reference-type="ref"
reference="sec:gd-reductions"} can be removed with a more careful
reduction and analysis, for details see [@ZeyuanH06]. A more
sophisticated smoothing operator is the Moreau-Yoshida regularization:
it avoids the dimension factor loss. However, it is sometimes less
computationally efficient to work with [@parikh2014proximal].
The Polyak learning rate is detailed in [@polyak]. A recent exposition
allows obtaining the same optimal rate without knowledge of the optimal
function value [@hazan2019revisiting].
Using linear separators and halfspaces to learn and separate data was
considered in the very early days of AI
[@rosenblatt1958perceptron; @minsky69perceptrons]. Notably, the
Perceptron algorithm was one of the first learning algorithms, and is
closely related to gradient descent. Support vector machines were
introduced in [@CortesV95; @Boser92], see also the book of @ScSm02.
Learning halfspaces with the zero-one loss is computationally hard, and
hard to even approximate non-trivially [@daniely2016complexity]. Proving
that a problem is hard to approximate is at the forefront of
computational complexity, and based on novel characterizations of the
complexity class NP [@AroraBarakbook].
## Exercises
# First-Order Algorithms for Online Convex Optimization {#chap:first order}
In this chapter we describe and analyze the most simple and basic
algorithms for online convex optimization (recall the definition of the
model as introduced in chapter [1](#chap:intro){reference-type="ref"
reference="chap:intro"}), which are also surprisingly useful and
applicable in practice. We use the same notation introduced in
§[2.1](#sec:optdefs){reference-type="ref" reference="sec:optdefs"}.
However, in contrast to the previous chapter, the goal of the algorithms
introduced in this chapter is to minimize *regret*, rather than the
optimization error (which is ill-defined in an online setting).
Recall the definition of regret in an OCO setting, as given in equation
[\[eqn:regret-defn\]](#eqn:regret-defn){reference-type="eqref"
reference="eqn:regret-defn"}, with subscripts, superscripts and the
supremum over the function class omitted when they are clear from the
context:
$$\ensuremath{\mathrm{{Regret}}}_T = \sum_{t=1}^{T} f_t(\mathbf{x}_t) -\min_{\mathbf{x}\in \ensuremath{\mathcal K}}\sum_{t=1}^{T} f_t(\mathbf{x}) .$$
Table [\[table:regret-rates\]](#table:regret-rates){reference-type="ref"
reference="table:regret-rates"} details known upper and lower bounds on
the regret for different types of convex functions as it depends on the
number of prediction iterations.
::: center
::: {#default}
$\alpha$-strongly convex $\beta$-smooth $\delta$-exp-concave
---------------- ---------------------------- ---------------------- ------------------------------
Upper bound $\frac{1}{ \alpha} \log T$ $\sqrt{T}$ $\frac{n}{ \delta} \log T$
Lower bound $\frac{1}{ \alpha} \log T$ $\sqrt{T}$ $\frac{n}{ \delta } \log T$
Average regret $\frac{\log T}{ \alpha T}$ $\frac{1}{\sqrt{T}}$ $\frac{n \log T}{ \delta T}$
: Attainable asymptotic regret bounds for loss function classes.
:::
:::
[]{#default label="default"}
In order to compare regret to optimization error it is useful to
consider the average regret, or ${\ensuremath{\mathrm{{Regret}}}}/{T}$.
Let
$\bar{\ensuremath{\mathbf x}}_T = \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf x}_t$
be the average decision. If the functions $f_t$ are all equal to a
single function $f : \ensuremath{\mathcal K}\mapsto {\mathbb R}$, then
Jensen's inequality implies that $f( \bar{\ensuremath{\mathbf x}}_T)$
converges to $f(\ensuremath{\mathbf x}^\star)$ at a rate at most the
average regret, since
$$f(\bar{\ensuremath{\mathbf x}}_T) - f(\ensuremath{\mathbf x}^\star ) \leq \frac{1}{T} \sum_{t=1} ^T [f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) ] = \frac{\ensuremath{\mathrm{{Regret}}}_T}{T} .$$
The reader may recall Table
[\[table:offline\]](#table:offline){reference-type="ref" reference="table:offline"}
describing offline convergence of first order methods: as opposed to
offline optimization, smoothness does not improve asymptotic regret
rates. However, exp-concavity, a weaker property than strong convexity,
comes into play and gives improved regret rates.
This chapter will present algorithms and lower bounds that realize the
above known results for OCO. The property of exp-concavity and its
applications, as well as logarithmic regret algorithms for exp-concave
functions are deferred to the next chapter.
## Online Gradient Descent {#section:ogd}
Perhaps the simplest algorithm that applies to the most general setting
of online convex optimization is online gradient descent. This
algorithm, which is based on standard gradient descent from offline
optimization, was introduced in its online form by Zinkevich (see
bibliography at the end of this section).
::: algorithm
::: algorithmic
Input: convex set $\ensuremath{\mathcal K}$, $T$,
$\ensuremath{\mathbf x}_1 \in \mathcal{K}$, step sizes $\{ \eta_t \}$
Play $\ensuremath{\mathbf x}_t$ and observe cost
$f_t(\ensuremath{\mathbf x}_t)$. Update and project: $$\begin{aligned}
& \ensuremath{\mathbf y}_{t+1} = \mathbf{x}_{t}-\eta_{t} \nabla f_{t}(\mathbf{x}_{t}) \\
& \mathbf{x}_{t+1} = \mathop{\Pi}_\ensuremath{\mathcal K}(\ensuremath{\mathbf y}_{t+1})
\end{aligned}$$
:::
:::
Pseudo-code for the algorithm is given in Algorithm
[\[alg:ogd\]](#alg:ogd){reference-type="ref" reference="alg:ogd"}, and a
conceptual illustration is given in figure
[3.1](#fig:ogd){reference-type="ref" reference="fig:ogd"}.
::: center
![OGD: the iterate $\ensuremath{\mathbf x}_{t+1}$ is derived by
advancing $\ensuremath{\mathbf x}_t$ in the direction opposite to the current
gradient $\nabla_t$, and projecting back into $\ensuremath{\mathcal K}$
](images/fig_gd_poly3.jpg){#fig:ogd width="4in"}
:::
In each iteration, the algorithm takes a step from the previous point in
the direction opposite to the gradient of the previous cost. This step may result
in a point outside of the underlying convex set. In such cases, the
algorithm projects the point back to the convex set, i.e. finds its
closest point in the convex set. Despite the fact that the next cost
function may be completely different than the costs observed thus far,
the regret attained by the algorithm is sublinear. This is formalized in
the following theorem (recall the definition of $G$ and $D$ from the
previous chapter).
::: {#thm:gradient .theorem}
**Theorem 3.1**. *Online gradient descent with step sizes
$\{\eta_t = \frac{D}{G \sqrt{t}} , \ t \in [T] \}$ guarantees the
following for all $T \geq 1$:
$$\ensuremath{\mathrm{{Regret}}}_T = \sum_{t=1}^{T} f_t(\mathbf{x}_t) -\min_{\mathbf{x}^\star \in \ensuremath{\mathcal K}}\sum_{t=1}^{T}
f_t(\mathbf{x}^\star)\ \leq \frac{3}{2} {G D}\sqrt{T} .$$*
:::
::: proof
*Proof.* Let
$\mathbf{x}^\star \in \mathop{\mathrm{\arg\min}}_{\mathbf{x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\mathbf{x})$.
Define
$\nabla_t \stackrel{\text{\tiny def}}{=}\nabla f_{t}(\mathbf{x}_{t})$.
By convexity $$\begin{aligned}
\label{eqn:gradient_inequality}
f_t(\mathbf{x}_t) - f_t(\mathbf{x}^\star) \leq \nabla_t^\top (\mathbf{x}_t - \mathbf{x}^\star)
\end{aligned}$$ We first upper-bound
$\nabla_t^\top (\mathbf{x}_t-\mathbf{x}^\star)$ using the update rule
for $\mathbf{x}_{t+1}$ and Theorem
[2.1](#thm:pythagoras){reference-type="ref" reference="thm:pythagoras"}
(the Pythagorean theorem): $$\label{eqn:ogdtriangle}
\| \mathbf{x}_{t+1}-\mathbf{x}^\star \|^2\ =\ \left\|\mathop{\Pi}_\ensuremath{\mathcal K}(\mathbf{x}_t - \eta_t
\nabla_{t}) -\mathbf{x}^\star\right\|^2 \leq \left\|\mathbf{x}_t - \eta_t \nabla_t-\mathbf{x}^\star\right\|^2 .$$
Hence, $$\begin{aligned}
\label{eqn:ogd_eq2}
\|\mathbf{x}_{t+1}-\mathbf{x}^\star\|^2\ &\leq&\ \|\mathbf{x}_t- \mathbf{x}^\star\|^2 + \eta_t^2
\|\nabla_t\|^2 -2 \eta_t \nabla_t^\top (\mathbf{x}_t -\mathbf{x}^\star)\nonumber\\
2 \nabla_t^\top (\mathbf{x}_t-\mathbf{x}^\star)\ &\leq&\ \frac{ \|\mathbf{x}_t-
\mathbf{x}^\star\|^2-\|\mathbf{x}_{t+1}-\mathbf{x}^\star\|^2}{\eta_t} + \eta_t G^2 .
\end{aligned}$$ Summing
[\[eqn:gradient_inequality\]](#eqn:gradient_inequality){reference-type="eqref"
reference="eqn:gradient_inequality"} and
[\[eqn:ogd_eq2\]](#eqn:ogd_eq2){reference-type="eqref"
reference="eqn:ogd_eq2"} from $t= 1$ to $T$, and setting $\eta_t =
\frac{D}{G \sqrt{t}}$ (with
$\frac{1}{\eta_0} \stackrel{\text{\tiny def}}{=}0$): $$\begin{aligned}
& 2 \left( \sum_{t=1}^T f_t(\mathbf{x}_t)-f_t(\mathbf{x}^\star) \right ) \leq 2\sum_{t=1}^T \nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x}^\star) \\
&\leq \sum_{t=1}^T \frac{ \|\mathbf{x}_t-
\mathbf{x}^\star\|^2-\|\mathbf{x}_{t+1}-\mathbf{x}^\star\|^2}{\eta_t} + G^2 \sum_{t=1}^T \eta_t \\
&\leq \sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}^\star\|^2 \left(
\frac{1}{\eta_{t}} -
\frac{1}{\eta_{t-1}} \right) + G^2 \sum_{t=1}^T \eta_t & \frac{1}{\eta_0} \stackrel{\text{\tiny def}}{=}0, \\
& & \|\ensuremath{\mathbf x_{T+1}} - \ensuremath{\mathbf x_{}}^* \|^2 \geq 0 \\
&\leq D^2 \sum_{t=1}^T \left(
\frac{1}{\eta_{t}} -
\frac{1}{\eta_{t-1}} \right) + G^2 \sum_{t=1}^T \eta_t \\
& \leq D^2 \frac{1}{\eta_{T}} + G^2 \sum_{t=1}^T \eta_t & \mbox{ telescoping series } \\
& \leq 3 DG \sqrt{T}.
\end{aligned}$$ The last inequality follows since
$\eta_t = \frac{D}{G \sqrt{t}}$ and
$\sum_{t=1}^T \frac{1}{\sqrt{t}} \leq 2 \sqrt{T}$. ◻
:::
The online gradient descent algorithm is straightforward to implement,
and updates take linear time given the gradient. However, there is a
projection step which may take significantly longer, as discussed in
§[2.1.1](#sec:projections){reference-type="ref"
reference="sec:projections"} and chapter
[7](#chap:FW){reference-type="ref" reference="chap:FW"}.
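To make the update rule concrete, the following is a minimal NumPy sketch of online gradient descent; the Euclidean-ball decision set and the form of the gradient oracles are illustrative assumptions of the sketch, not part of the algorithm's specification.

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto K = {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def online_gradient_descent(grad_oracles, x1, D, G, project=project_ball):
    """OGD with step sizes eta_t = D / (G * sqrt(t)).

    grad_oracles: a list of callables; the t-th callable returns the
                  gradient of f_t at the queried point.
    Returns the played iterates x_1, ..., x_T.
    """
    x = np.asarray(x1, dtype=float)
    plays = []
    for t, grad in enumerate(grad_oracles, start=1):
        plays.append(x.copy())                 # play x_t, then observe f_t
        eta = D / (G * np.sqrt(t))
        x = project(x - eta * grad(x))         # gradient step and projection
    return plays
```

For instance, for linear losses $f_t(\ensuremath{\mathbf x}) = \ensuremath{\mathbf v}_t^\top \ensuremath{\mathbf x}$ one could pass `grad_oracles = [lambda x, v=v: v for v in vs]`.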
## Lower Bounds {#section:lowerbound}
The previous section introduces and analyzes a very simple and natural
approach to online convex optimization. Before continuing our venture,
it is worthwhile to consider whether the previous bound can be improved.
We measure performance of OCO algorithms both by regret and by
computational efficiency. Therefore, we ask ourselves whether even
simpler algorithms that attain tighter regret bounds exist.
The computational efficiency of online gradient descent seemingly leaves
little room for improvement: putting aside the projection step, it runs
in linear time per iteration. What about obtaining better regret?
Perhaps surprisingly, the answer is negative: online gradient descent
attains, in the worst case, tight regret bounds up to small constant
factors! This is formally given in the following theorem.
::: {#thm:lowerbound .theorem}
**Theorem 3.2**. *Any algorithm for online convex optimization incurs
$\Omega(DG
\sqrt{T})$ regret in the worst case. This is true even if the cost
functions are generated from a fixed stationary distribution.*
:::
We give a sketch of the proof; filling in all details is left as an
exercise at the end of this chapter.
Consider an instance of OCO where the convex set
$\ensuremath{\mathcal K}$ is the $n$-dimensional hypercube, i.e.
$$\ensuremath{\mathcal K}= \{ \mathbf{x}\in {\mathbb R}^n \ , \ \|\mathbf{x}\|_\infty \leq 1 \}.$$
There are $2^n$ linear cost functions, one for each vertex
$\mathbf{v}\in \{
\pm 1\}^n$, defined as
$$\forall \mathbf{v}\in \{ \pm 1 \}^n \ , \ f_\mathbf{v}(\mathbf{x}) = \mathbf{v}^\top \mathbf{x}.$$
Notice that both the diameter of $\ensuremath{\mathcal K}$ and the bound
on the norm of the cost function gradients, denoted G, are bounded by
$$D \leq \sqrt{ \sum_{i=1}^n 2^2 } = 2 \sqrt{n} , \ G \leq \sqrt{ \sum_{i=1}^n (\pm1)^2 } = \sqrt{n}$$
The cost functions in each iteration are chosen at random, with uniform
probability, from the set
$\{ f_\mathbf{v}, \mathbf{v}\in \{\pm 1\}^n \}$. Denote by
$\mathbf{v}_t \in \{\pm 1\}^n$ the vertex chosen in iteration $t$, and
denote $f_t = f_{\mathbf{v}_t}$. By uniformity and independence, for any
$t$ and $\mathbf{x}_t$ chosen online,
$\mathop{\mbox{\bf E}}_{\mathbf{v}_t}[f_{t}(\mathbf{x}_t)]= \mathop{\mbox{\bf E}}_{\mathbf{v}_t}[ \mathbf{v}_t^\top
\mathbf{x}_t] = 0$. However, $$\begin{aligned}
\mathop{\mbox{\bf E}}_{\mathbf{v}_1,\ldots,\mathbf{v}_T}\left[\min_{\mathbf{x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\mathbf{x})\right] & =
\mathop{\mbox{\bf E}}\left[\min_{\mathbf{x}\in \ensuremath{\mathcal K}} \sum_{i \in [n]} \sum_{t=1}^T \mathbf{v}_t(i) \cdot \mathbf{x}_i \right] \\
& = n \mathop{\mbox{\bf E}}\left[-\left|\sum_{t=1}^T \mathbf{v}_t(1) \right|\right] & \mbox{i.i.d. coordinates}\\
& = -\Omega(n \sqrt{T}).
\end{aligned}$$ The last equality is left as an exercise.
The facts above nearly complete the proof of Theorem
[3.2](#thm:lowerbound){reference-type="ref" reference="thm:lowerbound"};
see the exercises at the end of this chapter.
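As an informal numerical sanity check of this construction (not part of the proof), note that over the hypercube the best decision in hindsight sets $\mathbf{x}(i) = -\mathrm{sign}(\sum_t \mathbf{v}_t(i))$, so the minimum equals $-\sum_i |\sum_t \mathbf{v}_t(i)|$; the sketch below estimates its expectation by Monte Carlo, with the problem sizes chosen arbitrarily.

```python
import numpy as np

def expected_best_in_hindsight(n=5, T=10_000, trials=200, seed=0):
    """Monte Carlo estimate of E[ min_{x in hypercube} sum_t v_t^T x ].

    For v_t uniform in {-1,+1}^n the minimum over the hypercube equals
    -sum_i |sum_t v_t(i)|, which concentrates around -n * sqrt(2T/pi).
    """
    rng = np.random.default_rng(seed)
    totals = []
    for _ in range(trials):
        v = rng.choice([-1.0, 1.0], size=(T, n))
        totals.append(-np.abs(v.sum(axis=0)).sum())
    return float(np.mean(totals))

# expected_best_in_hindsight() is roughly -5 * sqrt(2 * 10_000 / pi), i.e. about -399,
# illustrating the -Omega(n sqrt(T)) scaling.
```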
## Logarithmic Regret
At this point, the reader may wonder: we have introduced a seemingly
sophisticated and obviously general framework for learning and
prediction, as well as a linear-time algorithm for the most general
case, complete with tight regret bounds, and done so with elementary
proofs! Is this all OCO has to offer?
The answer to this question is two-fold:
1. Simple is good: the philosophy behind OCO treats simplicity as a
merit. The main reason OCO has taken the stage in online learning in
recent years is the simplicity of its algorithms and their analysis,
which allow for numerous variations and tweaks in their host
applications.
2. A very wide class of settings, which will be the subject of the next
sections, admit more efficient algorithms, in terms of both regret
and computational complexity.
In §[2](#chap:opt){reference-type="ref" reference="chap:opt"} we
surveyed optimization algorithms with convergence rates that vary
greatly according to the convexity properties of the function to be
optimized. Do the regret bounds in online convex optimization vary as
much as the convergence bounds in offline convex optimization over
different classes of convex cost functions?
Indeed, next we show that for important classes of loss functions
significantly better regret bounds are possible.
### Online gradient descent for strongly convex functions {#section:ogdnew}
The first algorithm that achieves regret logarithmic in the number of
iterations is a twist on the online gradient descent algorithm, changing
only the step size. The following theorem establishes logarithmic bounds
on the regret if the cost functions are strongly convex.
::: {#thm:gradient2 .theorem}
**Theorem 3.3**. *For $\alpha$-strongly convex loss functions, online
gradient descent with step sizes $\eta_t = \frac{1}{\alpha {t}}$
achieves the following guarantee for all $T \geq 1$
$$\ensuremath{\mathrm{{Regret}}}_T\ \leq\ \frac{G^2}{2 \alpha}(1 + \log T).$$*
:::
::: proof
*Proof.* Let
$\mathbf{x}^\star \in \mathop{\mathrm{\arg\min}}_{\mathbf{x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\mathbf{x})$.
Recall the definition of regret
$$\ensuremath{\mathrm{{Regret}}}_T\ = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T} f_t(\mathbf{x}^\star).$$
Define
$\nabla_t \stackrel{\text{\tiny def}}{=}\nabla f_t(\mathbf{x}_t)$.
Applying the definition of $\alpha$-strong convexity to the pair of
points $\{\ensuremath{\mathbf x}_t$,$\ensuremath{\mathbf x}^*\}$, we
have $$\begin{aligned}
2(f_t(\mathbf{x}_t)-f_t(\mathbf{x}^\star)) &\leq& 2\nabla_t^\top (\mathbf{x}_t-\mathbf{x}^\star)-\alpha
\|\mathbf{x}^\star-\mathbf{x}_t\|^2.\label{eqsz}
\end{aligned}$$ We proceed to upper-bound $\nabla_t^\top
(\mathbf{x}_t-\mathbf{x}^\star)$. Using the update rule for
$\mathbf{x}_{t+1}$ and the Pythagorean theorem
[2.1](#thm:pythagoras){reference-type="ref" reference="thm:pythagoras"},
we get
$$\| \mathbf{x}_{t+1}-\mathbf{x}^\star \|^2\ =\ \|\mathop{\Pi}_\ensuremath{\mathcal K}(\mathbf{x}_t - \eta_{t} \nabla_t)-\mathbf{x}^\star\|^2 \leq \|\mathbf{x}_t - \eta_{t} \nabla_t-\mathbf{x}^\star\|^2.$$
Hence, $$\begin{aligned}
\|\mathbf{x}_{t+1}-\mathbf{x}^\star\|^2\ &\leq&\ \|\mathbf{x}_t- \mathbf{x}^\star\|^2 + \eta_{t}^2
\|\nabla_t\|^2 -2
\eta_{t} \nabla_t^\top (\mathbf{x}_t - \mathbf{x}^\star)\nonumber\\
\end{aligned}$$ and $$\begin{aligned}
2 \nabla_t^\top (\mathbf{x}_t-\mathbf{x}^\star)\ &\leq&\ \frac{ \|\mathbf{x}_t-
\mathbf{x}^\star\|^2-\|\mathbf{x}_{t+1}-\mathbf{x}^\star\|^2}{\eta_{t}} + \eta_{t} G^2.
\label{eqer}
\end{aligned}$$ Summing [\[eqer\]](#eqer){reference-type="eqref"
reference="eqer"} from $t= 1$ to $T$, setting
$\eta_{t} = \frac{1}{\alpha t}$ (define
$\frac{1}{\eta_0} \stackrel{\text{\tiny def}}{=}0$), and combining with
[\[eqsz\]](#eqsz){reference-type="eqref" reference="eqsz"}, we have:
$$\begin{aligned}
& & 2 \sum_{t=1}^T (f_t(\mathbf{x}_t)-f_t(\mathbf{x}^\star) ) \\
&\leq &\
\sum_{t=1}^T \|\mathbf{x}_t-\mathbf{x}^\star\|^2
\left(\frac{1}{\eta_{t}}-\frac{1}{\eta_{t-1}}-\alpha\right) +G^2
\sum_{t=1}^{T} \eta_{t} \\
& & \mbox{ since } \frac{1}{\eta_0} \stackrel{\text{\tiny def}}{=}0, \|\ensuremath{\mathbf x_{T+1}} - \ensuremath{\mathbf x_{}}^* \|^2 \geq 0 \\ \\
&=&\ 0 + G^2 \sum_{t=1}^{T} \frac{1}{\alpha t} \\
& \leq & \frac{G^2}{\alpha}(1 + \log T )
\end{aligned}$$ ◻
:::
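The only change relative to the OGD sketch given earlier is the step-size schedule; the short sketch below assumes, as before, user-supplied gradient oracles and a projection routine.

```python
import numpy as np

def ogd_strongly_convex(grad_oracles, x1, alpha, project):
    """OGD for alpha-strongly convex losses with eta_t = 1 / (alpha * t)."""
    x = np.asarray(x1, dtype=float)
    plays = []
    for t, grad in enumerate(grad_oracles, start=1):
        plays.append(x.copy())
        x = project(x - grad(x) / (alpha * t))   # eta_t = 1 / (alpha * t)
    return plays
```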
## Application: Stochastic Gradient Descent {#sec:sgd}
A special case of Online Convex Optimization is the well-studied setting
of stochastic optimization. In stochastic optimization, the optimizer
attempts to minimize a convex function over a convex domain as given by
the mathematical program: $$\begin{aligned}
\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} f(\ensuremath{\mathbf x}).
\end{aligned}$$ However, unlike standard offline optimization, the
optimizer is given access to a noisy gradient oracle, defined by
$$\mathcal{O}(\ensuremath{\mathbf x}) \stackrel{\text{\tiny def}}{=}\tilde{\nabla }_\ensuremath{\mathbf x}\ \mbox{ s.t. } \ \mathop{\mbox{\bf E}}[\tilde{\nabla }_\ensuremath{\mathbf x}] = \nabla f(\ensuremath{\mathbf x}) \ , \ \mathop{\mbox{\bf E}}[ \|\tilde{\nabla }_\ensuremath{\mathbf x}\|^2 ] \leq G^2$$
That is, given a point in the decision set, a noisy gradient oracle
returns a random vector whose expectation is the gradient at the point
and whose second moment is bounded by $G^2$.
We will show that regret bounds for OCO translate to convergence rates
for stochastic optimization. As a special case, consider the online
gradient descent algorithm whose regret is bounded by
$$\ensuremath{\mathrm{{Regret}}}_{T} = O(DG\sqrt{T})$$ Applying the OGD
algorithm over a sequence of linear functions that are defined by the
noisy gradient oracle at consecutive points, and finally returning the
average of all points along the way, we obtain the stochastic gradient
descent algorithm, presented in Algorithm
[\[alg:sgd\]](#alg:sgd){reference-type="ref" reference="alg:sgd"}.
::: algorithm
::: algorithmic
Input: ${\mathcal O}$, $\ensuremath{\mathcal K}$, $T$,
$\ensuremath{\mathbf x}_1 \in \mathcal{K}$, step sizes $\{ \eta_t \}$
For $t = 1$ to $T$:
[]{#alg:sgd-defnft label="alg:sgd-defnft"} Let
$\tilde{\nabla}_t = \mathcal{O}(\ensuremath{\mathbf x}_t)$. Update and
project:
$$\ensuremath{\mathbf y}_{t+1} = \mathbf{x}_{t}-\eta_t \tilde{\nabla}_t$$
$$\mathbf{x}_{t+1} = \mathop{\Pi}_\ensuremath{\mathcal K}(\ensuremath{\mathbf y}_{t+1})$$
End for
Return $\bar{\ensuremath{\mathbf x}}_T \stackrel{\text{\tiny def}}{=}\frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf x}_t$
:::
:::
::: {#thm:sgd .theorem}
**Theorem 3.4**. *Algorithm [\[alg:sgd\]](#alg:sgd){reference-type="ref"
reference="alg:sgd"} with step sizes $\eta_t = \frac{D}{G \sqrt{t}}$
guarantees
$$\mathop{\mbox{\bf E}}[ f(\bar{\ensuremath{\mathbf x}}_T) ] \leq \min_{\ensuremath{\mathbf x}^\star \in \ensuremath{\mathcal K}} f(\ensuremath{\mathbf x}^\star) + \frac{3 GD }{2\sqrt{T}} .$$*
:::
::: proof
*Proof.* For the analysis, we define the linear functions
$f_t(\ensuremath{\mathbf x}) \stackrel{\text{\tiny def}}{=}\tilde{\nabla}_t^\top \ensuremath{\mathbf x}$.
Using the regret guarantee of OGD, we have $$\begin{aligned}
& \mathop{\mbox{\bf E}}[ f(\bar{\ensuremath{\mathbf x}}_T) ] - f(\ensuremath{\mathbf x}^\star) \\
& \leq \mathop{\mbox{\bf E}}[ \frac{1}{T} \sum_t f(\ensuremath{\mathbf x}_t) ] - f(\ensuremath{\mathbf x}^\star) & \mbox{ convexity of $f$ (Jensen) }\\
&\leq \frac{1}{T} \mathop{\mbox{\bf E}}[ \sum_t \nabla f(\ensuremath{\mathbf x}_t)^\top( \ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^\star) ] & \mbox{ convexity again }\\
& = \frac{1}{T} \mathop{\mbox{\bf E}}[ \sum_t \tilde{\nabla}_t^\top ( \ensuremath{\mathbf x}_t -\ensuremath{\mathbf x}^\star ) ] & \mbox{ noisy gradient estimator }\\
& = \frac{1}{T} \mathop{\mbox{\bf E}}[ \sum_t f_t( \ensuremath{\mathbf x}_t) -f_t(\ensuremath{\mathbf x}^\star) ] & \mbox{ Algorithm \ref{alg:sgd}, line \eqref{alg:sgd-defnft} }\\
& \leq \frac{\ensuremath{\mathrm{{Regret}}}_T }{T} & \mbox{ definition }\\
& \leq \frac{3GD }{2\sqrt{T}} & \mbox{ theorem \ref{thm:gradient}}
\end{aligned}$$ ◻
:::
It is important to note that in the proof above, we have used the fact
that the regret bounds of online gradient descent hold against an
adaptive adversary. This need arises since the cost functions $f_t$
defined in Algorithm [\[alg:sgd\]](#alg:sgd){reference-type="ref"
reference="alg:sgd"} depend on the choice of decision
$\ensuremath{\mathbf x}_t \in \ensuremath{\mathcal K}$.
In addition, the careful reader may notice that by plugging in different
step sizes (also called learning rates) and applying SGD to strongly
convex functions, one can attain $\tilde{O}({1}/{T})$ convergence rates,
where the $\tilde{O}$ notation hides logarithmic factors in $T$. Details
of this derivation are left as an exercise.
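A minimal sketch of Algorithm [alg:sgd] follows; the noisy gradient oracle is assumed to be available as a Python callable, and the projection routine is the caller's responsibility.

```python
import numpy as np

def sgd(noisy_grad, x1, D, G, T, project):
    """Stochastic gradient descent; returns the average iterate x_bar_T.

    noisy_grad(x): random vector with expectation equal to grad f(x)
                   and second moment bounded by G**2.
    """
    x = np.asarray(x1, dtype=float)
    x_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        x_sum += x                              # accumulate x_t for the average
        eta = D / (G * np.sqrt(t))
        x = project(x - eta * noisy_grad(x))
    return x_sum / T
```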
### Example: stochastic gradient descent for SVM training
Recall our example of Support Vector Machine training from
§[2.5](#sec:svmexample){reference-type="ref"
reference="sec:svmexample"}. The task of training an SVM over a given
data set amounts to solving the following convex program (equation
[\[eqn:soft-margin\]](#eqn:soft-margin){reference-type="eqref"
reference="eqn:soft-margin"}): $$\begin{aligned}
& f(\ensuremath{\mathbf x}) = \min_{\ensuremath{\mathbf x}\in {\mathbb R}^d} \left \{ \lambda \frac{1}{n} \sum_{i \in [n]} \ell_{\mathbf{a}_i,b_i}(\ensuremath{\mathbf x}) + \frac{1}{2} \| \ensuremath{\mathbf x}\|^2 \right \} \\
& \ell_{\mathbf{a},b}(\ensuremath{\mathbf x}) = \max\{0, 1 - b \cdot \ensuremath{\mathbf x}^\top \mathbf{a}\} .
\end{aligned}$$
::: algorithm
::: algorithmic
Input: training set of $n$ examples $\{(\mathbf{a}_i,b_i) \}$, $T$, step sizes $\{\eta_t\}$. Set
$\ensuremath{\mathbf x}_1 = 0$
For $t = 1$ to $T$:
Pick an example uniformly at random from the training set and denote it
$(\mathbf{a}_t,b_t)$. Let
$\tilde{\nabla}_t = \lambda \nabla \ell_{\mathbf{a}_t,b_t} (\ensuremath{\mathbf x}_t) + \ensuremath{\mathbf x}_t$
where $$\nabla \ell_{{\mathbf{a}_t},b_t}(\ensuremath{\mathbf x}_t) = {
\left\{
\begin{array}{ll}
{0}, & { b_t \ensuremath{\mathbf x}_t^\top \mathbf{a}_t > 1 } \\\\
{ - b_t \mathbf{a}_t}, & { \mbox{otherwise}}
\end{array}
\right. }$$
Update: ${\ensuremath{\mathbf x}}_{t+1} = \ensuremath{\mathbf x}_{t}-\eta_t \tilde{\nabla}_t$
End for
Return $\bar{\ensuremath{\mathbf x}}_T \stackrel{\text{\tiny def}}{=}\frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf x}_t$
:::
:::
Using the technique described in this chapter, namely the OGD and SGD
algorithms, we can devise a much faster algorithm than the one presented
in the previous chapter. The idea is to generate an unbiased estimator
for the gradient of the objective using a single example in the dataset,
and use it in lieu of the entire gradient. This is given formally in the
SGD algorithm for SVM training presented in Algorithm
[\[alg:sgd4svm\]](#alg:sgd4svm){reference-type="ref"
reference="alg:sgd4svm"}.
It follows from Theorem [3.4](#thm:sgd){reference-type="ref"
reference="thm:sgd"} that this algorithm, with appropriate parameters
$\eta_t$, returns an $\varepsilon$-approximate solution after
$T = O(\frac{1}{\varepsilon^2})$ iterations. Furthermore, with a little
more care and using Theorem [3.3](#thm:gradient2){reference-type="ref"
reference="thm:gradient2"}, a rate of $\tilde{O}(\frac{1}{\varepsilon})$
is obtained with parameters $\eta_t = O( \frac{1}{t})$.
This matches the convergence rate of standard offline gradient descent.
However, observe that each iteration is significantly cheaper---only one
example in the data set need be considered! That is the magic of SGD; we
have matched the nearly optimal convergence rate of first order methods
using extremely cheap iterations. This makes it the method of choice in
numerous applications.
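As a hedged illustration, the following NumPy sketch instantiates Algorithm [alg:sgd4svm] with the step sizes $\eta_t = 1/t$ suggested by the strongly convex analysis; the data layout (rows of `A` are the examples $\mathbf{a}_i$) is an assumption of the sketch.

```python
import numpy as np

def sgd_svm(A, b, lam, T, seed=None):
    """SGD for f(x) = lam * (1/n) * sum_i max(0, 1 - b_i a_i^T x) + 0.5 * ||x||^2.

    Each iteration uses one uniformly random example to form the unbiased
    gradient estimate lam * grad(hinge_i)(x) + x. Returns the average iterate.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_sum = np.zeros(d)
    for t in range(1, T + 1):
        x_sum += x
        i = rng.integers(n)                            # pick a random example
        grad_hinge = np.zeros(d) if b[i] * (A[i] @ x) > 1 else -b[i] * A[i]
        x = x - (lam * grad_hinge + x) / t             # eta_t = 1/t
    return x_sum / T
```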
## Bibliographic Remarks {#bibliographic-remarks}
The OCO framework was introduced by @Zinkevich03, where the OGD
algorithm was introduced and analyzed. Precursors to this algorithm,
albeit for less general settings, were introduced and analyzed in
[@KivWar97]. Logarithmic regret algorithms for Online Convex
Optimization were introduced and analyzed in [@HAK07].
The stochastic gradient descent (SGD) algorithm dates back to
@robbins1951, where it was called "stochastic approximation". The
importance of SGD for machine learning was advocated for in
[@bottou1998online; @bottou2008tradeoffs]. The literature on SGD is vast
and the reader is referred to the text of @bubeckOPT and paper by
@lan2012optimal.
Application of SGD to soft-margin SVM training was explored in
[@Shalev-ShwartzSSC11]. Tight convergence rates of SGD for strongly
convex and non-smooth functions were only recently obtained in
[@hazan:beyond; @RSS; @SZ].
## Exercises
# Second-Order Methods {#chap:second order-methods}
The motivation for this chapter is the application of online portfolio
selection, considered in the first chapter of this book. We begin with a
detailed description of this application. We proceed to describe a new
class of convex functions that model this problem. This new class of
functions is more general than the class of strongly convex functions
discussed in the previous chapter. It allows for logarithmic regret
algorithms, which are based on second order methods from convex
optimization. In contrast to first order methods, which have been our
focus thus far and relied on (sub)gradients, second order methods
exploit information about the second derivative of the objective
function.
## Motivation: Universal Portfolio Selection
In this subsection we give the formal definition of the universal
portfolio selection problem that was informally described in
§[1.2](#subsec:OCOexamples){reference-type="ref"
reference="subsec:OCOexamples"}.
### Mainstream portfolio theory
Mainstream financial theory models stock prices as a stochastic process
known as Geometric Brownian Motion (GBM). This model assumes that the
fluctuations in the prices of the stocks behave essentially as a random
walk. It is perhaps easier to think about the price of an asset (stock) over
time segments, obtained from a discretization of time into equal
segments. Thus, the logarithm of the price at segment $t+1$, denoted
$l_{t+1}$, is given by the sum of the logarithm of the price at segment
$t$ and a Gaussian random variable with a particular mean and variance,
$$l_{t+1} \sim l_t + \mathcal{N}(\mu,\sigma).$$
This is only an informal way of thinking about GBM. The formal model is
a continuous time process, similar to the discrete time stochastic
process we have just described, obtained as the time intervals, means,
and variances approach zero.
The GBM model gives rise to particular algorithms for portfolio
selection (as well as more sophisticated applications such as options
pricing). Given the means and variances of the stock prices over time of
a set of assets, as well as their cross-correlations, a portfolio with
maximal expected gain (mean) for a specific risk (variance) threshold
can be formulated.
The fundamental question is, of course, how does one obtain the mean and
variance parameters, not to mention the cross-correlations, of a given
set of stocks? One accepted solution is to estimate these from
historical data, e.g., by taking the recent history of stock prices.
### Universal portfolio theory
The theory of universal portfolio selection is very different from the
GBM model. The main difference is the lack of statistical assumptions
about the stock market. The idea is to model investing as a repeated
decision making scenario, which fits nicely into our OCO framework, and
to measure regret as a performance metric.
Consider the following scenario: at each iteration $t \in [T]$, the
decision maker chooses $\ensuremath{\mathbf x}_t$, a distribution of her
wealth over $n$ assets, such that
$\ensuremath{\mathbf x_{t}}\in \Delta_n$. Here
$\Delta_n = \{ \ensuremath{\mathbf x}\in {\mathbb R}^n_+ , \sum_i \ensuremath{\mathbf x}_i = 1 \}$
is the $n$-dimensional simplex, i.e., the set of all distributions over
$n$ elements. An adversary independently chooses market returns for the
assets, i.e., a vector $\ensuremath{\mathbf r_{t}}\in {\mathbb R}_+^n$
such that each coordinate $\ensuremath{\mathbf r_{t}}(i)$ is the price
ratio for the $i$'th asset between the iterations $t$ and $t+1$. For
example, if the $i$'th coordinate is the Google ticker symbol GOOG
traded on the NASDAQ, then
$$\ensuremath{\mathbf r_{t}}(i) = \frac{\mbox{price of GOOG at time $t+1$}}{\mbox{price of GOOG at time $t$}}$$
How does the decision maker's wealth change? Let $W_t$ be her total
wealth at iteration $t$. Then, ignoring transaction costs, we have
$$W_{t+1} = W_t \cdot \ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x_{t}}$$
Over $T$ iterations, the total wealth of the investor is given by
$$W_{T} = W_1 \cdot \prod_{t=1}^{T-1} \ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x_{t}}$$
The goal of the decision maker, to maximize the overall wealth gain
${W_T}/{W_1}$, can be attained by maximizing the following more
convenient logarithm of this quantity, given by
$$\log \frac{W_T}{W_1} = \sum_{t=1}^{T-1} \log \ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x_{t}}$$
The above formulation is already very similar to our OCO setting, albeit
phrased as a gain maximization rather than a loss minimization setting.
Let
$$f_t(\ensuremath{\mathbf x}) = \log (\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x})$$
The convex set is the $n$-dimensional simplex
$\ensuremath{\mathcal K}= \Delta_n$, and define the regret to be
$$\ensuremath{\mathrm{{Regret}}}_T = \max_{\ensuremath{\mathbf x}^\star \in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\ensuremath{\mathbf x}^\star) - \sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t)$$
The functions $f_t$ are concave rather than convex, which is perfectly
fine as we are framing the problem as a maximization rather than a
minimization. Note also that the regret is the negation of the usual
regret notion
[\[eqn:regret-defn\]](#eqn:regret-defn){reference-type="eqref"
reference="eqn:regret-defn"} we have considered for minimization
problems.
Since this is an online convex optimization instance, we can use the
online gradient descent algorithm from the previous chapter to invest,
which ensures $O(\sqrt{T})$ regret (see exercises). What guarantee do we
attain in terms of investing? To answer this, in the next section we
reason about what $\ensuremath{\mathbf x}^\star$ in the above expression
may be.
### Constant rebalancing portfolios
As $\ensuremath{\mathbf x}^\star \in \ensuremath{\mathcal K}= \Delta_n$
is a point in the $n$-dimensional simplex, consider the special case of
$\ensuremath{\mathbf x}^\star = \ensuremath{\mathbf e_{1}}$, i.e., the
first standard basis vector (the vector that has zero in all coordinates
except the first, which is set to one). The term
$\sum_{t=1}^T f_t(\ensuremath{\mathbf e_{1}})$ becomes
$\sum_{t=1}^T \log \ensuremath{\mathbf r_{t}}(1)$, or
$$\log \prod_{t=1}^T \ensuremath{\mathbf r_{t}}(1) = \log \left( \frac{\mbox{price of stock at time $T+1$}} {\mbox{initial price of stock}} \right)$$
As $T$ becomes large, any sublinear regret guarantee (e.g., the
$O(\sqrt{T})$ regret guarantee achieved using online gradient descent)
achieves an average regret that approaches zero. In this context, this
implies that the log-wealth gain achieved (on average over $T$ rounds)
is as good as that of the first stock. Since
$\ensuremath{\mathbf x}^\star$ can be taken to be any vector, sublinear
regret guarantees average log-wealth growth as good as any stock!
However, $\ensuremath{\mathbf x}^\star$ can be significantly better, as
shown in the following example. Consider a market of two stocks that
fluctuate wildly. The first stock increases by $100\%$ every even day
and returns to its original price the following (odd) day. The second
stock does exactly the opposite: decreases by $50\%$ on even days and
rises back on odd days. Formally, we have
$$\ensuremath{\mathbf r_{t}}(1) = (2 \ , \ \frac{1}{2} \ , \ 2 \ , \ \frac{1}{2} , ... )$$
$$\ensuremath{\mathbf r_{t}}(2) = (\frac{1}{2} \ , \ 2 \ , \ \frac{1}{2} \ , \ 2 \ , ... )$$
Clearly, any investment in either of the stocks will not gain in the
long run. However, the portfolio
$\ensuremath{\mathbf x}^\star = (0.5,0.5)$ increases wealth by a factor
of
$\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x}^\star = (\frac{1}{2})^2 + 1 = 1.25$
daily! Such a mixed distribution is called a constant rebalanced portfolio,
as it needs to rebalance the proportion of total capital invested in
each stock at each iteration to maintain this fixed distribution
strategy.
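The two-stock example can be checked numerically in a few lines; the ten-day horizon below is arbitrary.

```python
import numpy as np

T = 10                                                          # arbitrary even horizon
r1 = np.array([2.0 if t % 2 == 0 else 0.5 for t in range(T)])   # stock 1 returns
r2 = 1.0 / r1                                                   # stock 2 returns
returns = np.stack([r1, r2], axis=1)                            # returns[t] = (r_t(1), r_t(2))

print(np.prod(returns[:, 0]))                   # all wealth in stock 1: 1.0
print(np.prod(returns[:, 1]))                   # all wealth in stock 2: 1.0
print(np.prod(returns @ np.array([0.5, 0.5])))  # constant rebalancing: 1.25 ** T
```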
Thus, vanishing average regret guarantees long-run growth as the best
constant rebalanced portfolio in hindsight. Such a portfolio strategy is
called *universal*. We have seen that the online gradient descent
algorithm gives essentially a universal algorithm with regret
$O(\sqrt{T})$. Can we get better regret guarantees?
## Exp-Concave Functions
For convenience, we return to considering losses of convex functions,
rather than gains of concave functions as in the application for
portfolio selection. The two problems are equivalent: we simply replace
the maximization of the concave
$f(\ensuremath{\mathbf x})= \log(\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x})$
with the minimization of the convex
$f(\ensuremath{\mathbf x})=-\log(\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x})$.
In the previous chapter we have seen that the OGD algorithm with
carefully chosen step sizes can deliver logarithmic regret for strongly
convex functions. However, the loss function for the OCO setting of
portfolio selection,
$f_t(\ensuremath{\mathbf x}) = -\log (\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x}),$
is not strongly convex. Instead, the Hessian of this function is given
by
$$\nabla^2 f_t(\ensuremath{\mathbf x}) = \frac{\ensuremath{\mathbf r_{t}}\ensuremath{\mathbf r_{t}}^\top }{(\ensuremath{\mathbf r_{t}}^\top \ensuremath{\mathbf x})^2}$$
which is a rank one matrix. Recall that the Hessian of a
twice-differentiable strongly convex function is lower bounded by a
multiple of the identity matrix; in particular, it is positive definite
and has full rank. Thus, the loss function above is quite far from being strongly
convex.
However, an important observation is that this Hessian is large in the
direction of the gradient. This property is called exp-concavity. We
proceed to define this property rigorously and show that it suffices to
attain logarithmic regret.
::: definition
**Definition 4.1**. *A convex function
$f : {\mathbb R}^n \mapsto {\mathbb R}$ is defined to be
$\alpha$-exp-concave over
$\ensuremath{\mathcal K}\subseteq {\mathbb R}^n$ if the function $g$ is
concave, where $g: \ensuremath{\mathcal K}\mapsto {\mathbb R}$ is
defined as
$$g(\ensuremath{\mathbf x}) = e^{-\alpha f(\ensuremath{\mathbf x}) }$$*
:::
For the following discussion, recall the notation of
§[2.1](#sec:optdefs){reference-type="ref" reference="sec:optdefs"}, and
in particular our convention over matrices that $A \succcurlyeq B$ if
and only if $A - B$ is positive semidefinite. Exp-concavity implies
strong-convexity in the direction of the gradient. This reduces to the
following property:
::: {#lem:quadratic_approximation1 .lemma}
**Lemma 4.2**. *A twice-differentiable function
$f : {\mathbb R}^n \mapsto {\mathbb R}$ is $\alpha$-exp-concave at
$\ensuremath{\mathbf x}$ if and only if
$$\nabla^2 f(\ensuremath{\mathbf x}) \succcurlyeq {\alpha} \nabla f(\ensuremath{\mathbf x}) \nabla f(\ensuremath{\mathbf x})^\top.$$*
:::
The proof of this lemma is given as a guided exercise at the end of this
chapter. We prove a slightly stronger lemma below.
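Before doing so, here is a quick numerical illustration of Lemma 4.2 (an illustration only, not part of any proof): for the portfolio loss $f(\ensuremath{\mathbf x}) = -\log(\ensuremath{\mathbf r}^\top \ensuremath{\mathbf x})$ the Hessian equals the outer product of the gradient with itself, so the condition of the lemma holds with $\alpha = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0.5, 2.0, size=3)       # market returns for three assets
x = np.array([0.2, 0.3, 0.5])           # a point in the simplex

g = -r / (r @ x)                        # gradient of f(x) = -log(r^T x)
H = np.outer(r, r) / (r @ x) ** 2       # Hessian of f

# Lemma 4.2 with alpha = 1: H - g g^T should be PSD (here it is exactly zero).
print(np.allclose(H, np.outer(g, g)))   # True
```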
::: {#lem:quadratic_approximation2 .lemma}
**Lemma 4.3**. *Let $f :\ensuremath{\mathcal K}\rightarrow {\mathbb R}$
be an $\alpha$-exp-concave function, and $D,G$ denote the diameter of
$\ensuremath{\mathcal K}$ and a bound on the (sub)gradients of $f$
respectively. The following holds for all
$\gamma \leq \frac{1}{2}\min\{\frac{1}{GD},\alpha\}$ and all
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}$:
$$f(\mathbf{x}) \geq f(\mathbf{y}) + \nabla f(\mathbf{y})^\top (\mathbf{x}-\mathbf{y}) +
\frac{\gamma}{2} (\mathbf{x}- \mathbf{y})^\top \nabla f(\mathbf{y}) \nabla
f(\mathbf{y})^\top(\mathbf{x}-\mathbf{y}).$$*
:::
::: proof
*Proof.* The composition of a concave and non-decreasing function with
another concave function is concave. Since $2\gamma \leq \alpha$, the
function $g(z)= z^{2 \gamma/\alpha}$ is concave and non-decreasing over
${\mathbb R}_+$, and $\exp(-\alpha f(\mathbf{x}))$ is concave by the
$\alpha$-exp-concavity of $f$. It follows that their composition, the
function
$h(\mathbf{x}) \stackrel{\text{\tiny def}}{=}\exp(-2\gamma f(\mathbf{x}))$,
is concave. Then by the concavity of $h(\mathbf{x})$,
$$h(\mathbf{x}) \leq h(\mathbf{y}) + \nabla h(\mathbf{y})^\top(\mathbf{x}- \mathbf{y})$$
Plugging in
$\nabla h(\mathbf{y}) = -2\gamma \exp(-2\gamma f(\mathbf{y})) \nabla
f(\mathbf{y})$ gives
$$\exp(-2\gamma f(\mathbf{x})) \leq \exp(-2\gamma f(\mathbf{y})) [1 - 2\gamma \nabla f(\mathbf{y})^\top (\mathbf{x}- \mathbf{y})].$$
Simplifying gives
$$f(\mathbf{x}) \geq f(\mathbf{y}) - \frac{1}{2\gamma} \log \left( 1-2\gamma \nabla f(\mathbf{y})^\top
(\mathbf{x}-\mathbf{y}) \right).$$ Next, note that
$|2\gamma \nabla f(\mathbf{y})^\top
(\mathbf{x}-\mathbf{y})|\ \leq\ 2\gamma GD \leq 1$ and that, by the
Taylor expansion of the logarithm, for $|z| \leq 1$ it holds that
$-\log(1-z) \geq z+\frac{1}{4}{z^2}$. Applying this inequality with
$z = 2\gamma \nabla f(\mathbf{y})^\top
(\mathbf{x}-\mathbf{y})$ implies the lemma. ◻
:::
## Exponentially Weighted Online Convex Optimization
Before diving into efficient second order methods, we first describe a
simple algorithm based on the multiplicative updates method which gives
logarithmic regret for exp-concave losses. Algorithm
[\[alg:ewoo\]](#alg:ewoo){reference-type="eqref" reference="alg:ewoo"}
below, called EWOO, is a close relative to the Hedge Algorithm
[\[alg:Hedge\]](#alg:Hedge){reference-type="eqref"
reference="alg:Hedge"}. Its regret guarantee is robust: it does not
include a Lipschitz constant or a diameter bound. In addition, it is
particularly simple to describe and analyze.
The downside of EWOO is its running time: a naive implementation runs
in time exponential in the dimension. It is possible to give a
randomized polynomial time implementation based on random sampling
techniques, where the polynomial depends on both the dimension and the
number of iterations; see the bibliographic section for more details.
::: algorithm
::: algorithmic
Input: convex set $\ensuremath{\mathcal K}$, $T$, parameter
$\alpha > 0$.
For $t = 1$ to $T$:
Let
$w_t(\mathbf{x}) = e^{-\alpha{\textstyle \sum}_{\tau=1}^{t-1} f_\tau(\mathbf{x})}$.
Play $\mathbf{x}_t$ given by
$$\ensuremath{\mathbf x}_t = \frac{\int_\ensuremath{\mathcal K}\mathbf{x}\ w_t(\mathbf{x}) d \mathbf{x}}{\int_\ensuremath{\mathcal K}w_t(\mathbf{x})d \mathbf{x}} .$$
End for
:::
:::
In the analysis below, it can be observed that choosing $\mathbf{x}_t$
at random with density proportional to $w_t(\mathbf{x})$, instead of
computing the entire integral, also guarantees our regret bounds in
expectation. This is the basis for the polynomial time implementation.
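As an illustration of the algorithm itself, the following one-dimensional sketch implements EWOO by numerical integration on a grid; the interval, the grid resolution, and the requirement that each $f_t$ accept a NumPy array of query points are assumptions of the sketch rather than part of the algorithm.

```python
import numpy as np

def ewoo_1d(losses, alpha, k_lo=0.0, k_hi=1.0, grid=1001):
    """EWOO over K = [k_lo, k_hi] via numerical integration on a grid.

    losses: list of vectorized callables f_t, each alpha-exp-concave on K.
    Returns the played points x_1, ..., x_T.
    """
    xs = np.linspace(k_lo, k_hi, grid)
    cumulative = np.zeros(grid)               # sum_{tau < t} f_tau on the grid
    plays = []
    for f in losses:
        w = np.exp(-alpha * cumulative)       # unnormalized weights w_t(x)
        plays.append(float(np.sum(xs * w) / np.sum(w)))  # weighted mean over K
        cumulative += f(xs)                   # observe f_t, update the weights
    return plays
```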
We proceed to give the logarithmic regret bounds.
::: {#thm:exp .theorem}
**Theorem 4.4**.
*$$\ensuremath{\mathrm{{Regret}}}_T(EWOO) \ \leq \ \frac{d}{\alpha} \log T + \frac{2}{\alpha} .$$*
:::
::: proof
*Proof.* Let $h_t(\mathbf{x}) = e^{- \alpha f_t(\mathbf{x})}$. Since
$f_t$ is $\alpha$-exp-concave, we have that $h_t$ is concave and thus
$$h_t(\mathbf{x}_t) \geq \frac{\int_\ensuremath{\mathcal K}h_t(\mathbf{x}) \prod_{\tau=1}^{t-1} h_\tau(\mathbf{x}) ~d \mathbf{x}}{\int_\ensuremath{\mathcal K}
\prod_{\tau=1}^{t-1} h_\tau(\mathbf{x}) ~ d \mathbf{x}}.$$ Hence, we
have by telescoping product, $$\label{eqn:telescope}
\prod_{\tau=1}^t h_\tau(\mathbf{x}_\tau) \geq \frac{\int_\ensuremath{\mathcal K}
\prod_{\tau=1}^t h_\tau(\mathbf{x}) ~ d \mathbf{x}}{\int_\ensuremath{\mathcal K}1 ~ d \mathbf{x}} =
\frac{\int_\ensuremath{\mathcal K}\prod_{\tau=1}^t h_\tau(\mathbf{x}) ~ d \mathbf{x}}{\mbox{vol}(\ensuremath{\mathcal K})}$$
Let
$\mathbf{x}^\star \in \arg\max_{\mathbf{x}\in \ensuremath{\mathcal K}}
\prod_{t=1}^T h_t(\mathbf{x})$ be the best decision in hindsight
(equivalently, a minimizer of $\sum_{t=1}^T f_t$). Denote by
$S_\delta \subset \ensuremath{\mathcal K}$ the translated Minkowski set
given by
$$S_\delta = (1-\delta) \ensuremath{\mathbf x}^\star + \delta \ensuremath{\mathcal K}= \left\{ \mathbf{x}= (1-\delta) \mathbf{x}^\star + \delta \mathbf{y}\ , \ \mathbf{y}\in
\ensuremath{\mathcal K}\right\}.$$ By concavity of $h_t$ and the fact
that $h_t$ is non-negative, we have that,
$$\forall \mathbf{x}\in S_\delta \ . \ \quad h_t(\mathbf{x}) \geq (1-\delta) h_t(\mathbf{x}^\star).$$
Hence,
$$\forall \mathbf{x}\in S_\delta \quad \prod_{\tau=1}^T h_\tau(\mathbf{x})
\geq \left( 1 - \delta\right)^T \prod_{\tau=1}^T h_\tau(\mathbf{x}^\star)$$
Finally, since
$S_\delta = (1-\delta) \mathbf{x}^\star + \delta \ensuremath{\mathcal K}$
is simply a rescaling of $\ensuremath{\mathcal K}$ followed by a
translation, and we are in $d$ dimensions,
$\mbox{vol}(S_\delta) = \mbox{vol}(\ensuremath{\mathcal K}) \times \delta^d$.
Putting this together with equation
[\[eqn:telescope\]](#eqn:telescope){reference-type="eqref"
reference="eqn:telescope"}, we have
$$\prod_{\tau=1}^T h_\tau(\mathbf{x}_\tau) \geq \frac{\mbox{vol}{(S_\delta)}}{\mbox{vol}(\ensuremath{\mathcal K})} (1-\delta)^T
\prod_{\tau=1}^T h_\tau(\mathbf{x}^\star) \geq
{\delta^d}(1-\delta)^T\prod_{\tau=1}^T h_\tau(\mathbf{x}^\star).$$ We
can now simplify by taking logarithms and changing sides,
$$\begin{aligned}
\ensuremath{\mathrm{{Regret}}}_T(EWOO) & = \sum_t f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^\star) \\
& = \frac{1}{\alpha} \log \frac { \prod_{\tau=1}^T h_\tau(\mathbf{x}^\star)} {\prod_{\tau=1}^T h_\tau(\mathbf{x}_\tau) } \\
& \leq \frac{1}{\alpha} \left( d \log \frac{1}{\delta} + T \log \frac{1}{1-\delta} \right) \leq \frac{d}{\alpha} \log T + \frac{2}{\alpha} ,
\end{aligned}$$ where the last step is by choosing
$\delta = \frac{1}{T}$. ◻
:::
## The Online Newton Step Algorithm {#section:ons}
Thus far we have only considered first order methods for regret
minimization. In this section we consider a quasi-Newton approach, i.e.,
an online convex optimization algorithm that approximates the second
derivative, or Hessian in more than one dimension. However, strictly
speaking, the algorithm we analyze is also first order, in the sense
that it only uses gradient information.
The algorithm we introduce and analyze, called online Newton step, is
detailed in Algorithm [\[alg:ons\]](#alg:ons){reference-type="ref"
reference="alg:ons"}. At each iteration, this algorithm chooses a vector
that is the projection of the sum of the vector chosen at the previous
iteration and an additional vector. Whereas for the online gradient
descent algorithm this added vector was the gradient of the previous
cost function, for online Newton step this vector is different: it is
reminiscent to the direction in which the Newton-Raphson method would
proceed if it were an offline optimization problem for the previous cost
function. The Newton-Raphson algorithm would move in the direction of
the vector which is the inverse Hessian times the gradient. In online
Newton step, this direction is $A_t^{-1} \nabla_t$, where the matrix
$A_t$ is related to the Hessian as will be shown in the analysis.
Since adding a multiple of the Newton vector $A_t^{-1} \nabla_t$ to the
current vector may result in a point outside the convex set, an
additional projection step is required to obtain
$\ensuremath{\mathbf x}_t$, the decision at time $t$. This projection is
different than the standard Euclidean projection used by online gradient
descent in Section [3.1](#section:ogd){reference-type="ref"
reference="section:ogd"}. It is the projection according to the norm
defined by the matrix $A_t$, rather than the Euclidean norm.
::: algorithm
::: algorithmic
Input: convex set $\ensuremath{\mathcal K}$, $T$,
$\ensuremath{\mathbf x}_1 \in \mathcal{K} \subseteq {\mathbb R}^n$,
parameters $\gamma,\varepsilon > 0$, $A_0 = \varepsilon \mathbf{I}_n$
For $t = 1$ to $T$:
Play $\ensuremath{\mathbf x}_t$ and observe cost
$f_t(\ensuremath{\mathbf x}_t)$. Rank-1 update:
${A}_t = {A}_{t-1} + \nabla_t \nabla_t^\top$. Newton step and generalized
projection:
$$\ensuremath{\mathbf y}_{t+1} = \mathbf{x}_{t} - \frac{1}{\gamma} {A}_{t}^{-1} \nabla_{t}$$
$$\mathbf{x}_{t+1} = \mathop{\Pi}_\ensuremath{\mathcal K}^{{A}_t} (\ensuremath{\mathbf y}_{t+1}) = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \|\ensuremath{\mathbf y}_{t+1} - \ensuremath{\mathbf x}\|^2_{{A}_t} \right\}$$
End for
:::
:::
The advantage of the online Newton step algorithm is its logarithmic
regret guarantee for exp-concave functions, as defined in the previous
section. The following theorem bounds the regret of online Newton step.
::: {#thm:onsregret .theorem}
**Theorem 4.5**. *Algorithm [\[alg:ons\]](#alg:ons){reference-type="ref"
reference="alg:ons"} with parameters
$\gamma = \frac{1}{2}\min\{\frac{1}{GD},\alpha\}$,
$\varepsilon = \frac{1}{\gamma^2 D^2}$ and $T \geq 4$ guarantees
$$\ensuremath{\mathrm{{Regret}}}_T \ \leq\ 2 \left(\frac{1}{\alpha} + GD\right) n
\log T.$$*
:::
As a first step, we prove the following lemma.
::: {#lemma:onsbound .lemma}
**Lemma 4.6**. *The regret of online Newton step is bounded by
$$\ensuremath{\mathrm{{Regret}}}_T(\text{ONS})\ \leq\ \left(\frac{1}{\alpha}
+ GD\right) \left(\sum_{t=1}^T \nabla_t^\top {A}_t^{-1} \nabla_t + 1\right)$$*
:::
::: proof
*Proof.* Let
$\mathbf{x}^\star \in \arg\min_{\mathbf{x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\mathbf{x})$
be the best decision in hindsight. By Lemma
[4.3](#lem:quadratic_approximation2){reference-type="ref"
reference="lem:quadratic_approximation2"}, we have for
$\gamma = \frac{1}{2}\min\{\frac{1}{GD},\alpha\}$,
$$f_t(\mathbf{x}_t ) - f_t(\mathbf{x}^\star)\ \leq\ R_t ,$$ where we
define $$R_t \stackrel{\text{\tiny def}}{=}\ \nabla_t^\top
(\mathbf{x}_t - \mathbf{x}^\star) - \frac{\gamma}{2} (\mathbf{x}^\star - \mathbf{x}_t)^\top \nabla_t
\nabla_t^\top (\mathbf{x}^\star - \mathbf{x}_t) .$$ According to the
update rule of the algorithm
$\mathbf{x}_{t+1} = \mathop{\Pi}_{\ensuremath{\mathcal K}}^{{A}_t}(\mathbf{y}_{t+1})$.
Now, by the definition of $\mathbf{y}_{t+1}$: $$\label{eq:update-rule}
\mathbf{y}_{t+1} - \mathbf{x}^\star = \mathbf{x}_{t} - \mathbf{x}^\star - \frac{1}{\gamma} {A}_t^{-1}
\nabla_t, \text{ and}$$ $$\label{eq:A_t-multiply}
{A}_t (\mathbf{y}_{t+1} - \mathbf{x}^\star) = {A}_t(\mathbf{x}_t - \mathbf{x}^\star) - \frac{1}{\gamma}
\nabla_t.$$ Multiplying the transpose of
[\[eq:update-rule\]](#eq:update-rule){reference-type="eqref"
reference="eq:update-rule"} by
[\[eq:A_t-multiply\]](#eq:A_t-multiply){reference-type="eqref"
reference="eq:A_t-multiply"} we get $$\begin{gathered}
(\mathbf{y}_{t+1} - \mathbf{x}^\star)^\top {A}_t(\mathbf{y}_{t+1} - \mathbf{x}^\star) = \notag \\
(\mathbf{x}_t\! -\! \mathbf{x}^\star)^\top {A}_t(\mathbf{x}_t\! -\! \mathbf{x}^\star) -
\frac{2}{\gamma} \nabla_t^\top (\mathbf{x}_t\! -\! \mathbf{x}^\star) +
\frac{1}{\gamma^2} \nabla_t^\top {A}_t^{-1} \nabla_t.
\label{eq:multiplied}
\end{gathered}$$ Since $\mathbf{x}_{t+1}$ is the projection of
$\mathbf{y}_{t+1}$ in the norm induced by ${A}_t$, we have by the
Pythagorean theorem (see §[2.1.1](#sec:projections){reference-type="ref"
reference="sec:projections"}) $$\begin{aligned}
(\mathbf{y}_{t+1} - \mathbf{x}^\star)^\top {A}_t(\mathbf{y}_{t+1} - \mathbf{x}^\star) & = \| \mathbf{y}_{t+1} - \mathbf{x}^\star \|_{{A}_t}^2 \\
& \ge \| \mathbf{x}_{t+1} - \mathbf{x}^\star \|_{{A}_t}^2 \\
& = (\mathbf{x}_{t+1} - \mathbf{x}^\star)^\top {A}_t(\mathbf{x}_{t+1} - \mathbf{x}^\star ).
\end{aligned}$$ This inequality is the reason for using generalized
projections as opposed to standard projections, which were used in the
analysis of online gradient descent (see
§[3.1](#section:ogd){reference-type="ref" reference="section:ogd"}
Equation [\[eqn:ogdtriangle\]](#eqn:ogdtriangle){reference-type="eqref"
reference="eqn:ogdtriangle"}). This fact together with
[\[eq:multiplied\]](#eq:multiplied){reference-type="eqref"
reference="eq:multiplied"} gives $$\begin{aligned}
\nabla_t^\top (\mathbf{x}_t \! -\! \mathbf{x}^\star) &\leq \ \frac{1}{2\gamma}
\nabla_t^\top {A}_t^{-1} \nabla_t + \frac{\gamma}{2} (\mathbf{x}_t\! -\!
\mathbf{x}^\star)^\top {A}_t (\mathbf{x}_t\! -\! \mathbf{x}^\star) \\
& - \frac{\gamma}{2}
(\mathbf{x}_{t+1} - \mathbf{x}^\star)^\top {A}_t(\mathbf{x}_{t+1} - \mathbf{x}^\star).
\end{aligned}$$ Now, summing up over $t=1$ to $T$ we get that
$$\begin{aligned}
&\sum_{t=1}^T \nabla_t^\top (\mathbf{x}_t - \mathbf{x}^\star)
\leq \frac{1}{2\gamma} \sum_{t=1}^T \nabla_t^\top {A}_t^{-1} \nabla_t +
\frac{\gamma}{2} (\mathbf{x}_{1} - \mathbf{x}^\star)^\top {A}_1 (\mathbf{x}_{1} - \mathbf{x}^\star) \\
&\quad + \frac{\gamma}{2} \sum_{t=2}^{T} (\mathbf{x}_t - \mathbf{x}^\star)^\top
({A}_t - {A}_{t-1}) (\mathbf{x}_t - \mathbf{x}^\star) \\
& \quad - \frac{\gamma}{2} (\mathbf{x}_{T+1} - \mathbf{x}^\star)^\top {A}_T (\mathbf{x}_{T+1} - \mathbf{x}^\star) \\
&\leq \frac{1}{2\gamma} \sum_{t=1}^T \nabla_t^\top {A}_t^{-1}
\nabla_t+ \frac{\gamma}{2} \sum_{t=1}^{T} (\mathbf{x}_t\! -\! \mathbf{x}^\star)^\top
\nabla_t \nabla_t^\top (\mathbf{x}_t\! -\! \mathbf{x}^\star) \\
& + \frac{\gamma}{2}
(\mathbf{x}_{1} - \mathbf{x}^\star)^\top ({A}_1 - \nabla_1\nabla_1^\top) (\mathbf{x}_{1} -
\mathbf{x}^\star).
\end{aligned}$$ In the last inequality we use the fact that
$A_t - A_{t-1} =
\nabla_t \nabla_t^\top$, and the fact that the matrix $A_T$ is PSD and
hence the last term before the inequality is negative. Thus,
$$\sum_{t=1}^T R_t\ \leq\ \frac{1}{2 \gamma } \sum_{t=1}^T
\nabla_t^\top {A}_t^{-1} \nabla_t + \frac{\gamma}{2} (\mathbf{x}_{1} -
\mathbf{x}^\star)^\top ({A}_1 - \nabla_1\nabla_1^\top) (\mathbf{x}_{1} - \mathbf{x}^\star).$$
Using the algorithm parameters
${A}_1 - \nabla_1 \nabla_1^\top = \varepsilon
\mathbf{I}_n$ , $\varepsilon = \frac{1}{\gamma^2 D^2}$ and our notation
for the diameter $\|\mathbf{x}_1 - \mathbf{x}^\star\|^2 \leq D^2$ we
have $$\begin{aligned}
\ensuremath{\mathrm{{Regret}}}_T(\text{\em ONS})\ & \leq\ & \sum_{t=1}^T R_t\
\leq\ \frac{1}{2 \gamma } \sum_{t=1}^T \nabla_t^\top {A}_t^{-1}
\nabla_t + \frac{ \gamma }{2} {D^2}{\varepsilon} \\
& \leq & \frac{1}{2 \gamma } \sum_{t=1}^T \nabla_t^\top {A}_t^{-1}
\nabla_t + \frac{1}{2 \gamma}.
\end{aligned}$$ Since $\gamma = \frac{1}{2}\min\{\frac{1}{GD},\alpha\}$,
we have $\frac{1}{\gamma} \leq 2( \frac{1}{\alpha} + GD)$. This gives
the lemma. ◻
:::
We can now prove Theorem [4.5](#thm:onsregret){reference-type="ref"
reference="thm:onsregret"}.
::: proof
*Proof of Theorem [4.5](#thm:onsregret){reference-type="ref"
reference="thm:onsregret"}.* First we show that the term
$\sum_{t=1}^T \nabla_t^\top {A}_t^{-1}
\nabla_t$ is upper bounded by a telescoping sum. Notice that
$$\nabla_t^\top {A}_t^{-1} \nabla_t = A_t^{-1} \bullet \nabla_t \nabla_t^\top = A_t^{-1} \bullet (A_{t} - A_{t-1})$$
where for matrices $A,B \in {\mathbb R}^{n \times n}$ we denote by $A
\bullet B = \sum_{i = 1}^n \sum_{j=1}^nA_{ij} B_{ij} = {\bf Tr}(AB^\top)$,
which is equivalent to the inner product of these matrices as vectors in
${\mathbb R}^{n^2}$.
For real numbers $a,b \in {\mathbb R}_+$, the first order Taylor
expansion of the logarithm of $b$ at $a$ implies
$a^{-1} (a-b) \leq \log \frac{a}{b}$. An analogous fact holds for
positive semidefinite matrices, i.e., $A^{-1} \bullet (A-B)
\leq \log \frac{|A|}{|B|}$, where $|A|$ denotes the determinant of the
matrix $A$ (this is proved in Lemma
[4.7](#lem:logdet){reference-type="ref" reference="lem:logdet"}). Using
this fact we have $$\begin{aligned}
\sum_{t=1}^T \nabla_t^\top {A}_t^{-1} \nabla_t & = & \sum_{t=1}^T
A_t^{-1} \bullet \nabla_t \nabla_t^\top \\
& = & \sum_{t=1}^T A_t^{-1} \bullet (A_{t} - A_{t-1}) \\
& \leq & \sum_{t=1}^T \log \frac{ |A_t|} {|A_{t-1}|} = \log
\frac{|A_T|}{|A_0|}.
\end{aligned}$$
Since $A_T = \sum_{t=1}^T \nabla_t\nabla_t^\top + \varepsilon I_n$ and
$\|\nabla_t\| \leq G$, the largest eigenvalue of $A_T$ is at most
$T G^2 + \varepsilon$. Hence the determinant of $A_T$ can be bounded by
$|A_T | \leq (T G^2 + \varepsilon)^n$. Hence recalling that
$\varepsilon = \frac{1}{\gamma^2D^2}$ and $\gamma =
\frac{1}{2}\min\{\frac{1}{GD},\alpha\}$, for $T \geq 4$,
$$\begin{aligned}
\sum_{t=1}^T \nabla_t^\top {A}_t^{-1} \nabla_t\ & \leq\ \log
\left( \frac{T G^2 + \varepsilon}{\varepsilon }\right)^n \leq n
\log (TG^2 \gamma^2 D^2 + 1) \leq n \log T.
\end{aligned}$$ Plugging into Lemma
[4.6](#lemma:onsbound){reference-type="ref" reference="lemma:onsbound"}
we obtain
$$\ensuremath{\mathrm{{Regret}}}_T(\text{ONS})\ \leq\ \left(\frac{1}{\alpha} + GD\right) (n \log T + 1),$$
which implies the theorem for $n > 1, \ T \geq 4$. ◻
:::
It remains to prove the technical lemma for positive semidefinite (PSD)
matrices used above.
::: {#lem:logdet .lemma}
**Lemma 4.7**. *Let $A \succcurlyeq B \succ 0$ be positive definite
matrices. Then $$A^{-1} \bullet (A - B) \ \leq\ \log \frac{|A|}{|B|}$$*
:::
::: proof
*Proof.* For any positive definite matrix $C$, denote by
$\lambda_1(C), \ldots, \lambda_n(C)$ its eigenvalues (which are
positive). $$\begin{aligned}
& A^{-1} \bullet (A - B) \ =\ {\bf Tr}(A^{-1} (A - B)) \\
& = {\bf Tr}(A^{-1/2} (A - B) A^{-1/2}) & {\bf Tr}(XY) = {\bf Tr}(YX) \\
& = {\bf Tr}(I - A^{-1/2} B A^{-1/2}) \\
& = \sum_{i=1}^n \left[ 1 - \lambda_i( A^{-1/2} B A^{-1/2}) \right] & {\bf Tr}(C) = \sum_{i=1}^n \lambda_i(C) \\
& \leq - \sum_{i=1}^n \log \left[ \lambda_i( A^{-1/2} B
A^{-1/2}) \right] & 1-x \leq -\log(x) \\
& = - \log \left[ \prod_{i=1}^n \lambda_i( A^{-1/2} B
A^{-1/2}) \right] \\
& = - \log | A^{-1/2} B A^{-1/2}| = \log \frac{|A|}{|B|} &
|C| = \prod_{i=1}^n \lambda_i(C)
\end{aligned}$$ In the last equality we use the facts $|AB| = |A||B|$
and $|A^{-1}| = \frac{1}{|A|}$ for positive definite matrices (see
exercises). ◻
:::
##### Implementation and running time. {#implementation-and-running-time. .unnumbered}
The online Newton step algorithm requires $O(n^2)$ space to store the
matrix $A_t$. Every iteration requires the computation of the matrix
$A_{t}^{-1}$, the current gradient, a matrix-vector product, and
possibly a projection onto the underlying convex set
$\ensuremath{\mathcal K}$.
A naïve implementation would require computing the inverse of the matrix
$A_t$ on every iteration. However, in the case that $A_t$ is invertible,
the matrix inversion lemma (see bibliography) states that for invertible
matrix $A$ and vector $\mathbf{x}$,
$$(A + \mathbf{x}\mathbf{x}^\top)^{-1} = A^{-1} - \frac{A^{-1} \mathbf{x}\mathbf{x}^\top A^{-1}}{1 + \mathbf{x}^\top A^{-1} \mathbf{x}}.$$
Thus, given $A_{t-1}^{-1}$ and $\nabla_t$ one can compute $A_t^{-1}$ in
time $O(n^2)$ using only matrix-vector and vector-vector products.
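A direct transcription of this rank-1 inverse update is sketched below; it assumes $A$ is symmetric, as the matrices $A_t$ are.

```python
import numpy as np

def rank_one_inverse_update(A_inv, g):
    """Return (A + g g^T)^{-1} given A^{-1}, in O(n^2) time.

    Uses the matrix inversion lemma; since A is symmetric,
    A^{-1} g g^T A^{-1} = (A^{-1} g)(A^{-1} g)^T.
    """
    Ag = A_inv @ g
    return A_inv - np.outer(Ag, Ag) / (1.0 + g @ Ag)
```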
The online Newton step algorithm also needs to make projections onto
$\ensuremath{\mathcal K}$, but of a slightly different nature than
online gradient descent and other online convex optimization algorithms.
The required projection, denoted by
$\mathop{\Pi}_\ensuremath{\mathcal K}^{A_t}$, is in the vector norm
induced by the matrix $A_t$, viz.
$\|\mathbf{x}\|_{A_t} = \sqrt{\mathbf{x}^\top A_t \mathbf{x}}$. It is
equivalent to finding the point $\mathbf{x}\in \ensuremath{\mathcal K}$
which minimizes
$(\mathbf{x}- \mathbf{y})^\top A_t(\mathbf{x}- \mathbf{y})$ where
$\ensuremath{\mathbf y}$ is the point we are projecting. This is a
convex program which can be solved up to any degree of accuracy in
polynomial time.
Modulo the computation of generalized projections, the online Newton
step algorithm can be implemented in time and space $O(n^2)$. In
addition, the only information required is the gradient at each step
(and the exp-concavity constant $\alpha$ of the loss functions).
## Bibliographic Remarks {#bibliographic-remarks-1}
The Geometric Brownian Motion model for stock prices was suggested and
studied as early as 1900 in the PhD thesis of Louis Bachelier
[@bachelier], see also [@osborne], and used in the Nobel Prize winning
work of Black and Scholes on options pricing [@black-scholes]. In a
strong deviation from standard financial theory, Thomas Cover [@cover]
put forth the universal portfolio model, whose algorithmic theory we
have historically sketched in chapter
[1](#chap:intro){reference-type="ref" reference="chap:intro"}. The EWOO
algorithm was essentially given in Cover's paper for the application of
portfolio selection and logarithmic loss functions, and extended to
exp-concave loss functions in [@HazanKKA06]. The randomized extension of
Cover's algorithm that runs in polynomial running time is due to
@KalaiVempalaPortfolios, and it naturally extends to EWOO.
Some bridges between classical portfolio theory and the universal model
appear in [@AbernethyStoc12]. Options pricing and its relation to regret
minimization was recently also explored in the work of [@DKM-options].
Exp-concave functions have been considered in the context of prediction
in [@kivinen-warmuth], see also [@CesaBianchiLugosi06book] (chapter 3.3
and bibliography). A more general condition than exp-concavity called
mixability was used by @vovk1990aggregating to give a general
multiplicative update algorithm, see also [@foster2018logistic]. For a
thorough discussion of various conditions that allow logarithmic regret
in online learning see [@van2015fast].
For the square-loss, [@Azoury] gave a specially tailored and
near-optimal prediction algorithm. Logarithmic regret algorithms for
online convex optimization and the Online Newton Step algorithm were
presented in [@HAK07].
Logarithmic regret algorithms were used to derive
$\tilde{O}(\frac{1}{\varepsilon})$-convergent algorithms for non-smooth
convex optimization in the context of training support vector machines
in [@Shalev-ShwartzSSC11]. Building upon these results, tight
convergence rates of SGD for strongly convex and non-smooth functions
were obtained in [@hazan:beyond].
The Sherman-Morrison formula, a.k.a. the matrix inversion lemma, gives
the form of the inverse of a matrix after a rank-1 update, see
[@pseudoinverse].
## Exercises
# Regularization {#chap:regularization}
In the previous chapters we have explored algorithms for OCO that are
motivated by convex optimization. However, unlike convex optimization,
the OCO framework optimizes the Regret performance metric. This
distinction motivates a family of algorithms, called "Regularized Follow
The Leader" (RFTL), which we introduce in this chapter.
In an OCO setting of regret minimization, the most straightforward
approach for the online player is to use at any time the optimal
decision (i.e., point in the convex set) in hindsight. Formally, let
$$\ensuremath{\mathbf x}_{t+1} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \sum_{\tau=1}^{t} f_\tau(\ensuremath{\mathbf x}).$$
This flavor of strategy is known as "fictitious play" in economics, and
has been named "Follow the Leader" (FTL) in machine learning. It is not
hard to see that this simple strategy fails miserably in a worst-case
sense. That is, this strategy's regret can be linear in the number of
iterations, as the following example shows: Consider
$\ensuremath{\mathcal K}= [-1,1]$, let $f_1(x) = \frac{1}{2} x$, and let
$f_\tau$ for $\tau=2 , \ldots , T$ alternate between $-x$ and $x$. Thus,
$$\sum_{\tau=1}^t f_\tau(x) = {
\left\{
\begin{array}{ll}
{ \frac{1}{2} x }, & {t \mbox{ is odd} } \\\\
{-\frac{1}{2} x }, & {\text{otherwise}}
\end{array}
\right. }$$ The FTL strategy will keep shifting between $x_t = -1$ and
$x_t = 1$, always making the wrong choice.
The intuitive FTL strategy fails in the example above because it is
unstable. Can we modify the FTL strategy such that it won't change
decisions often, thereby causing it to attain low regret?
This question motivates the need for a general means of stabilizing the
FTL method. Such a means is referred to as "regularization".
## Regularization Functions
In this chapter we consider regularization functions, denoted
$R : \ensuremath{\mathcal K}\mapsto {\mathbb R}$, which are strongly
convex and smooth (recall definitions in
§[2.1](#sec:optdefs){reference-type="ref" reference="sec:optdefs"}).
Although it is not strictly necessary, we assume that the regularization
functions in this chapter are twice differentiable over
$\ensuremath{\mathcal K}$ and, for all points
$\ensuremath{\mathbf x}\in \text{int}(\ensuremath{\mathcal K})$ in the
interior of the decision set, have a Hessian
$\nabla^2 R(\ensuremath{\mathbf x})$ that is, by the strong convexity of
$R$, positive definite.
We denote the diameter of the set $\ensuremath{\mathcal K}$ relative to
the function $R$ as
$$D_R = \sqrt{ \max_{\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}} \{ R(\ensuremath{\mathbf x}) - R(\ensuremath{\mathbf y}) \}} .$$
Henceforth we make use of general norms and their dual. The dual norm to
a norm $\| \cdot \|$ is given by the following definition:
$$\| \ensuremath{\mathbf y}\|^* \stackrel{\text{\tiny def}}{=}\sup_{ \| \ensuremath{\mathbf x}\| \leq 1 } \left\{ \ensuremath{\mathbf x}^\top \ensuremath{\mathbf y}\right\} .$$
A positive definite matrix $A$ gives rise to the matrix norm
$\|\ensuremath{\mathbf x}\|_A = \sqrt{\ensuremath{\mathbf x}^\top A \ensuremath{\mathbf x}}$.
The dual norm of a matrix norm is
$\|\ensuremath{\mathbf x}\|_A^*=\|\ensuremath{\mathbf x}\|_{A^{-1}}$.
The generalized Cauchy-Schwarz theorem asserts
$\ensuremath{\mathbf x}^\top \ensuremath{\mathbf y}\leq \| \ensuremath{\mathbf x}\| \| \ensuremath{\mathbf y}\|^*$
and in particular for matrix norms,
$\ensuremath{\mathbf x}^\top \ensuremath{\mathbf y}\leq \|\ensuremath{\mathbf x}\|_A \| \ensuremath{\mathbf y}\|_A^*$
(see exercises).
In our derivations, we usually consider matrix norms with respect to
$\nabla^2R(\ensuremath{\mathbf x})$, the Hessian of the regularization
function $R(\ensuremath{\mathbf x})$, as well as the inverse Hessian
denoted $\nabla^{-2} R(\ensuremath{\mathbf x})$. In such cases, we use
the notation
$$\|\ensuremath{\mathbf x}\|_\ensuremath{\mathbf y}\stackrel{\text{\tiny def}}{=}\|\ensuremath{\mathbf x}\|_{\nabla^2 {R}(\ensuremath{\mathbf y})} ,$$
and similarly
$$\|\ensuremath{\mathbf x}\|_\ensuremath{\mathbf y}^* \stackrel{\text{\tiny def}}{=}\|\ensuremath{\mathbf x}\|_{\nabla^{-2} {R}(\ensuremath{\mathbf y})} .$$
A crucial quantity in the analysis of OCO algorithms that use
regularization is the remainder term of the Taylor approximation of the
regularization function, and especially the remainder term of the first
order Taylor approximation. The difference between the value of the
regularization function at $\ensuremath{\mathbf x}$ and the value of the
first order Taylor approximation is known as the Bregman divergence,
given by
::: definition
**Definition 5.1**. *Denote by
$B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y})$ the Bregman
divergence with respect to the function ${R}$, defined as
$$B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y}) = {R}(\ensuremath{\mathbf x}) - {R}(\ensuremath{\mathbf y}) - \nabla {R}(\ensuremath{\mathbf y})^\top (\ensuremath{\mathbf x}-\ensuremath{\mathbf y}) .$$*
:::
For twice differentiable functions, Taylor expansion and the mean-value
theorem assert that the Bregman divergence equals half the squared norm
induced by the Hessian at an intermediate point, i.e., (see exercises)
$$B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y}) = \frac{1}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf y}\|_\ensuremath{\mathbf z}^2,$$
for some point
$\ensuremath{\mathbf z}\in [\ensuremath{\mathbf x},\ensuremath{\mathbf y}]$,
meaning there exists some $\alpha \in [0,1]$ such that
$\ensuremath{\mathbf z}= \alpha \ensuremath{\mathbf x}+ (1-\alpha) \ensuremath{\mathbf y}$.
Therefore, the Bregman divergence defines a local norm, which has a dual
norm. We shall denote this dual norm by
$$\| \cdot \|_{\ensuremath{\mathbf x},\ensuremath{\mathbf y}}^* \stackrel{\text{\tiny def}}{=}\| \cdot \|_\ensuremath{\mathbf z}^*.$$
With this notation we have
$$B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y}) = \frac{1}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf y}\|_{\ensuremath{\mathbf x},\ensuremath{\mathbf y}} ^2.$$
In online convex optimization, we commonly refer to the Bregman
divergence between two consecutive decision points
$\ensuremath{\mathbf x}_t$ and $\ensuremath{\mathbf x}_{t+1}$. In such
cases, we use shorthand notation for the norm defined by the Bregman
divergence with respect to ${R}$ at the intermediate point in
$[\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}_{t+1}]$ as
$\| \cdot \|_t \stackrel{\text{\tiny def}}{=}\| \cdot \|_{\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}_{t+1}}$.
The latter norm is called the local norm at iteration $t$. With this
notation, we have
$B_{R}(\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1}) = \frac{1}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_{t+1}\|_t^2$.
Finally, we consider below generalized projections that use the Bregman
divergence as a distance instead of a norm. Formally, the projection of
a point $\ensuremath{\mathbf y}$ according to the Bregman divergence
with respect to function $R$ is given by
$$\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y}) .$$
## The RFTL Algorithm and its Analysis
Recall the caveat with straightforward use of follow-the-leader: as in
the bad example we have considered, the predictions of FTL may vary
wildly from one iteration to the next. This motivates the modification
of the basic FTL strategy in order to stabilize the prediction. By
adding a regularization term, we obtain the RFTL (Regularized Follow the
Leader) algorithm.
We proceed to formally describe the RFTL algorithmic template and
analyze it. The analysis gives asymptotically optimal regret bounds.
However, we do not optimize the constants in the regret bounds in order
to improve clarity of presentation.
Throughout this chapter, recall the notation of $\nabla_t$ to denote the
gradient of the current cost function at the current point, i.e.,
$$\nabla_t \stackrel{\text{\tiny def}}{=}\nabla f_t(\ensuremath{\mathbf x}_t) .$$
In the OCO setting, the regret of convex cost functions can be bounded
by a linear function via the inequality
$f_t(\ensuremath{\mathbf x_{t}}) - f_t(\ensuremath{\mathbf x}^\star) \leq \nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x}^\star)$.
Thus, the overall regret (recall definition
[\[eqn:regret-defn\]](#eqn:regret-defn){reference-type="eqref"
reference="eqn:regret-defn"}) of an OCO algorithm can be bounded by
$$\label{eqn:rftl-shalom}
\sum_t f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^\star) \leq \sum_t \nabla_t^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^\star).$$
### Meta-algorithm definition
The generic RFTL meta-algorithm is defined in Algorithm
[\[alg:RFTLmain\]](#alg:RFTLmain){reference-type="ref"
reference="alg:RFTLmain"}. The regularization function ${R}$ is assumed
to be strongly convex, smooth, and twice differentiable.
::: algorithm
::: algorithmic
Input: $\eta > 0$, regularization function ${R}$, and a bounded, convex
and closed set $\ensuremath{\mathcal K}$. Let
$\ensuremath{\mathbf x_{1}} = \arg\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} {\left\{ {R}(\ensuremath{\mathbf x})\right\} }$.
Play $\ensuremath{\mathbf x}_t$ and observe cost
$f_t(\ensuremath{\mathbf x}_t)$. Update $$\begin{aligned}
\ensuremath{\mathbf x_{t+1}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} {\left\{\eta\sum_{s=1}^t \nabla_s^\top \ensuremath{\mathbf x}+ {R}(\ensuremath{\mathbf x})\right\}}
\end{aligned}$$
:::
:::
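As an illustration only, here is a minimal Python sketch of the RFTL
template, assuming the decision set is the Euclidean unit ball and
$R(\mathbf{x}) = \frac{1}{2}\|\mathbf{x}\|^2$, in which case the inner
$\arg\min$ has a closed form: project $-\eta \sum_{s \le t} \nabla_s$
onto the ball. The names (`rftl_euclidean`, `grad_oracle`) and the toy
linear losses are our own assumptions.

```python
import numpy as np

def rftl_euclidean(grad_oracle, T, dim, eta):
    """RFTL sketch with R(x) = 0.5 * ||x||^2 and K = the Euclidean unit ball.
    grad_oracle(t, x) returns the gradient of f_t at x.  For this regularizer
    the update argmin_{x in K} { eta * <g_sum, x> + R(x) } is the Euclidean
    projection of -eta * g_sum onto the ball."""
    def project_ball(y):
        norm = np.linalg.norm(y)
        return y if norm <= 1.0 else y / norm

    g_sum = np.zeros(dim)
    x = np.zeros(dim)                 # argmin of R over the ball
    plays = []
    for t in range(T):
        plays.append(x.copy())
        g_sum += grad_oracle(t, x)    # play x_t, observe the cost, accumulate gradients
        x = project_ball(-eta * g_sum)
    return plays

if __name__ == "__main__":
    # toy usage: linear losses f_t(x) = c_t . x with random cost vectors
    rng = np.random.default_rng(0)
    costs = rng.normal(size=(100, 5))
    plays = rftl_euclidean(lambda t, x: costs[t], T=100, dim=5, eta=0.1)
    print(plays[-1])
```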
### The regret bound {#sec:thm1.1}
::: {#thm:RFTLmain1 .theorem}
**Theorem 5.2**. *The RFTL Algorithm
[\[alg:RFTLmain\]](#alg:RFTLmain){reference-type="ref"
reference="alg:RFTLmain"} attains for every
$\ensuremath{\mathbf u}\in \ensuremath{\mathcal K}$ the following bound
on the regret:
$$\ensuremath{\mathrm{{Regret}}}_T \le 2 \eta \sum_{t=1}^T \| \nabla_t \|_t^{* 2} + \frac{R(\ensuremath{\mathbf u}) - R(\ensuremath{\mathbf x}_1)}{\eta } .$$*
:::
If an upper bound on the local norms is known, i.e.,
$\| \nabla_t\|_t^* \leq G_R$ for all times $t$, then we can further
optimize over the choice of $\eta$ to obtain
$$\ensuremath{\mathrm{{Regret}}}_T \leq 2 D_R G_R \sqrt{ 2T } .$$
To prove Theorem [5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"}, we first relate the regret to the
"stability" in prediction. This is formally captured by the following
lemma.
::: {#lem:FTL-BTL .lemma}
**Lemma 5.3**. *Algorithm
[\[alg:RFTLmain\]](#alg:RFTLmain){reference-type="ref"
reference="alg:RFTLmain"} guarantees the following regret bound
$$\begin{aligned}
\ensuremath{\mathrm{{Regret}}}_T \leq \sum_{t=1}^T \nabla_t^\top
(\ensuremath{\mathbf x_{t}}-\ensuremath{\mathbf x_{t+1}}) + \frac{1}{\eta} D_R^2
\end{aligned}$$*
:::
::: proof
*Proof.* For convenience of the derivations, define the functions
$$g_0(\mathbf{x}) \stackrel{\text{\tiny def}}{=}\frac{1}{\eta}R(\mathbf{x}) \ , \ g_t(\mathbf{x}) \stackrel{\text{\tiny def}}{=}\nabla_t^\top \mathbf{x}.$$
By equation
[\[eqn:rftl-shalom\]](#eqn:rftl-shalom){reference-type="eqref"
reference="eqn:rftl-shalom"}, it suffices to bound
$\sum_{t=1}^T [ g_t(\ensuremath{\mathbf x_{t}}) - g_t (\ensuremath{\mathbf u})]$.
As a first step, we prove the following inequality:
::: {#prop:ftl-btl .lemma}
**Lemma 5.4**. *For every
$\ensuremath{\mathbf u}\in \ensuremath{\mathcal K}$,
$$\sum_{t=0}^T g_t(\ensuremath{\mathbf u}) \geq \sum_{t=0}^T g_t(\ensuremath{\mathbf x_{t+1}}) .$$*
:::
::: proof
*Proof.* By induction on $T$:\
**Induction base:**\
By definition, we have that
$\ensuremath{\mathbf x_{1}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \{R(\ensuremath{\mathbf x})\}$,
and thus
$g_0(\ensuremath{\mathbf u}) \ge g_0(\ensuremath{\mathbf x_{1}})$ for
all $\ensuremath{\mathbf u}$.\
**Induction step:**\
Assume that for $T$, we have $$\begin{aligned}
\sum_{t=0}^{T} g_t(\ensuremath{\mathbf u}) \geq \sum_{t=0}^{T} g_t(\ensuremath{\mathbf x_{t+1}} )
\end{aligned}$$ and let us prove the statement for $T+1$. Since
$\ensuremath{\mathbf x_{T+2}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \{ \sum_{t=0}^{T+1} g_t(\ensuremath{\mathbf x})\}$
we have: $$\begin{aligned}
\sum_{t=0}^{T+1} g_t (\ensuremath{\mathbf u}) & \geq & \sum_{t=0}^{T+1} g_t (\ensuremath{\mathbf x_{T+2}}) \\
& = & \sum_{t=0}^{T} g_t (\ensuremath{\mathbf x_{T+2}}) + g_{T+1}(\ensuremath{\mathbf x_{T+2}}) \\
& \geq & \sum_{t=0}^{T} g_t (\ensuremath{\mathbf x_{t+1}}) + g_{T+1}(\ensuremath{\mathbf x_{T+2}}) \\
& = & \sum_{t=0}^{T+1} g_t (\ensuremath{\mathbf x_{t+1}}).
\end{aligned}$$ where in the third line we used the induction hypothesis
for $\ensuremath{\mathbf u}= \ensuremath{\mathbf x_{T+2}}$. ◻
:::
We conclude that $$\begin{aligned}
\sum_{t=1}^T [ g_t(\ensuremath{\mathbf x_{t}}) - g_t (\ensuremath{\mathbf u})] & \leq & \sum_{t=1}^T [g_t(\ensuremath{\mathbf x_{t}}) - g_t (\ensuremath{\mathbf x_{t+1}})] + \left[ g_0(\ensuremath{\mathbf u}) - g_0(\ensuremath{\mathbf x_{1}}) \right] \\
& = & \sum_{t=1}^T g_t(\ensuremath{\mathbf x_{t}}) - g_t (\ensuremath{\mathbf x_{t+1}}) + \frac{1}{\eta} \left[ R(\ensuremath{\mathbf u}) - R(\ensuremath{\mathbf x_{1}}) \right] \\
& \le & \sum_{t=1}^T g_t(\ensuremath{\mathbf x_{t}}) - g_t (\ensuremath{\mathbf x_{t+1}}) + \frac{1}{\eta} D_R^2 .
\end{aligned}$$ ◻
:::
::: proof
*Proof of Theorem [5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"}.* Recall that ${R}(\ensuremath{\mathbf x})$
is a convex function and $\ensuremath{\mathcal K}$ is a convex set.
Denote:
$$\Phi_t(\ensuremath{\mathbf x}) \stackrel{\text{\tiny def}}{=}\eta\sum_{s=1}^t \nabla_s^\top \ensuremath{\mathbf x}+ {R}(\ensuremath{\mathbf x}) .$$
By the Taylor expansion (with its explicit remainder term via the
mean-value theorem) at $\ensuremath{\mathbf x_{t+1}}$, and by the
definition of the Bregman divergence, $$\begin{aligned}
\Phi_t(\ensuremath{\mathbf x_{t}}) & = & \Phi_t(\ensuremath{\mathbf x_{t+1}})
+ (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}})^\top \nabla \Phi_t(\ensuremath{\mathbf x_{t+1}})
+ B_{\Phi_t}(\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1} ) \\
& \geq &
\Phi_t(\ensuremath{\mathbf x_{t+1}}) + B_{\Phi_t} (\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1} ) \\
& = &
\Phi_t(\ensuremath{\mathbf x_{t+1}}) + B_{{R}} (\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1} ).
\end{aligned}$$ The inequality holds since
$\ensuremath{\mathbf x_{t+1}}$ is a minimum of $\Phi_t$ over
$\ensuremath{\mathcal K}$, as in Theorem
[2.2](#thm:optim-conditions){reference-type="ref"
reference="thm:optim-conditions"}. The last equality holds since the
component $\nabla_s^\top \ensuremath{\mathbf x}$ is linear and thus does
not affect the Bregman divergence. Thus, $$\begin{aligned}
\label{eqn:chap5shalom}
B_{R}(\ensuremath{\mathbf x}_t || \ensuremath{\mathbf x}_{t+1}) & \leq & \,\Phi_t(\ensuremath{\mathbf x_{t}}) - \,\Phi_t(\ensuremath{\mathbf x_{t+1}}) \\
& = & \,\ (\Phi_{t-1}(\ensuremath{\mathbf x_{t}}) - \Phi_{t-1}(\ensuremath{\mathbf x_{t+1}})) + \eta \nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}) \notag \\
& \leq & \,\eta \,\nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}) \quad \mbox{($\ensuremath{\mathbf x}_t$ is the minimizer)} \notag
\end{aligned}$$ To proceed, recall the shorthand for the norm induced by
the Bregman divergence with respect to ${R}$ on point
$\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}_{t+1}$ as
$\| \cdot \|_t = \| \cdot \|_{\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}_{t+1}}$.
Similarly for the dual local norm
$\| \cdot \|^*_t = \| \cdot \|^*_{\ensuremath{\mathbf x}_t,\ensuremath{\mathbf x}_{t+1}}$.
With this notation, we have
$B_{R}(\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1}) = \frac{1}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_{t+1}\|_t^2$.
By the generalized Cauchy-Schwarz inequality, $$\begin{aligned}
\nabla_t^\top (\ensuremath{\mathbf x_{t}}-\ensuremath{\mathbf x_{t+1}}) &\leq \|\nabla_t \|_{t}^* \cdot
\|\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} \|_{t} & \mbox{ Cauchy-Schwarz} \\
& = \|\nabla_t \|_{t}^* \cdot
\sqrt{2 B_{R}(\ensuremath{\mathbf x}_t||\ensuremath{\mathbf x}_{t+1}) } \\
& \leq \|\nabla_t \|_{t}^* \cdot \sqrt{2\, \eta\, \nabla_t^\top (\ensuremath{\mathbf x_{t}}-
\ensuremath{\mathbf x_{t+1}}) }. & \eqref{eqn:chap5shalom} \nonumber
\end{aligned}$$ After rearranging we get $$\begin{aligned}
\nabla_t^\top (\ensuremath{\mathbf x_{t}}-\ensuremath{\mathbf x_{t+1}}) &\leq 2\, \eta \, \|\nabla_t \|^{* 2}_{t}.
\end{aligned}$$ Combining this inequality with Lemma
[5.3](#lem:FTL-BTL){reference-type="ref" reference="lem:FTL-BTL"} we
obtain the theorem statement. ◻
:::
## Online Mirror Descent
In the convex optimization literature, "Mirror Descent" refers to a
general class of first order methods generalizing gradient descent.
Online Mirror Descent (OMD) is the online counterpart of this class of
methods. This relationship is analogous to the relationship of online
gradient descent to traditional (offline) gradient descent.
OMD is an iterative algorithm that computes the current decision using a
simple gradient update rule and the previous decision, much like OGD.
The generality of the method stems from the update being carried out in
a "dual" space, where the duality notion is defined by the choice of
regularization: the gradient of the regularization function defines a
mapping from ${\mathbb R}^n$ onto itself, which is a vector field. The
gradient updates are then carried out in this vector field.
For the RFTL algorithm the intuition was straightforward---the
regularization was used to ensure stability of the decision. For OMD,
regularization has an additional purpose: regularization transforms the
space in which gradient updates are performed. This transformation
enables better bounds in terms of the geometry of the space.
The OMD algorithm comes in two flavors: an agile and a lazy version. The
lazy version keeps track of a point in Euclidean space and projects onto
the convex decision set $\ensuremath{\mathcal K}$ only at decision time.
In contrast, the agile version maintains a feasible point at all times,
much like OGD.
::: algorithm
::: algorithmic
Input: parameter $\eta > 0$, regularization function
${R}(\ensuremath{\mathbf x})$. Let $\ensuremath{\mathbf y_{1}}$ be such
that $\nabla {R}(\ensuremath{\mathbf y_{1}}) = \mathbf{0}$ and
$\ensuremath{\mathbf x_{1}} = \arg\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{1}})$.
Play $\ensuremath{\mathbf x_{t}}$. Observe the loss function $f_t$ and
let $\nabla_t = \nabla f_t(\ensuremath{\mathbf x}_t)$. Update
$\ensuremath{\mathbf y}_t$ according to the rule: $$\begin{aligned}
&\text{[Lazy version]}
&\nabla {R}(\ensuremath{\mathbf y_{t+1}}) = \nabla {R}(\ensuremath{\mathbf y_{t}}) - \eta\, \nabla_{t}\\
&\text{[Agile version]}
&\nabla {R}(\ensuremath{\mathbf y_{t+1}}) = \nabla {R}(\ensuremath{\mathbf x_{t}}) - \eta\, \nabla_{t}
\end{aligned}$$ Project according to $B_{R}$:
$$\ensuremath{\mathbf x_{t+1}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t+1}})$$
:::
:::
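The following skeleton is a sketch of both flavors (all function names
are ours): the caller supplies the mirror map $\nabla R$, its inverse,
and a Bregman projection onto $\mathcal K$, and a flag selects whether
the dual update is applied to the projected (agile) or unprojected
(lazy) point.

```python
import numpy as np

def omd(grad_oracle, grad_R, grad_R_inv, bregman_project, x1, T, eta, agile=True):
    """Online Mirror Descent skeleton.  The caller supplies the mirror map grad_R,
    its inverse grad_R_inv, and a Bregman projection onto K.  With agile=True the
    dual update starts from the projected point x_t; with agile=False (lazy) it
    starts from the unprojected point y_t.  For the lazy variant we assume
    grad_R(x1) = 0, so that x1 can double as y_1."""
    x = np.array(x1, dtype=float)
    y = x.copy()
    plays = []
    for t in range(T):
        plays.append(x.copy())
        g = grad_oracle(t, x)
        base = x if agile else y
        y = grad_R_inv(grad_R(base) - eta * g)
        x = bregman_project(y)
    return plays

if __name__ == "__main__":
    # Euclidean instantiation on the unit ball: grad_R is the identity and the
    # Bregman projection is the usual Euclidean projection, recovering OGD.
    ident = lambda v: v
    proj = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
    rng = np.random.default_rng(5)
    costs = rng.normal(size=(100, 3))
    print(omd(lambda t, x: costs[t], ident, ident, proj, np.zeros(3), 100, eta=0.1)[-1])
```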
Both versions can be analyzed to give roughly the same regret bounds as
the RFTL algorithm. In light of what we will see next, this is not
surprising: for linear cost functions, the RFTL and lazy-OMD algorithms
are equivalent!
Thus, we get regret bounds for free for the lazy version. The agile
version can be shown to attain similar regret bounds, and is in fact
superior in certain settings that require adaptivity. This issue is
further explored in chapter [10](#chap:adaptive){reference-type="ref"
reference="chap:adaptive"}. The analysis of the agile version is of
independent interest and we give it below.
### Equivalence of lazy OMD and RFTL
The OMD (lazy version) and RFTL are identical for linear cost functions,
as we show next.
::: lemma
**Lemma 5.5**. *Let $f_1,...,f_T$ be linear cost functions. The lazy OMD
and RFTL algorithms produce identical predictions, i.e.,
$$\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}}) \right\} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}}
\left( \eta \sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf x}+ {R}(\ensuremath{\mathbf x}) \right) .$$*
:::
::: proof
*Proof.* First, observe that the unconstrained minimum
$$\ensuremath{\mathbf x_{t}}^\star \stackrel{\text{\tiny def}}{=}\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in {\mathbb R}^n}
\bigg\{\sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf x}+ \frac{1}{\eta} {R}(\ensuremath{\mathbf x}) \bigg\}$$
satisfies
$$\nabla {R}(\ensuremath{\mathbf x_{t}}^\star) = - \eta \sum_{s=1}^{t-1} \nabla_s.$$
By definition, $\ensuremath{\mathbf y_{t}}$ also satisfies the above
equation, but since ${R}(\ensuremath{\mathbf x})$ is strictly convex,
there is only one solution for the above equation and thus
$\ensuremath{\mathbf y_{t}}= \ensuremath{\mathbf x}^\star_t$. Hence,
$$\begin{aligned}
B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}})\ &=\ {R}(\ensuremath{\mathbf x}) - {R}(\ensuremath{\mathbf y_{t}}) - (\nabla {R}(\ensuremath{\mathbf y_{t}}))^\top (\mathbf{x}-\ensuremath{\mathbf y_{t}})\\
&=\ {R}(\ensuremath{\mathbf x}) - {R}(\ensuremath{\mathbf y_{t}}) + \eta\, \sum_{s=1}^{t-1} \nabla_s^\top (\ensuremath{\mathbf x}-\ensuremath{\mathbf y_{t}})~.
\end{aligned}$$ Since ${R}(\ensuremath{\mathbf y_{t}})$ and
$\sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf y_{t}}$ are
independent of $\ensuremath{\mathbf x}$, it follows that
$B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}})$ is minimized
at the point $\ensuremath{\mathbf x}$ that minimizes
${R}(\ensuremath{\mathbf x}) + \eta\, \sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf x}$
over $\ensuremath{\mathcal K}$ which, in turn, implies that
$$\begin{aligned}
\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}})\ =\
\mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \bigg\{ \sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf x}+
\frac{1}{\eta} {R}(\ensuremath{\mathbf x}) \bigg\}~.
\end{aligned}$$ ◻
:::
### Regret bounds for Mirror Descent
In this subsection we prove regret bounds for the agile version of the
OMD algorithm. The analysis is quite different from that of the lazy
version, and is of independent interest.
::: {#thm:mirrordescent .theorem}
**Theorem 5.6**. *The OMD Algorithm
[\[alg:flpl\]](#alg:flpl){reference-type="ref" reference="alg:flpl"}
attains for every $\ensuremath{\mathbf u}\in \ensuremath{\mathcal K}$
the following bound on the regret:
$$\ensuremath{\mathrm{{Regret}}}_T \le \frac{\eta}{4} \sum_{t=1}^T \| \nabla_t \|_t^{* 2} + \frac{R(\ensuremath{\mathbf u}) - R(\ensuremath{\mathbf x}_1)}{2\eta } .$$*
:::
If an upper bound on the local norms is known, i.e.,
$\| \nabla_t\|_t^* \leq G_R$ for all times $t$, then we can further
optimize over the choice of $\eta$ to obtain
$$\ensuremath{\mathrm{{Regret}}}_T \leq D_R G_R \sqrt{ T } .$$
::: proof
*Proof.* Since the functions $\ensuremath{\mathbf f_{t}}$ are convex,
for any $\ensuremath{\mathbf x}^* \in K$,
$$\ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t) - \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}^*) \leq \nabla \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t)^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^*) .$$
The following property of Bregman divergences follows from the
definition: for any vectors
$\ensuremath{\mathbf x},\ensuremath{\mathbf y},\ensuremath{\mathbf z}$,
$$(\ensuremath{\mathbf x}- \ensuremath{\mathbf y})^\top (\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf z}) - \nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf y})) = B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x},\ensuremath{\mathbf y})-B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x},\ensuremath{\mathbf z}) +
B_\ensuremath{\mathcal R}(\ensuremath{\mathbf y},\ensuremath{\mathbf z}).$$
Combining both observations, $$\begin{aligned}
\ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t) - \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}^*) & \leq \nabla \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t)^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}^*) \\
& = \frac{1}{\eta} (\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf y}_{t+1}) - \nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_{t}))^\top(\ensuremath{\mathbf x}^* - \ensuremath{\mathbf x}_t) \\
& = \frac{1}{\eta} [B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf x_{t}})-B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf y}_{t+1}) + B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1})] \\
& \leq \frac{1}{\eta} [B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf x_{t}})-B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf x}_{t+1}) +
B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1})]
\end{aligned}$$ where the last inequality follows from the generalized
Pythagorean theorem, as $\ensuremath{\mathbf x}_{t+1}$ is the projection
w.r.t the Bregman divergence of $\ensuremath{\mathbf y}_{t+1}$ and
$\ensuremath{\mathbf x}^* \in K$ is in the convex set. Summing over all
iterations, $$\begin{aligned}
\label{eq:general1}
\ensuremath{\mathrm{{Regret}}}& \leq & \frac{1}{\eta} [ B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf x}_1) - B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^*,\ensuremath{\mathbf x}_T) ] + \sum_{t=1}^T \frac{1}{\eta} B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1}) \notag \\
& \leq & \frac{1}{\eta} D^2_R + \sum_{t=1}^T \frac{1}{\eta} B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1})
\end{aligned}$$
We proceed to bound
$B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1})$.
By definition of Bregman divergence, and the generalized Cauchy-Schwartz
inequality, $$\begin{aligned}
B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1}) + B_\ensuremath{\mathcal R}(\ensuremath{\mathbf y}_{t+1},\ensuremath{\mathbf x}_t) &= (\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t) - \nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf y}_{t+1}))^\top (\ensuremath{\mathbf x}_t - \ensuremath{\mathbf y}_{t+1}) \\
&= \eta \nabla \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t)^\top(\ensuremath{\mathbf x}_t - \ensuremath{\mathbf y}_{t+1}) \\
& \leq \eta \| \nabla \ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x}_t) \|^*_t \| \ensuremath{\mathbf x}_t - \ensuremath{\mathbf y}_{t+1} \|_t \\
&\leq \frac{1}{2} \eta^2 G_R^{ 2} + \frac{1}{2} \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf y}_{t+1}\|^2_t.
\end{aligned}$$ where the last inequality follows from
$(a-b)^2 \geq 0$. Thus, we have
$$B_\ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_{t+1}) \leq \frac{1}{2} \eta^2 G_R^2 + \frac{1}{2} \|\ensuremath{\mathbf x}_t -
\ensuremath{\mathbf y}_{t+1}\|^2_t - B_\ensuremath{\mathcal R}(\ensuremath{\mathbf y}_{t+1},\ensuremath{\mathbf x}_t) = \frac{1}{2} \eta^2 G^2_R.$$
Plugging back into Equation
[\[eq:general1\]](#eq:general1){reference-type="eqref"
reference="eq:general1"}, and by non-negativity of the Bregman
divergence, we get
$$\ensuremath{\mathrm{{Regret}}}\leq \frac{1}{2} [\frac{1}{\eta} D^2_R + \frac{1}{2} \eta T G_{R}^{2} ] \leq D_R G_R \sqrt{T} \ ,$$
by taking $\eta = \frac{ D_R}{\sqrt{T} G_R}$. ◻
:::
## Application and Special Cases
In this section we illustrate the generality of the regularization
technique: we show how to derive the two most important and famous
online algorithms---the online gradient descent algorithm and the online
exponentiated gradient (based on the multiplicative update
method)---from the RFTL meta-algorithm.
Other important special cases of the RFTL meta-algorithm are derived
with matrix-norm regularization---namely, the von Neumann entropy
function, and the log-determinant function, as well as self-concordant
barrier regularization---which we shall explore in detail in the next
chapter.
### Deriving online gradient descent
To derive the online gradient descent algorithm, we take
${R}(\ensuremath{\mathbf x}) = \frac{1}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_0\|_2^2$
for an arbitrary $\ensuremath{\mathbf x}_0 \in \ensuremath{\mathcal K}$.
Projection with respect to this divergence is the standard Euclidean
projection (see exercises), and in addition,
$\nabla {R}(\ensuremath{\mathbf x}) = \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_0$.
Hence, the update rule for the OMD Algorithm
[\[alg:flpl\]](#alg:flpl){reference-type="ref" reference="alg:flpl"}
becomes: $$\begin{aligned}
& \ensuremath{\mathbf x_{t}}= \mathop{\Pi}_\ensuremath{\mathcal K}( \ensuremath{\mathbf y_{t}}) , \ \ensuremath{\mathbf y_{t}}= \ensuremath{\mathbf y_{t-1}} - \eta \nabla_{t-1} & \mbox{lazy version} \\
& \ensuremath{\mathbf x_{t}}= \mathop{\Pi}_\ensuremath{\mathcal K}( \ensuremath{\mathbf y_{t}}) , \ \ensuremath{\mathbf y_{t}}= \ensuremath{\mathbf x_{t-1}} - \eta \nabla_{t-1} & \mbox{agile version}
\end{aligned}$$
The latter algorithm is exactly online gradient descent, as described in
Algorithm [\[alg:ogd\]](#alg:ogd){reference-type="ref"
reference="alg:ogd"} in chapter
[3](#chap:first order){reference-type="ref"
reference="chap:first order"}. However, both variants behave very
differently, as explored in chapter
[10](#chap:adaptive){reference-type="ref" reference="chap:adaptive"}
(see also exercises).
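A tiny numerical sketch (our own construction, not from the text) makes
the difference visible on $\mathcal K = [0,1]$: after a run of positive
gradients pushes the unprojected lazy iterate below zero, the lazy
version must first "climb back" before its plays move, while the agile
version reacts as soon as the gradients flip sign.

```python
import numpy as np

def ogd_lazy_vs_agile(grads, eta):
    """Lazy vs. agile online gradient descent on K = [0, 1] (one dimension).
    Returns the two sequences of plays; they differ once the projection is active."""
    clip = lambda v: float(np.clip(v, 0.0, 1.0))
    x_lazy = x_agile = y_lazy = 0.5
    plays_lazy, plays_agile = [], []
    for g in grads:
        plays_lazy.append(x_lazy)
        plays_agile.append(x_agile)
        y_lazy = y_lazy - eta * g          # lazy: update the unprojected point
        x_lazy = clip(y_lazy)
        x_agile = clip(x_agile - eta * g)  # agile: update the projected point
    return plays_lazy, plays_agile

if __name__ == "__main__":
    grads = [1.0] * 8 + [-1.0] * 4          # push below 0, then flip the sign
    lazy, agile = ogd_lazy_vs_agile(grads, eta=0.1)
    print(lazy)    # stays at 0: the unprojected point has to climb back from -0.3
    print(agile)   # moves back up as soon as the gradient flips
```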
Theorem [5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"} gives us the following bound on the regret
(where $D_R, \| \cdot\|_t$ are the diameter and local norm defined with
respect to the regularizer $R$ as defined in the beginning of this
chapter, and $D$ is the Euclidean diameter as defined in chapter
[2](#chap:opt){reference-type="ref" reference="chap:opt"})
$$\ensuremath{\mathrm{{Regret}}}_T \le \frac{1}{\eta } D_R ^2 + 2 \eta \sum_t \| \nabla_t \|_t^{* 2} \leq \frac{1}{2 \eta} D^2 + 2 \eta \sum_t \|\nabla_t \|^2 \leq 2GD \sqrt{ T },$$
where the second inequality follows since for
${R}(\ensuremath{\mathbf x}) = \frac{1}{2} \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_0\|^2$,
the local norm $\|\cdot\|_t$ reduces to the Euclidean norm.
### Deriving multiplicative updates
Let
${R}(\ensuremath{\mathbf x}) = \ensuremath{\mathbf x}^\top \log \ensuremath{\mathbf x}= \sum_i \ensuremath{\mathbf x}_i \log \ensuremath{\mathbf x}_i$
be the negative entropy function, where $\log \ensuremath{\mathbf x}$ is
to be interpreted element-wise. Then
$\nabla {R}(\ensuremath{\mathbf x}) = \mathbf{1}+ \log \ensuremath{\mathbf x}$,
and hence the update rules for the OMD algorithm become:
$$\begin{aligned}
& \ensuremath{\mathbf x_{t}}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}}) , \ \log \ensuremath{\mathbf y_{t}}= \log \ensuremath{\mathbf y_{t-1}} - \eta \nabla_{t-1} & \mbox{lazy version} \\
& \ensuremath{\mathbf x_{t}}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} B_{R}(\ensuremath{\mathbf x}||\ensuremath{\mathbf y_{t}}) , \ \log \ensuremath{\mathbf y_{t}}= \log \ensuremath{\mathbf x_{t-1}} - \eta \nabla_{t-1} & \mbox{agile version}
\end{aligned}$$
With this choice of regularizer, a notable special case is the experts
problem we encountered in §[1.3](#sec:experts){reference-type="ref"
reference="sec:experts"}, for which the decision set
$\ensuremath{\mathcal K}$ is the $n$-dimensional simplex
$\Delta_n = \{ \ensuremath{\mathbf x}\in {\mathbb R}^n_+ \ | \ \sum_i \ensuremath{\mathbf x}_i = 1 \}$.
In this special case, the projection according to the negative entropy
becomes scaling by the $\ell_1$ norm (see exercises), which implies that
both update rules amount to the same algorithm:
$$\ensuremath{\mathbf x}_{t+1}(i) = \frac{\ensuremath{\mathbf x}_t(i) \cdot e^{-\eta \nabla_t(i)}}{\sum_{j=1}^n \ensuremath{\mathbf x}_t(j) \cdot e^{-\eta \nabla_t(j)} },$$
which is exactly the Hedge algorithm from the first chapter!
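For concreteness, a short NumPy sketch of this update over the simplex
follows (the function name and the toy loss sequence are ours); each
round is a multiplicative reweighting by $e^{-\eta \nabla_t(i)}$
followed by $\ell_1$ normalization.

```python
import numpy as np

def entropic_omd(loss_grads, eta):
    """OMD with the negative-entropy regularizer over the probability simplex,
    i.e., the multiplicative-weights / Hedge update.
    loss_grads: array of shape (T, n); loss_grads[t] is the loss vector at round t."""
    T, n = loss_grads.shape
    x = np.full(n, 1.0 / n)          # uniform start: the minimizer of R over the simplex
    plays = []
    for t in range(T):
        plays.append(x.copy())
        w = x * np.exp(-eta * loss_grads[t])
        x = w / w.sum()              # projection w.r.t. negative entropy = l1 rescaling
    return np.array(plays)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    losses = rng.uniform(size=(500, 4))
    losses[:, 2] *= 0.7              # expert 2 is better on average
    xs = entropic_omd(losses, eta=np.sqrt(np.log(4) / 500))
    print(xs[-1])                    # most weight on the expert with smallest cumulative loss
```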
Theorem [5.6](#thm:mirrordescent){reference-type="ref"
reference="thm:mirrordescent"} gives us the following bound on the
regret:
$$\ensuremath{\mathrm{{Regret}}}_T \le 2 \sqrt{ 2 D_R^2 \sum_t \| \nabla_t \|_t^{* 2} } .$$
If the costs per individual expert are in the range $[0,1]$, it can be
shown that $$\|\nabla_t\|_t^* \leq \| \nabla_t \|_\infty \leq 1 = G_R.$$
In addition, when $R$ is the negative entropy function, the diameter
over the simplex can be shown to be bounded by $D_R^2 \leq \log n$ (see
exercises), giving rise to the bound
$$\ensuremath{\mathrm{{Regret}}}_T \le 2 D_R G_R \sqrt{2 T } \leq 2\sqrt{2 T \log n}.$$
For an arbitrary range of costs, we obtain the exponentiated gradient
algorithm described in Algorithm
[\[alg:eg\]](#alg:eg){reference-type="ref" reference="alg:eg"}.
::: algorithm
::: algorithmic
Input: parameter $\eta > 0$. Let
$\ensuremath{\mathbf y_{1}} = \mathbf{1}\ , \ \ensuremath{\mathbf x_{1}} = \frac{\ensuremath{\mathbf y_{1}}}{\|\ensuremath{\mathbf y_{1}}\|_1}$.
Predict $\ensuremath{\mathbf x}_t$. Observe $f_t$, update
$\ensuremath{\mathbf y_{t+1}}(i) = \ensuremath{\mathbf y_{t}}(i) e^{- \eta\, \nabla_{t}(i)}$
for all $i \in [n]$. Project:
$\ensuremath{\mathbf x_{t+1}} = \frac{\ensuremath{\mathbf y_{t+1}}}{\| \ensuremath{\mathbf y_{t+1}} \|_1 }$
:::
:::
The regret achieved by the exponentiated gradient algorithm can be
bounded using the following corollary of Theorem
[5.2](#thm:RFTLmain1){reference-type="ref" reference="thm:RFTLmain1"}:
::: {#cor:eg .corollary}
**Corollary 5.7**. *The exponentiated gradient algorithm with gradients
bounded by $\|\nabla_t\|_\infty \leq G_\infty$ and parameter
$\eta = \sqrt{ \frac{\log n}{ 2 T G_\infty^2 }}$ has regret bounded by
$$\ensuremath{\mathrm{{Regret}}}_T \leq 2 G_\infty \sqrt{2 T \log n}.$$*
:::
## Randomized Regularization {#sec:randomized-regularization}
The connection between stability in decision making and low regret has
motivated our discussion of regularization thus far. However, this
stability need not be achieved only using strongly convex regularization
functions. An alternative method to achieve stability in decisions is by
introducing randomization into the algorithm. In fact, historically,
this method preceded methods based on strongly convex regularization
(see bibliography).
In this section we first describe a deterministic algorithm for online
convex optimization that is easily amenable to speedup via
randomization. We then give an efficient randomized algorithm for the
special case of OCO with linear losses.
##### Oblivious vs. adaptive adversaries. {#oblivious-vs.-adaptive-adversaries. .unnumbered}
For simplicity, we concern ourselves in this section with a slightly
restricted version of OCO. So far, we have not restricted the cost
functions in any way, and they could depend on the choice of decision by
the online learner. However, when dealing with randomized algorithms,
this issue becomes a bit more subtle: can the cost functions depend on
the randomness of the decision making algorithm itself? Furthermore,
when analyzing the regret, which is now a random variable, dependencies
across different iterations require probabilistic machinery which adds
little to the fundamental understanding of randomized OCO algorithms. To
avoid these complications, we make the following assumption throughout
this section: the cost functions $\{\ensuremath{\mathbf f_{t}}\}$ are
adversarially chosen ahead of time, and do not depend on the actual
decisions of the online learner. This version of OCO is called the
*oblivious* setting, to distinguish it from the *adaptive* setting.
### Perturbation for convex losses
The prediction in Algorithm [\[alg:FPL\]](#alg:FPL){reference-type="ref"
reference="alg:FPL"} is according to a version of the follow-the-leader
algorithm, augmented with an additional component of randomization. It
is a deterministic algorithm that computes the expected decision
according to a random variable. The random variable is the minimizer
over the decision set according to the sum of gradients of the cost
functions and an additional random vector.
In practice, the expectation need not be computed exactly. Estimation
(via random sampling) up to a precision that depends linearly on the
number of iterations would suffice.
The algorithm accepts as input a distribution, with the probability
density function (PDF) denoted ${\mathcal D}$, over vectors in
$n$-dimensional Euclidean space
$\ensuremath{\mathbf n}\in {\mathbb R}^n$. For
$\sigma, L \in {\mathbb R}$, we say that a distribution ${\mathcal D}$
is $(\sigma,L)=(\sigma_a,L_a)$ stable with respect to the norm
$\| \cdot \|_a$ if
$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} [ \|\ensuremath{\mathbf n}\|_a^* ] = \sigma_a ,$$
and
$$\forall \ensuremath{\mathbf u}, \ \int_{\ensuremath{\mathbf n}} \left| {\mathcal D}(\ensuremath{\mathbf n}) - {\mathcal D}(\ensuremath{\mathbf n}- \ensuremath{\mathbf u}) \right| d \ensuremath{\mathbf n}\leq L_a \| \ensuremath{\mathbf u}\|_a^* .$$
Here $\ensuremath{\mathbf n}\sim {\mathcal D}$ denotes a vector
$\ensuremath{\mathbf n}\in {\mathbb R}^n$ sampled according to
distribution ${\mathcal D}$, and ${\mathcal D}(\ensuremath{\mathbf n})$
is the value of the probability density function ${\mathcal D}$ over
$\ensuremath{\mathbf n}$. The subscript $a$ is omitted if clear from the
context.
The first parameter, $\sigma$, is related to the variance of
${\mathcal D}$, while the second, $L$, is a measure of the sensitivity
of the distribution. For example, if ${\mathcal D}$ is the uniform
distribution over the hypercube $[0,1]^n$, then it holds that (see
exercises) for the Euclidean norm
$$\sigma_2 \leq \sqrt{n} \ ,\ L_2 \leq 1.$$ Reusing notation from
previous chapters, denote by $D= D_a$ the diameter of the set
$\ensuremath{\mathcal K}$ according to the norm $\| \cdot \|_a$, and by
$D^* = D_a^*$ the diameter according to its dual norm. Similarly, denote
by $G = G_a$ and $G^* = G_a^*$ an upper bound on the norm (and dual
norm) of the gradients.
::: algorithm
::: algorithmic
Input: $\eta > 0$, distribution ${\mathcal D}$ over ${\mathbb R}^n$,
decision set $\ensuremath{\mathcal K}\subseteq {\mathbb R}^n$. Let
$\ensuremath{\mathbf x}_1 = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} \left[ \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}}
\left\{ \ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\} \right]$.
Predict $\ensuremath{\mathbf x_{t}}$. Observe the loss function $f_t$,
suffer loss $f_t(\ensuremath{\mathbf x_{t}})$ and let
$\nabla_t = \nabla f_t(\ensuremath{\mathbf x}_t)$. Update
$$\begin{aligned}
\label{eqn:fpl-oco}
\ensuremath{\mathbf x_{t+1}} = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} \left[ \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}}
\left\{ \eta\sum_{s=1}^{t} \nabla_s^\top \ensuremath{\mathbf x}+
\ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\} \right]
\end{aligned}$$
:::
:::
::: {#thm:fpl .theorem}
**Theorem 5.8**. *Let the distribution ${\mathcal D}$ be
$(\sigma,L)$-stable with respect to norm $\|\cdot \|_a$. The FPL
algorithm attains the following bound on the regret:
$$\ensuremath{\mathrm{{Regret}}}_T \le\eta D G^{* 2} L T+ \frac{1}{\eta} \sigma D .$$*
:::
We can further optimize over the choice of $\eta$ to obtain
$$\ensuremath{\mathrm{{Regret}}}_T \leq 2 D G^* \sqrt{ L \sigma T }.$$
::: proof
*Proof.* Define the random variable
$\ensuremath{\mathbf x}_t^\ensuremath{\mathbf n}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}}
\left\{ \eta\sum_{s=1}^{t} \nabla_s^\top \ensuremath{\mathbf x}+
\ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\}$, and the
random function $g_0^\ensuremath{\mathbf n}$ as
$$g_0^\ensuremath{\mathbf n}(\mathbf{x}) \stackrel{\text{\tiny def}}{=}\frac{1}{\eta} \ensuremath{\mathbf n}^\top \mathbf{x}.$$
It follows from Lemma [5.4](#prop:ftl-btl){reference-type="ref"
reference="prop:ftl-btl"} applied to the functions
$\{g_t(\ensuremath{\mathbf x}) = \nabla_t^\top \ensuremath{\mathbf x}\}$
that $$\begin{aligned}
\mathop{\mbox{\bf E}}\left[ \sum_{t=0}^T g_t (\ensuremath{\mathbf u}) \right] & \geq \mathop{\mbox{\bf E}}\left[ g_0^\ensuremath{\mathbf n}(\ensuremath{\mathbf x}_1^\ensuremath{\mathbf n}) + \sum_{t=1}^T g_t(\ensuremath{\mathbf x}_{t+1}^\ensuremath{\mathbf n}) \right] \\
& \geq \mathop{\mbox{\bf E}}\left[ g_0^\ensuremath{\mathbf n}(\ensuremath{\mathbf x}_1^\ensuremath{\mathbf n}) \right] + \sum_{t=1}^T g_t(\mathop{\mbox{\bf E}}[ \ensuremath{\mathbf x}_{t+1}^\ensuremath{\mathbf n}] ) & \mbox{convexity} \\
& = \mathop{\mbox{\bf E}}\left[ g_0^\ensuremath{\mathbf n}(\ensuremath{\mathbf x}_1^\ensuremath{\mathbf n}) \right] + \sum_{t=1}^T g_t(\ensuremath{\mathbf x}_{t+1} )
\end{aligned}$$ and thus, $$\begin{aligned}
\label{eqn:ftl-shalom1}
& \sum_{t=1}^T \nabla_t(\ensuremath{\mathbf x_{t}}- \mathbf{x}^\star ) \\
& = \sum_{t=1}^T g_t (\ensuremath{\mathbf x}_{t}) - \sum_{t=1}^T g_t(\ensuremath{\mathbf x}^\star) \\
& \leq \sum_{t=1}^T g_t (\ensuremath{\mathbf x}_{t}) - \sum_{t=1}^T g_t(\ensuremath{\mathbf x}_{t+1}) + \mathop{\mbox{\bf E}}[ g_0^\ensuremath{\mathbf n}(\ensuremath{\mathbf x}^\star) - g_0^\ensuremath{\mathbf n}(\ensuremath{\mathbf x}_1^\ensuremath{\mathbf n}) ] \\
& \leq \sum_{t=1}^T \nabla_t(\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} ) + \frac{1}{\eta} \mathop{\mbox{\bf E}}[ \| \ensuremath{\mathbf n}\|^* \| \ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_1^\ensuremath{\mathbf n}\| ] & \mbox { Cauchy-Schwarz } \\
& \leq \sum_{t=1}^T \nabla_t(\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} ) + \frac{1}{\eta}\sigma D .
\end{aligned}$$ Hence, $$\begin{aligned}
\label{eqn:ftl-shalom-main}
& \sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t) - \sum_{t=1}^T f_t(\ensuremath{\mathbf x}^\star) \notag \\
& \leq \sum_{t=1}^T \nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{}}^*) \notag \\
& \leq \sum_{t=1}^T \nabla_t^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}) + \frac{1}{\eta} \sigma D & \mbox{above} \notag \\
& \leq G^* \sum_{t=1}^T \|\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} \| + \frac{1}{\eta} \sigma D . & \mbox{ Cauchy-Schwarz}
\end{aligned}$$ We now argue that
$\|\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}\| = O(\eta)$.
Let
$$h_t(\ensuremath{\mathbf n}) = \arg \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \eta \sum_{s=1}^{t-1} \nabla_s^\top \ensuremath{\mathbf x}+ \ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\} ,$$
and hence
$\ensuremath{\mathbf x}_t = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} [h_t(\ensuremath{\mathbf n})]$.
Recalling that ${\mathcal D}(\ensuremath{\mathbf n})$ denotes the value
of the probability density function ${\mathcal D}$ over
$\ensuremath{\mathbf n}\in {\mathbb R}^n$, we can write:
$$\ensuremath{\mathbf x_{t}}= \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } h_t(\ensuremath{\mathbf n}) {\mathcal D}(\ensuremath{\mathbf n}) d \ensuremath{\mathbf n},$$
and:
$$\ensuremath{\mathbf x_{t+1}} = \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } h_t(\ensuremath{\mathbf n}+ \eta \nabla_t) {\mathcal D}(\ensuremath{\mathbf n}) d \ensuremath{\mathbf n}= \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } h_t(\ensuremath{\mathbf n}) {\mathcal D}(\ensuremath{\mathbf n}- \eta \nabla_t ) d \ensuremath{\mathbf n}.$$
Notice that $\ensuremath{\mathbf x_{t}},\ensuremath{\mathbf x_{t+1}}$
may depend on each other. However, by linearity of expectation, we have
that $$\begin{aligned}
& \| \ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}\| \\
& = \left\| \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } ( h_t(\ensuremath{\mathbf n}) - h_t (\ensuremath{\mathbf n}+ \eta \nabla_t ) ) {\mathcal D}(\ensuremath{\mathbf n}) d \ensuremath{\mathbf n}\right\| \\
& = \left\| \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } h_t(\ensuremath{\mathbf n}) ({\mathcal D}( \ensuremath{\mathbf n}) - {\mathcal D}( \ensuremath{\mathbf n}- \eta \nabla_t)) d \ensuremath{\mathbf n}\right\| \\
& = \left\| \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } (h_t(\ensuremath{\mathbf n}) - h_t(\mathbf{0}) ) ({\mathcal D}( \ensuremath{\mathbf n}) - {\mathcal D}( \ensuremath{\mathbf n}- \eta \nabla_t)) d \ensuremath{\mathbf n}\right\| \\
& \leq \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } \|h_t(\ensuremath{\mathbf n}) - h_t(\mathbf{0}) \| |{\mathcal D}( \ensuremath{\mathbf n}) - {\mathcal D}( \ensuremath{\mathbf n}- \eta \nabla_t) | d \ensuremath{\mathbf n}\\
& \leq D \int\limits_{\ensuremath{\mathbf n}\in {\mathbb R}^n } \left| {\mathcal D}(\ensuremath{\mathbf n}) - {\mathcal D}( \ensuremath{\mathbf n}- \eta \nabla_t) \right| d \ensuremath{\mathbf n}\mbox{\ \ \ since } \|h_t(\ensuremath{\mathbf n}) - h_t(\mathbf{0})\| \leq D \\
& \leq D L \cdot \eta \|\nabla_t\|^* \leq \eta D L G^* . \mbox{\ \ since ${\mathcal D}$ is $(\sigma,L)$-stable}.
\end{aligned}$$ Substituting this bound back into
[\[eqn:ftl-shalom-main\]](#eqn:ftl-shalom-main){reference-type="eqref"
reference="eqn:ftl-shalom-main"} we have $$\begin{aligned}
& \sum\limits_{t=1}^T f_t(\ensuremath{\mathbf x}_t) - \sum\limits_{t=1}^T f_t(\ensuremath{\mathbf x}^\star) \leq \eta L D G^{* 2} T + \frac{1}{\eta} \sigma D.
\end{aligned}$$ ◻
:::
For the choice of ${\mathcal D}$ as the uniform distribution over the
unit hypercube $[0,1]^n$, which has parameters $\sigma_2 \leq \sqrt{n}$
and $L_2 \leq 1$ for the Euclidean norm, the optimal choice of $\eta$
gives a regret bound of $DG n^{1/4} \sqrt{ T}$. This is a factor
${n}^{1/4}$ worse than the online gradient descent regret bound of
Theorem [3.1](#thm:gradient){reference-type="ref"
reference="thm:gradient"}. For certain decision sets
$\ensuremath{\mathcal K}$ a better choice of distribution ${\mathcal D}$
results in near-optimal regret bounds.
### Perturbation for linear cost functions
The case of linear cost functions
$f_t(\ensuremath{\mathbf x}) = \ensuremath{\mathbf g_{t}}^\top \ensuremath{\mathbf x}$
is of particular interest in the context of randomized regularization.
Denote
$$w_t(\ensuremath{\mathbf n}) = \arg\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{ \eta\sum_{s=1}^{t} \ensuremath{\mathbf g_{s}}^\top \ensuremath{\mathbf x}+ \ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\} .$$
By linearity of expectation, we have that
$$f_t(\ensuremath{\mathbf x}_t) = f_t( \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} [w_t(\ensuremath{\mathbf n}) ] ) = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} [ f_t(w_t(\ensuremath{\mathbf n})) ].$$
Thus, instead of computing $\ensuremath{\mathbf x}_t$ precisely, we can
sample a single vector $\ensuremath{\mathbf n}_0 \sim {\mathcal D}$, and
use it to compute $\hat{\mathbf{x}}_t = w_t(\ensuremath{\mathbf n}_0)$,
as illustrated in Algorithm
[\[alg:FPL-linear\]](#alg:FPL-linear){reference-type="ref"
reference="alg:FPL-linear"}.
::: algorithm
::: algorithmic
Input: $\eta > 0$, distribution ${\mathcal D}$ over ${\mathbb R}^n$,
decision set $\ensuremath{\mathcal K}\subseteq {\mathbb R}^n$. Sample
$\ensuremath{\mathbf n}_0 \sim {\mathcal D}$. Let
$\hat{\mathbf{x}}_1 \in \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \{ -\ensuremath{\mathbf n}_0^\top \ensuremath{\mathbf x}\}$.
Predict $\hat{\mathbf{x}}_t$. Observe the linear loss function, suffer
loss $\ensuremath{\mathbf g_{t}}^\top\hat{\mathbf{x}}_t$. Update
$$\begin{aligned}
\hat{\mathbf{x}}_t = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}}
\left\{ \eta\sum_{s=1}^{t-1} \ensuremath{\mathbf g_{s}}^\top \ensuremath{\mathbf x}+
\ensuremath{\mathbf n}_0 ^\top \ensuremath{\mathbf x}\right\}
\end{aligned}$$
:::
:::
By the above arguments, we have that the expected regret for the random
variables $\hat{\mathbf{x}}_t$ is the same as that for
$\ensuremath{\mathbf x}_t$. We obtain the following Corollary:
::: {#cor:fpl-linear .corollary}
**Corollary 5.9**.
*$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}_0 \sim {\mathcal D}} \left[ \sum_{t=1}^T f_t(\hat{\mathbf{x}}_t) - \sum_{t=1}^T f_t(\ensuremath{\mathbf x}^\star) \right] \leq \eta L D G^{* 2} T + \frac{1}{\eta} \sigma D .$$*
:::
The main advantage of this algorithm is computational: with a single
linear optimization step over the decision set $\ensuremath{\mathcal K}$
(which does not even have to be convex!), we attain near optimal
expected regret bounds.
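The sketch below illustrates this point under our own assumptions: the
decision set is the (non-convex) vertex set of the hypercube
$\{0,1\}^n$, accessed only through a linear optimization oracle, and the
perturbation is uniform over $[0,1]^n$ as discussed above; the names and
the toy cost sequence are hypothetical.

```python
import numpy as np

def fpl_linear(cost_vectors, linear_oracle, eta, rng):
    """Follow-the-perturbed-leader for linear losses with a single noise sample.
    linear_oracle(c) returns some argmin_{x in K} c.x; K need not be convex.
    The perturbation here is uniform over [0,1]^n."""
    T, n = cost_vectors.shape
    noise = rng.uniform(size=n)
    cum = np.zeros(n)
    total_loss = 0.0
    for t in range(T):
        x_hat = linear_oracle(eta * cum + noise)   # perturbed leader on past costs
        total_loss += cost_vectors[t] @ x_hat
        cum += cost_vectors[t]
    best_fixed = cum @ linear_oracle(cum)          # best single decision in hindsight
    return total_loss - best_fixed                 # (random) regret

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # K = the vertices of the hypercube {0,1}^n; the oracle decides coordinate-wise.
    oracle = lambda c: (c < 0).astype(float)
    costs = rng.normal(size=(500, 10))
    print(fpl_linear(costs, oracle, eta=0.05, rng=rng))
```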
### Follow-the-perturbed-leader for expert advice
An interesting special case (and in fact the first use of perturbation
in decision making) is that of non-negative linear cost functions over
the unit $n$-dimensional simplex with costs bounded by one, or the
problem of prediction of expert advice we have considered in chapter
[1](#chap:intro){reference-type="ref" reference="chap:intro"}.
Algorithm [\[alg:FPL-linear\]](#alg:FPL-linear){reference-type="ref"
reference="alg:FPL-linear"} applied to the probability simplex and with
exponentially distributed noise is known as the
follow-the-perturbed-leader for prediction from expert advice method. We
spell it out in Algorithm
[\[alg:FPL\*\]](#alg:FPL*){reference-type="ref" reference="alg:FPL*"}.
::: algorithm
::: algorithmic
Input: $\eta > 0$ Draw $n$ exponentially distributed variables
$\ensuremath{\mathbf n}(i) \sim e^{- \eta x}$. Let
$\ensuremath{\mathbf x_{1}} = \mathop{\mathrm{\arg\min}}_{\mathbf{e}_i \in \Delta_n} \{ -\mathbf{e}_i^\top \ensuremath{\mathbf n}\}$.
Predict using expert $i_t$ such that
$\hat{\mathbf{x}}_t = \mathbf{e}_{i_t}$ Observe the loss vector and
suffer loss
$\ensuremath{\mathbf g_{t}}^\top \hat{\mathbf{x}}_t = \ensuremath{\mathbf g_{t}}(i_t)$
Update (w.l.o.g. choose $\hat{\mathbf{x}}_{t+1}$ to be a vertex)
$$\begin{aligned}
\hat{\mathbf{x}}_{t+1} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \Delta_n}
\left\{ \sum_{s=1}^{t} \ensuremath{\mathbf g_{s}}^\top \ensuremath{\mathbf x}-
\ensuremath{\mathbf n}^\top \ensuremath{\mathbf x}\right\}
\end{aligned}$$
:::
:::
Notice that we take the perturbation to be distributed according to the
one-sided negative exponential distribution, i.e.,
$\ensuremath{\mathbf n}(i) \sim e^{-\eta x}$, or more precisely
$$\Pr[ \ensuremath{\mathbf n}(i) \leq x ] = 1 - e^{-\eta x} \quad \forall x \geq 0 .$$
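A minimal sketch of this method (our own code and parameter choices, not
part of the text) draws the exponential perturbation once and then
repeatedly selects the perturbed leader:

```python
import numpy as np

def fpl_experts(loss_matrix, eta, rng):
    """Follow-the-perturbed-leader for prediction from expert advice.
    loss_matrix[t, i] is the loss of expert i at round t, assumed to lie in [0, 1].
    The perturbation is one-sided exponential with rate eta, drawn once up front."""
    T, n = loss_matrix.shape
    noise = rng.exponential(scale=1.0 / eta, size=n)   # Pr[n(i) <= x] = 1 - exp(-eta x)
    cum = np.zeros(n)
    total = 0.0
    for t in range(T):
        i_t = int(np.argmin(cum - noise))              # perturbed leader
        total += loss_matrix[t, i_t]
        cum += loss_matrix[t]
    return total - cum.min()                           # (random) regret vs. best expert

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    T, n = 2000, 8
    losses = rng.uniform(size=(T, n))
    losses[:, 0] *= 0.8                                # expert 0 is slightly better
    print(fpl_experts(losses, eta=np.sqrt(np.log(n) / T), rng=rng))
```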
Corollary [5.9](#cor:fpl-linear){reference-type="ref"
reference="cor:fpl-linear"} gives regret bounds that are suboptimal for
this special case, thus we give here an alternative analysis that gives
tight bounds up to constants amounting to the following theorem.
::: {#thm:fpl-experts .theorem}
**Theorem 5.10**. *Algorithm
[\[alg:FPL\*\]](#alg:FPL*){reference-type="ref" reference="alg:FPL*"}
outputs a sequence of predictions
$\hat{\mathbf{x}}_1,...,\hat{\mathbf{x}}_T \in \Delta_n$ such that:
$$(1 - \eta) \mathop{\mbox{\bf E}}\left[ \sum_t \ensuremath{\mathbf g_{t}}^\top \hat{\mathbf{x}}_t \right] \leq \min_{\ensuremath{\mathbf x}^\star\in \Delta_n} \sum_t \ensuremath{\mathbf g_{t}}^\top \ensuremath{\mathbf x}^\star + \frac{4 \log n}{\eta } .$$*
:::
Notice that as a special case of the above theorem, choosing
$\eta = \sqrt{\frac{\log n}{T}}$ yields a regret bound of
$$\ensuremath{\mathrm{{Regret}}}_T = O ( \sqrt{ T \log n }),$$ which is
equivalent up to constant factors to the guarantee given for the Hedge
algorithm in Theorem [1.5](#lem:hedge){reference-type="ref"
reference="lem:hedge"}.
::: proof
*Proof.* We start with the same analysis technique used throughout this
chapter: let $\ensuremath{\mathbf g_{0}} = -\ensuremath{\mathbf n}$. It
follows from Lemma [5.4](#prop:ftl-btl){reference-type="ref"
reference="prop:ftl-btl"} applied to the functions
$\{f_t(\ensuremath{\mathbf x}) = \ensuremath{\mathbf g_{t}}^\top \ensuremath{\mathbf x}\}$
that
$$\mathop{\mbox{\bf E}}\left[ \sum_{t=0}^T \ensuremath{\mathbf g_{t}}^\top \ensuremath{\mathbf u}\right] \geq \mathop{\mbox{\bf E}}\left[ \sum_{t=0}^T \ensuremath{\mathbf g_{t}}^\top \hat{\mathbf{x}}_{t+1} \right] ,$$
and thus, $$\begin{aligned}
\label{eqn:ftl-shalom3}
\mathop{\mbox{\bf E}}\left[ \sum_{t=1}^T \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \ensuremath{\mathbf x}^\star ) \right]
& \leq \mathop{\mbox{\bf E}}\left[ \sum_{t=1}^T \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_{t} - \hat{\mathbf{x}}_{t+1}) \right] + \mathop{\mbox{\bf E}}[ \ensuremath{\mathbf g_{0}}^\top (\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_1) ] \notag \\
& \leq \mathop{\mbox{\bf E}}\left[ \sum_{t=1}^T \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \hat{\mathbf{x}}_{t+1} ) \right] + \mathop{\mbox{\bf E}}[ \| \ensuremath{\mathbf n}\|_\infty \| \ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}_1 \|_1 ] \notag \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}\left[ \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \hat{\mathbf{x}}_{t+1} ) \ | \ \hat{\mathbf{x}}_t \right] + \frac{4}{\eta} \log n ,
\end{aligned}$$ where the second inequality follows by the generalized
Cauchy-Schwarz inequality, and the last inequality follows since (see
exercises)
$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf n}\sim {\mathcal D}} [ \|\ensuremath{\mathbf n}\|_\infty ] \leq \frac{ 2 \log n}{\eta} .$$
We proceed to bound
$\mathop{\mbox{\bf E}}[ \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \hat{\mathbf{x}}_{t+1} ) | \hat{\mathbf{x}}_t ]$,
which is naturally bounded by the probability that
$\hat{\mathbf{x}}_{t}$ is not equal to $\hat{\mathbf{x}}_{t+1}$
multiplied by the maximum value that $\ensuremath{\mathbf g_{t}}$ can
attain (i.e., its $\ell_\infty$ norm):
$$\mathop{\mbox{\bf E}}[ \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \hat{\mathbf{x}}_{t+1} ) \ | \ \hat{\mathbf{x}}_t ] \leq \|\ensuremath{\mathbf g_{t}}\|_\infty \cdot \Pr[ \hat{\mathbf{x}}_t \neq \hat{\mathbf{x}}_{t+1} \ |\ \hat{\mathbf{x}}_t ] \leq \Pr[ \hat{\mathbf{x}}_t \neq \hat{\mathbf{x}}_{t+1} \ |\ \hat{\mathbf{x}}_t ] .$$
Above we used that $\|\ensuremath{\mathbf g_{t}}\|_\infty \leq 1$, by the
assumption that the losses are bounded by one.
To bound the latter, notice that the probability
that $\hat{\mathbf{x}}_t = \mathbf{e}_{i_t}$ is the leader at time $t$ equals the
probability that $- \ensuremath{\mathbf n}({i_t}) > v$ for some value
$v$ that depends on the entire loss sequence till now. On the other
hand, given $\hat{\mathbf{x}}_t$, we have that
$\hat{\mathbf{x}}_{t+1} = \hat{\mathbf{x}}_t$ remains the leader if
$- \ensuremath{\mathbf n}(i_t) > v + \ensuremath{\mathbf g_{t}}(i_t)$,
since it was a leader by a margin of more than the cost it will suffer.
Thus, $$\begin{aligned}
\Pr[ \hat{\mathbf{x}}_t \neq \hat{\mathbf{x}}_{t+1} \ |\ \hat{\mathbf{x}}_t ] & = 1 - \Pr[- \ensuremath{\mathbf n}({i_t}) > v+ \ensuremath{\mathbf g_{t}}(i_t) \ |\ -\ensuremath{\mathbf n}({i_t}) > v ] \\
& = 1 - \frac{ \int_{v + \ensuremath{\mathbf g_{t}}(i_t) }^\infty \eta e^{-\eta x } } {\int _{v}^\infty \eta e^{-\eta x}} \\
& = 1 - e^{ - \eta \ensuremath{\mathbf g_{t}}(i_t) } \\
& \leq \eta \ensuremath{\mathbf g_{t}}(i_t) = \eta \ensuremath{\mathbf g_{t}}^\top \hat{\mathbf{x}}_t .
\end{aligned}$$ Substituting this bound back into
[\[eqn:ftl-shalom3\]](#eqn:ftl-shalom3){reference-type="eqref"
reference="eqn:ftl-shalom3"} we have $$\begin{aligned}
& \mathop{\mbox{\bf E}}[ \sum_{t=1}^T \ensuremath{\mathbf g_{t}}^\top (\hat{\mathbf{x}}_t - \ensuremath{\mathbf x}^\star ) ] \leq \eta \sum_t \mathop{\mbox{\bf E}}_t[ \ensuremath{\mathbf g_{t}}^\top \hat{\mathbf{x}}_t] + \frac{4 \log n}{\eta} ,
\end{aligned}$$ which rearranges to give the theorem statement. ◻
:::
## \* Adaptive Gradient Descent {#sec:adagrad}
Thus far we have introduced regularization as a general methodology for
deriving online convex optimization algorithms. The main theorem of this
chapter, Theorem [5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"}, bounds the regret of the RFTL algorithm for
any strongly convex regularizer as $$\label{eqn:general-regret-form}
\ensuremath{\mathrm{{Regret}}}_T \leq \max_{\ensuremath{\mathbf u}\in \ensuremath{\mathcal K}} \sqrt{ 2 \sum_t \|\nabla_t \|_t^{* 2} B_{R}( \ensuremath{\mathbf u}||\ensuremath{\mathbf x}_1) }.$$
In addition, we have seen how to derive the online gradient descent and
the multiplicative weights algorithms as special cases of the RFTL
methodology. But are there other special cases of interest, besides
these two basic algorithms, that warrant such general and abstract
treatment?
There are surprisingly few cases of interest besides the Euclidean and
Entropic regularizations and their matrix analogues. However, in this
section we will give some justification of the abstract treatment of
regularization.
Our treatment is motivated by the following question: thus far we have
thought of $R$ as a strongly convex function. But which strongly convex
function should we choose to minimize regret? This is a deep and
difficult question which has been considered in the optimization
literature since its early developments. Naturally, the optimal
regularization should depend on both the convex underlying decision set,
as well as the actual cost functions (see exercises for a natural
candidate of a regularization function that depends on the convex
decision set).
We shall treat this question no differently than we treat other
optimization problems throughout this manuscript itself: we'll learn the
optimal regularization online! That is, a regularizer that adapts to the
sequence of cost functions and is in a sense the "optimal"
regularization to use in hindsight. This gives rise to the AdaGrad
(Adaptive subGradient method) algorithm
[\[alg:adagrad\]](#alg:adagrad){reference-type="ref"
reference="alg:adagrad"}, which explicitely optimizes over the
regularization choice in line
[\[eqn:adagrad1\]](#eqn:adagrad1){reference-type="eqref"
reference="eqn:adagrad1"} to minimize the gradient norms, which is the
dominant expression in
[\[eqn:general-regret-form\]](#eqn:general-regret-form){reference-type="eqref"
reference="eqn:general-regret-form"}.
::: algorithm
::: algorithmic
Input: parameters
$\eta, \ensuremath{\mathbf x}_1 \in \ensuremath{\mathcal K}$.
Initialize: $G_0 = \mathbf{0}$, Predict $\ensuremath{\mathbf x}_t$,
suffer loss $f_t(\ensuremath{\mathbf x}_t)$. []{#eqn:adagrad1
label="eqn:adagrad1"} Update $G_t = G_{t-1} + \nabla_t \nabla_t^\top$
and define $$\begin{aligned}
&\text{[Diagonal version]}
& H_t = \mathop{\mathrm{\arg\min}}_{H \succeq 0,H={\bf diag}(H)} \left\{ G_t \bullet H^{-1} + {\bf Tr}(H) \right\} = {\bf diag}({G_t}^{1/2}) \\
&\text{[Full matrix version]} & H_t = \mathop{\mathrm{\arg\min}}_{H \succeq \mathbf{0}} \left\{ G_t \bullet H^{-1} + {\bf Tr}(H) \right\} = {G_t}^{1/2}
\end{aligned}$$ Update
$$\ensuremath{\mathbf y_{t+1}} = \ensuremath{\mathbf x_{t}}- \eta H_t^{-1} \nabla_t$$
$$\ensuremath{\mathbf x_{t+1}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \| \ensuremath{\mathbf y_{t+1}} - \ensuremath{\mathbf x}\|^{2}_{H_t}$$
:::
:::
AdaGrad comes in two versions: diagonal and full matrix, the first being
particularly efficient to implement with negligible computational
overhead over online gradient descent. In the algorithm definition and
throughout this chapter, the notation $A^{-1}$ refers to the
Moore-Penrose pseudoinverse of the matrix $A$.
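As a rough illustration, the diagonal version can be sketched in a few
lines of NumPy. The names `grad_oracle` and `project`, the small `eps`
standing in for the pseudoinverse, and the unconstrained least-squares
usage are our own assumptions, not part of the algorithm's
specification.

```python
import numpy as np

def adagrad_diagonal(grad_oracle, project, x0, T, eta, eps=1e-12):
    """Diagonal AdaGrad sketch.  grad_oracle(t, x) returns the gradient of f_t at x;
    project(y, h) returns argmin_{x in K} ||y - x||^2_H for H = diag(h).
    eps only guards the division: coordinates with no gradient mass so far take a
    zero step, matching the Moore-Penrose pseudoinverse of diag(h)."""
    x = np.array(x0, dtype=float)
    g_diag = np.zeros_like(x)             # running sum of squared gradients
    for t in range(T):
        g = grad_oracle(t, x)
        g_diag += g * g
        h = np.sqrt(g_diag)               # H_t = diag(G_t)^{1/2}
        y = x - eta * g / (h + eps)       # y_{t+1} = x_t - eta * H_t^{-1} grad_t
        x = project(y, h)                 # projection in the norm induced by H_t
    return x

if __name__ == "__main__":
    # Unconstrained toy usage (K = R^n), so the projection is the identity.
    rng = np.random.default_rng(4)
    A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
    grad = lambda t, x: 2.0 * A.T @ (A @ x - b) / len(b)   # least-squares gradient
    x = adagrad_diagonal(grad, lambda y, h: y, np.zeros(5), T=500, eta=1.0)
    print(np.linalg.norm(A @ x - b))
```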
The computation in line
[\[eqn:adagrad1\]](#eqn:adagrad1){reference-type="eqref"
reference="eqn:adagrad1"} finds the regularization matrix $H$ which
minimizes the norm of the gradients from within the positive
semi-definite cone, with or without a diagonal constraint. This is
closely related, as we shall see, to optimization w.r.t. two natural
sets of matrices:
1. ${\mathcal H}_1 = \{ H = {\bf diag}(H) , H \succeq 0 \ , \ {\bf Tr}(H) \leq 1 \}$
2. ${\mathcal H}_2 = \{ H \succeq 0 \ , \ {\bf Tr}(H) \leq 1 \}$.
This results in a regularization matrix that is provably optimal in the
following sense,
::: {#lem:regularzation-optimality-adagrad .lemma}
**Lemma 5.11**. *For
${\mathcal H}_i \in \{{\mathcal H}_1,{\mathcal H}_2\}$ with the
corresponding $H_T$, $$\begin{aligned}
\sqrt{ \min_{H \in {\mathcal H}_i} \sum_{t=1}^T \|\nabla_t \|_H^{* 2} } & = {\bf Tr}(H_T) .
\end{aligned}$$*
:::
Using this lemma, we show the regret of AdaGrad is at most a constant
factor larger than the minimum regret of all RFTL algorithms with
regularization functions whose Hessian is fixed and belongs to the class
${\mathcal H}_i$. Furthermore, the regret of the diagonal version can be
a factor $\sqrt{d}$ smaller than that of online gradient descent for
certain gradient geometries. The regret bound on AdaGrad is formally
stated in the following theorem.
::: {#theorem:adagrad-main .theorem}
**Theorem 5.12**. *Let $\{\ensuremath{\mathbf x}_t\}$ be defined by
Algorithm [\[alg:adagrad\]](#alg:adagrad){reference-type="ref"
reference="alg:adagrad"} with parameters $\eta = {D}$ (full matrix) or
$\eta = D_\infty$ (diagonal). Then for any
$\ensuremath{\mathbf x}^\star \in \ensuremath{\mathcal K}$,
$$\begin{aligned}
\label{eqn:adagrad_regret}
& \ensuremath{\mathrm{{Regret}}}_{T}(\mbox{AdaGrad-diag}) \le \sqrt{2} D_\infty \sqrt{ \min_{H \in {\mathcal H}_1} \sum_t \|\nabla_t \|_H^{* 2} } , \\
& \ensuremath{\mathrm{{Regret}}}_{T}(\mbox{AdaGrad-full}) \le \sqrt{2} D \sqrt{ \min_{H \in {\mathcal H}_2} \sum_t \|\nabla_t \|_H^{* 2} } .
\end{aligned}$$*
:::
Before proceeding to the analysis, we consider when the regret bounds
for AdaGrad improve upon those of Online Gradient Descent. One such case
is when $\ensuremath{\mathcal K}$ is the unit cube in $d$-dimensional
Euclidean space. This convex set has $D_\infty =1$ and $D = \sqrt{d}$.
Lemma [5.11](#lem:regularzation-optimality-adagrad){reference-type="ref"
reference="lem:regularzation-optimality-adagrad"} and Theorems
[5.12](#theorem:adagrad-main){reference-type="ref"
reference="theorem:adagrad-main"},[5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"} imply that the regret of diagonal AdaGrad and
OGD are bounded by $$\begin{aligned}
& \ensuremath{\mathrm{{Regret}}}_{T}(\mbox{AdaGrad-diag}) \le \sqrt{2} {\bf Tr}({\bf diag}(G_T)^{1/2}) ,\\
& \ensuremath{\mathrm{{Regret}}}_{T}(\mbox{OGD}) \le \sqrt{2 d} \sqrt{ \sum_t \|\nabla_t\|^2} = \sqrt{2d {\bf Tr}({\bf diag}(G_T)) } .
\end{aligned}$$ The relationship between the two bounds depends on the
matrix ${\bf diag}(G_T)$. If this matrix is sparse, then the AdaGrad bound
can be smaller by a factor of up to $\sqrt{d}$. For other convex bodies,
such as the Euclidean ball, and when the matrix $G_T$ is dense, the
regret bound of OGD can be a factor $\sqrt{d}$ lower.
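The following small NumPy experiment, with a hypothetical sparse gradient sequence (not taken from the text), evaluates the two bounds above side by side on the unit cube, where $D_\infty = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 100, 10_000

# Hypothetical gradient sequence: all mass on the first 3 of d coordinates,
# so diag(G_T) is sparse.
G_diag = np.zeros(d)
for _ in range(T):
    g = np.zeros(d)
    g[rng.integers(3)] = rng.uniform(-1, 1)
    G_diag += g ** 2

adagrad_bound = np.sqrt(2) * np.sum(np.sqrt(G_diag))   # sqrt(2) * Tr(diag(G_T)^{1/2})
ogd_bound = np.sqrt(2 * d * np.sum(G_diag))            # sqrt(2 d * Tr(diag(G_T)))
print(adagrad_bound, ogd_bound)  # AdaGrad's bound is smaller by roughly sqrt(d/3) here
```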
### Analysis of adaptive regularization
We proceed with the proof of Theorem
[5.12](#theorem:adagrad-main){reference-type="ref"
reference="theorem:adagrad-main"}. The first component is the following
Lemma, which generalizes the RFTL analysis to changing regularization.
::: {#lem:adagradlem .lemma}
**Lemma 5.13**. *Let
$H_{0} = \mathop{\mathrm{\arg\min}}_{H \succeq 0} \left\{ {\bf Tr}(H) \right\} = \mathbf{0}$. Then
$$\ensuremath{\mathrm{{Regret}}}_T(\text{GenAdaReg}) \leq \frac{\eta}{2} ( G_T \bullet H_T^{-1} + {\bf Tr}(H_T))
+ \frac{1}{2 \eta} \sum_{t=0}^T \| \mathbf{x}_t - \mathbf{x}^\star\|^2_{
H_t - H_{t-1}} .$$*
:::
::: proof
*Proof.* By the definition of $\mathbf{y}_{t+1}$: $$\begin{aligned}
& \mathbf{y}_{t+1} - \mathbf{x}^\star = \mathbf{x}_{t} - \mathbf{x}^\star - \eta {H_t}^{-1}
\nabla_t \\
& H_t (\mathbf{y}_{t+1} - \mathbf{x}^\star) = H_t (\mathbf{x}_t - \mathbf{x}^\star) - \eta
\nabla_t.
\end{aligned}$$ Multiplying the transpose of the first equation by the
second we get $$\begin{gathered}
(\mathbf{y}_{t+1} - \mathbf{x}^\star)^\top H_t(\mathbf{y}_{t+1} - \mathbf{x}^\star) = \notag \\
(\mathbf{x}_t\! -\! \mathbf{x}^\star)^\top H_t(\mathbf{x}_t\! -\! \mathbf{x}^\star) -
2 \eta \nabla_t^\top (\mathbf{x}_t\! -\! \mathbf{x}^\star) +
\eta^2 \nabla_t^\top H_t^{-1} \nabla_t.
\label{eq:multiplied-adagrad}
\end{gathered}$$ Since $\mathbf{x}_{t+1}$ is the projection of
$\mathbf{y}_{t+1}$ in the norm induced by $H_t$, we have (see
§[2.1.1](#sec:projections){reference-type="ref"
reference="sec:projections"}) $$\begin{aligned}
(\mathbf{y}_{t+1} - \mathbf{x}^\star)^\top H_t(\mathbf{y}_{t+1} - \mathbf{x}^\star) & = \| \mathbf{y}_{t+1} - \mathbf{x}^\star \|_{H_t}^2 \ge \| \mathbf{x}_{t+1} - \mathbf{x}^\star \|_{H_t}^2 .
%& = (\bx_{t+1} - \bx^\star)^\top G_t(\bx_{t+1} - \bx^\star ).
\end{aligned}$$ This inequality is the reason for using generalized
projections as opposed to standard projections, which were used in the
analysis of online gradient descent (see
§[3.1](#section:ogd){reference-type="ref" reference="section:ogd"}
Equation [\[eqn:ogdtriangle\]](#eqn:ogdtriangle){reference-type="eqref"
reference="eqn:ogdtriangle"}). This fact together with
[\[eq:multiplied-adagrad\]](#eq:multiplied-adagrad){reference-type="eqref"
reference="eq:multiplied-adagrad"} gives $$\begin{aligned}
\nabla_t^\top (\mathbf{x}_t \! -\! \mathbf{x}^\star) &\leq \ \frac{\eta}{2}
\nabla_t^\top H_t^{-1} \nabla_t + \frac{1}{2 \eta} \left( \| \mathbf{x}_{t} - \mathbf{x}^\star \|_{H_t}^2 - \| \mathbf{x}_{t+1} - \mathbf{x}^\star \|_{H_{t}}^2 \right) .
\end{aligned}$$ Now, summing up over $t=1$ to $T$ we get that
$$\begin{aligned}
\label{eqn:adagrad-shalom}
&\sum_{t=1}^T \nabla_t^\top (\mathbf{x}_t - \mathbf{x}^\star)
\leq \frac{\eta}{2} \sum_{t=1}^T \nabla_t^\top H_t^{-1} \nabla_t +
\frac{1}{2\eta} \| \mathbf{x}_{1} - \mathbf{x}^\star \|_{H_{0}}^2 \\
& + \frac{1}{2 \eta} \sum_{t=1}^T \left( \| \mathbf{x}_{t} - \mathbf{x}^\star \|_{H_t}^2 - \| \mathbf{x}_{t} - \mathbf{x}^\star \|_{H_{t-1}}^2 \right) - \frac{1}{2 \eta} \| \mathbf{x}_{T+1} - \mathbf{x}^\star \|_{H_{T}}^2 \notag \\
&\leq \frac{\eta}{2} \sum_{t=1}^T \nabla_t^\top H_t^{-1}
\nabla_t + \frac{1}{2\eta} \sum_{t=0}^{T} \| \mathbf{x}_t\! -\! \mathbf{x}^\star\|^2_{
H_t - H_{t-1}} . \notag
\end{aligned}$$ In the last inequality we use the definition
$H_{0} = 0$. We proceed to bound the first term. To this end, define the
functions
$$\Psi_t(H) = \nabla_t \nabla_t^\top \bullet H^{-1} \ , \ \Psi_0(H) = {\bf Tr}(H) .$$
By definition, $H_t$ is the minimizer of $\sum_{i=0}^{t} \Psi_i$ over
${\mathcal H}$. Therefore, using the BTL Lemma
[5.4](#prop:ftl-btl){reference-type="ref" reference="prop:ftl-btl"}, we
have that $$\begin{aligned}
\sum_{t=1}^T \nabla_t^\top H_t^{-1} \nabla_t & = \sum_{t=1}^T \Psi_t(H_t) \\
& \leq \sum_{t=1}^T \Psi_t(H_T) + \Psi_0(H_T) - \Psi_0(H_0) \\
& = G_T \bullet H_T^{-1} + {\bf Tr}(H_T) . % = 2 \trace(H_T) ,
\end{aligned}$$ ◻
:::
We can now continue with the proof of Theorem
[5.12](#theorem:adagrad-main){reference-type="ref"
reference="theorem:adagrad-main"}.
::: proof
*Proof of Theorem [5.12](#theorem:adagrad-main){reference-type="ref"
reference="theorem:adagrad-main"}.* We bound both parts of Lemma
[5.13](#lem:adagradlem){reference-type="ref"
reference="lem:adagradlem"}, with the following two lemmas,
::: {#lemma:opt-distance-bound-adagrad .lemma}
**Lemma 5.14**. *For both the diagonal and full matrix versions of
AdaGrad, the following holds
$$G_T \bullet H_T^{-1} \leq {\bf Tr}(H_T) .$$*
:::
::: {#lemma:opt-reg-bound2-adagrad .lemma}
**Lemma 5.15**. *Let $D_\infty$ denote the $\ell_\infty$ diameter of
$\ensuremath{\mathcal K}$, and $D$ the Euclidean diameter. Then the
following bounds hold, $$\begin{aligned}
& \mbox{Diagonal AdaGrad: } & \sum_{t=1}^{T} \| \mathbf{x}_t - \mathbf{x}^\star\|_{H_t - H_{t-1}}^2 \leq D^2_\infty {\bf Tr}(H_T). \\
& \mbox{Full matrix AdaGrad: } & \sum_{t=1}^{T} \| \mathbf{x}_t - \mathbf{x}^\star\|_{H_t - H_{t-1}}^2 \leq D^2 {\bf Tr}(H_T).
\end{aligned}$$*
:::
Now combining Lemma [5.13](#lem:adagradlem){reference-type="ref"
reference="lem:adagradlem"} with the above two lemmas, and using
$\eta = \frac{D}{\sqrt{2}}$ or $\eta = \frac{D_\infty}{\sqrt{2}}$
appropriately, we obtain the theorem. ◻
:::
We proceed to complete the proof of the two lemmas above.
::: proof
*Proof of Lemma
[5.14](#lemma:opt-distance-bound-adagrad){reference-type="ref"
reference="lemma:opt-distance-bound-adagrad"}.* The optimization problem
of choosing $H_t$ in line
[\[eqn:adagrad1\]](#eqn:adagrad1){reference-type="eqref"
reference="eqn:adagrad1"} of Algorithm
[\[alg:adagrad\]](#alg:adagrad){reference-type="ref"
reference="alg:adagrad"} has an explicit solution, given in the
following proposition (whose proof is left as an exercise).
::: {#proposition:solution-inv-trace .proposition}
**Proposition 5.16**. *Consider the following optimization problems, for
$A \succcurlyeq 0$: $$\begin{aligned}
\min_{X \succeq 0 , {\bf Tr}(X) \leq 1} \left\{ X^{-1} \bullet A \right\} \quad \quad \min_{X \succeq 0} \left\{ A \bullet X^{-1} + {\bf Tr}(X) \right\} .
\end{aligned}$$ Then the global optimizer to these problems is obtained
at $X = \frac{A^{1/2}} { {\bf Tr}(A^{1/2})}$ and $X = A^{1/2}$
respectively. Over the set of diagonal matrices, the global optimizer is
obtained at $X = \frac{{\bf diag}(A)^{1/2}} { {\bf Tr}({\bf diag}(A)^{1/2})}$ and
$X = {\bf diag}(A)^{1/2}$ respectively.*
:::
A direct corollary of this proposition gives Lemma
[5.11](#lem:regularzation-optimality-adagrad){reference-type="ref"
reference="lem:regularzation-optimality-adagrad"} as follows:
::: corollary
**Corollary 5.17**. *$$\begin{aligned}
\sqrt{ \min_{H \in {\mathcal H}} \sum_t \|\nabla_t \|_H^{* 2} } & = \sqrt{ \min_{H \in {\mathcal H}} {\bf Tr}( H^{-1} \sum_t \nabla_t \nabla_t^\top ) } \\
& = {\bf Tr}{ \sqrt{ \sum_t \nabla_t \nabla_t^\top } } = {\bf Tr}(H_T) % = \trace(\sqrt{S_T - \delta n}) \\
%& = \trace(G_T) - \delta n.
\end{aligned}$$*
:::
In both versions, therefore, $G_T \bullet H_T^{-1} = {\bf Tr}\big(G_T H_T^{-1}\big) = {\bf Tr}(H_T)$, which proves the lemma. ◻
:::
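The closed-form minimizer of Proposition 5.16 (full matrix case) is easy to check numerically. The following snippet, included only as a sanity check with a randomly generated positive definite $A$, verifies that the objective value at $X = A^{1/2}$ equals $2\,{\bf Tr}(A^{1/2})$ and is not beaten by random feasible alternatives:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)                       # a random positive definite matrix

def objective(X):                             # A . X^{-1} + Tr(X)
    return np.trace(A @ np.linalg.inv(X)) + np.trace(X)

w, V = np.linalg.eigh(A)
X_star = (V * np.sqrt(w)) @ V.T               # claimed minimizer X = A^{1/2}
assert np.isclose(objective(X_star), 2 * np.trace(X_star))

for _ in range(1000):                         # random PSD candidates never do better
    C = rng.standard_normal((n, n))
    X = C @ C.T + 0.01 * np.eye(n)
    assert objective(X) >= objective(X_star) - 1e-6
```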
The remaining term from Lemma
[5.13](#lem:adagradlem){reference-type="ref" reference="lem:adagradlem"}
is the expression
$\sum_{t=0}^T \| \mathbf{x}_t - \mathbf{x}^\star\|^2_{ H_t - H_{t-1}}$,
which we proceed to bound.
::: proof
*Proof of Lemma
[5.15](#lemma:opt-reg-bound2-adagrad){reference-type="ref"
reference="lemma:opt-reg-bound2-adagrad"}.* By definition
$G_t \succcurlyeq G_{t-1}$, and hence using proposition
[5.16](#proposition:solution-inv-trace){reference-type="ref"
reference="proposition:solution-inv-trace"} and the definition of $H_t$
in line [\[eqn:adagrad1\]](#eqn:adagrad1){reference-type="eqref"
reference="eqn:adagrad1"}, we have that
$H_t = {\bf diag}(G_t^{1/2} ) \succcurlyeq {\bf diag}(G_{t-1}^{1/2} ) = H_{t-1}$.
Since for a diagonal matrix $H$ it holds that
$\ensuremath{\mathbf x}^\top H \ensuremath{\mathbf x}\leq \|\ensuremath{\mathbf x}\|_\infty^2 {\bf Tr}(H)$,
we have $$\begin{aligned}
& \sum_{t=1}^{T} (\mathbf{x}_t\! -\! \mathbf{x}^\star)^\top (H_t - H_{t-1} ) (\mathbf{x}_t\! -\! \mathbf{x}^\star) \\
& \leq \sum_{t=1}^{T} D^2_\infty {\bf Tr}( H_t - H_{t-1} ) & \mbox{diagonal structure, $H_t - H_{t-1} \succeq 0$}\\
%& \leq D^2_\infty \sum_{t=1}^{T} \trace (H_t - H_{t-1}) & A \succcurlyeq 0 \ \Rightarrow \ \lambda_{\max}(A) \leq \trace(A) \\
& = D^2_\infty \sum_{t=1}^{T} ({\bf Tr}(H_t ) - {\bf Tr}( H_{t-1})) & \mbox{ linearity of the trace} \\
& \leq D^2_\infty {\bf Tr}(H_T).
\end{aligned}$$
Next, we consider the full matrix case. By definition
$G_t \succcurlyeq G_{t-1}$, and hence $H_t \succcurlyeq H_{t-1}$. Thus,
$$\begin{aligned}
& \sum_{t=1}^{T} (\mathbf{x}_t\! -\! \mathbf{x}^\star)^\top (H_t - H_{t-1} ) (\mathbf{x}_t\! -\! \mathbf{x}^\star) \\
& \leq \sum_{t=1}^{T} D^2 \lambda_{\max}( H_t - H_{t-1} ) \\
& \leq D^2 \sum_{t=1}^{T} {\bf Tr}(H_t - H_{t-1}) & A \succcurlyeq 0 \ \Rightarrow \ \lambda_{\max}(A) \leq {\bf Tr}(A) \\
& = D^2 \sum_{t=1}^{T} ({\bf Tr}(H_t ) - {\bf Tr}( H_{t-1})) & \mbox{ linearity of the trace} \\
& \leq D^2 {\bf Tr}(H_T).
\end{aligned}$$ ◻
:::
## Bibliographic Remarks {#bibliographic-remarks-2}
Regularization in the context of online learning was first studied in
[@GroveLS01] and [@KivinenW01]. The influential paper of @KV-FTL coined
the term "follow-the-leader" and introduced many of the techniques that
followed in OCO. The latter paper studies random perturbation as a
regularization and analyzes the follow-the-perturbed-leader algorithm,
following an early development by @Hannan57 that was overlooked in
learning for many years.
In the context of OCO, the term follow-the-regularized-leader was coined
in [@ShwartzS07; @ShalevThesis], and at roughly the same time an
essentially identical algorithm was called "RFTL" in [@AbernethyHR08].
The equivalence of RFTL and Online Mirror Descent was observed by
[@DBLP:conf/colt/HazanK08]. The AdaGrad algorithm was introduced in
[@DuchiHS10; @duchi2011adaptive], its diagonal version was also
discovered in parallel in [@McMahanS10]. The analysis of AdaGrad
presented in this chapter is due to [@gupta2017unified].
Adaptive regularization has received significant attention due to its
success in training deep neural networks, and notably the development of
adaptive algorithms that incorporate momentum and other heuristics, most
popular of which are AdaGrad, RMSprop [@tieleman2012lecture] and Adam
[@kingma2014adam]. For a survey of optimization for deep learning, see
the comprehensive text of @Goodfellow-et-al-2016.
There is a strong connection between randomized perturbation and
deterministic regularization. For some special cases, adding
randomization can be thought of as a special case of deterministic
strongly convex regularization, see
[@abernethy2014online; @abernethy16perturbation].
## Exercises
# Bandit Convex Optimization {#chap:bandits}
In many real-world scenarios the feedback available to the decision
maker is noisy, partial or incomplete. Such is the case in online
routing in data networks, in which an online decision maker iteratively
chooses a path through a known network, and her loss is measured by the
length (in time) of the path chosen. In data networks, the decision
maker can measure the RTD (round trip delay) of a packet through the
network, but rarely has access to the congestion pattern of the entire
network.
Another useful example is that of online ad placement in web search. The
decision maker iteratively chooses an ordered set of ads from an
existing pool. Her reward is measured by the viewer's response---if the
user clicks a certain ad, a reward is generated according to the weight
assigned to the particular ad. In this scenario, the search engine can
inspect which ads were clicked through, but cannot know whether
different ads, had they been chosen to be displayed, would have been
clicked through or not.
The examples above can readily be modeled in the OCO framework, with the
underlying sets being the convex hull of decisions. The pitfall of the
general OCO model is the feedback; it is unrealistic to expect that the
decision maker has access to a gradient oracle at any point in the space
for every iteration of the game.
## The Bandit Convex Optimization Setting
The Bandit Convex Optimization (short: BCO) model is identical to the
general OCO model we have explored in previous chapters with the only
difference being the feedback available to the decision maker.
To be more precise, the BCO framework can be seen as a structured
repeated game. The protocol of this learning framework is as follows: At
iteration $t$, the online player chooses
$\ensuremath{\mathbf x}_t \in \ensuremath{\mathcal K}.$ After committing
to this choice, a convex cost function
$f_t \in {\mathcal F}: \ensuremath{\mathcal K}\mapsto {\mathbb R}$ is
revealed. Here ${\mathcal F}$ is the bounded family of cost functions
available to the adversary. The cost incurred to the online player is
the value of the cost function at the point she committed to
$f_t(\ensuremath{\mathbf x}_t)$. As opposed to the OCO model, in which
the decision maker has access to a gradient oracle for $f_t$ over
$\ensuremath{\mathcal K}$, in BCO **the loss
$f_t(\ensuremath{\mathbf x}_t)$ is the only feedback available to the
online player at iteration $t$.** In particular, the decision maker does
not know the loss had she chosen a different point
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$ at iteration $t$.
As before, let $T$ denote the total number of game iterations (i.e.,
predictions and their incurred loss). Let ${\mathcal A}$ be an algorithm
for BCO, which maps a certain game history to a decision in the decision
set. We formally define the regret of ${\mathcal A}$ that predicted
$x_1,...,x_T$ to be
$$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) = \sup_{\{f_1,...,f_T\} \subseteq {\mathcal F}} \left\{ {\textstyle \sum}_{t=1}^T f_t(\ensuremath{\mathbf x}_t) -\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} {\textstyle \sum}_{t=1}^T f_t(\ensuremath{\mathbf x}) \right\}.$$
## The Multiarmed Bandit (MAB) Problem
A classical model for decision making under uncertainty is the
multiarmed bandit (MAB) model. The term MAB nowadays refers to a
multitude of different variants and sub-scenarios that are too large to
survey. This section addresses perhaps the simplest variant---the
non-stochastic MAB problem---which is defined as follows:
Iteratively, a decision maker chooses between $n$ different actions
$i_t \in \{1,2,...,n\}$, while, at the same time, an adversary assigns
each action a loss in the range $[0,1]$. The decision maker suffers the
loss of the chosen action $i_t$ and observes this loss, but nothing else. The goal of the
decision maker is to minimize her regret.
The reader undoubtedly observes this setting is identical to the setting
of prediction from expert advice, the only difference being the feedback
available to the decision maker: whereas in the expert setting the
decision maker can observe the rewards or losses for all experts in
retrospect, in the MAB setting, only the losses of the decisions
actually chosen are known.
It is instructive to explicitly model this problem as a special case of
BCO. Take the decision set to be the set of all distributions over $n$
actions, i.e., $\ensuremath{\mathcal K}= \Delta_n$ is the
$n$-dimensional simplex. The loss function is taken to be the
linearization of the costs of the individual actions, that is:
$$f_t(\ensuremath{\mathbf x}) = \ell_t^\top \ensuremath{\mathbf x}= \sum_{i=1}^n \ell_t(i) \ensuremath{\mathbf x}(i) \quad \forall \ensuremath{\mathbf x}\in \ensuremath{\mathcal K},$$
where $\ell_t(i)$ is the loss associated with the $i$'th action at the
$t$'th iteration. Thus, the cost functions are linear functions in the
BCO model.
The MAB problem exhibits an exploration-exploitation tradeoff: an
efficient (low regret) algorithm has to explore the value of the
different actions in order to make the best decision. On the other hand,
having gained sufficient information about the environment, a reasonable
algorithm needs to exploit this information by picking the best action.
The simplest way to attain a MAB algorithm would be to separate
exploration and exploitation. Such a method would proceed by
1. With some probability, explore the action space (i.e., by choosing
an action uniformly at random). Use the feedback to construct an
estimate of the actions' losses.
2. Otherwise, use the estimates to apply a full-information experts
algorithm as if the estimates are the true historical costs.
This simple scheme already gives a sublinear regret algorithm, presented
in algorithm [\[alg:simpleMAB\]](#alg:simpleMAB){reference-type="ref"
reference="alg:simpleMAB"}.
::: algorithm
::: algorithmic
Input: OCO algorithm ${\mathcal A}$, parameter $\delta$.
For $t = 1$ to $T$: let $b_t$ be a Bernoulli random variable that equals
1 with probability $\delta$.\
If $b_t = 1$ (explore), choose $i_t \in \{1,2,...,n\}$ uniformly at random and play $i_t$.\
Let $$\hat{\ell}_t(i)= {
\left\{
\begin{array}{ll}
{ \frac{n}{\delta} \cdot \ell_t (i_t)}, & { i = i_t} \\\\
{0 }, & {\text{otherwise}}
\end{array}
\right. } .$$ Let
${\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}) = \hat{\ell}_t^\top \ensuremath{\mathbf x}$
and update
$\ensuremath{\mathbf x}_{t+1} = {\mathcal A}({\ensuremath{\hat{f}}}_1,...,{\ensuremath{\hat{f}}}_t)$.\
Otherwise ($b_t = 0$, exploit), choose $i_t \sim \ensuremath{\mathbf x}_t$ and play $i_t$. Update
$\hat{f}_t = 0, \hat{\ell}_t = \mathbf{0}$,
$\ensuremath{\mathbf x}_{t+1} = \ensuremath{\mathbf x}_t$.
:::
:::
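A compact Python sketch of this explore/exploit scheme is given below. The per-round loss oracle is an assumption of the demo, and the full-information subroutine $\mathcal{A}$ is instantiated with exponential weights for brevity (the analysis below uses online gradient descent, but any full-information experts algorithm can be plugged in).

```python
import numpy as np

def simple_mab(loss_fn, n, T, delta, eta, seed=0):
    """Sketch of the explore/exploit MAB scheme.

    loss_fn(t, i): loss in [0, 1] of action i at round t (assumed oracle).
    """
    rng = np.random.default_rng(seed)
    cum_est = np.zeros(n)                 # cumulative estimated losses
    x = np.full(n, 1.0 / n)               # current distribution over actions
    total_loss = 0.0
    for t in range(T):
        if rng.random() < delta:          # exploration round (b_t = 1)
            i = rng.integers(n)
            loss = loss_fn(t, i)
            est = np.zeros(n)
            est[i] = (n / delta) * loss   # unbiased estimate of the loss vector
        else:                             # exploitation round (b_t = 0)
            i = rng.choice(n, p=x)
            loss = loss_fn(t, i)
            est = np.zeros(n)             # no update on exploitation rounds
        total_loss += loss
        cum_est += est
        w = np.exp(-eta * (cum_est - cum_est.min()))   # exponential weights stand-in for A
        x = w / w.sum()
    return total_loss
```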
::: lemma
**Lemma 6.1**. *Algorithm
[\[alg:simpleMAB\]](#alg:simpleMAB){reference-type="ref"
reference="alg:simpleMAB"}, with $\mathcal{A}$ being the the online
gradient descent algorithm, guarantees the following regret bound:
$$\mathop{\mbox{\bf E}}\left[\sum_{t=1}^T {\ell_t(i_t)}-\min_i{\sum_{t=1}^T {\ell_t(i)}}\right] \leq O( T^{\frac{2}{3}} n^{\frac{2}{3}} ) \nonumber$$*
:::
::: proof
*Proof.* For the random functions $\{\hat{\ell}_t\}$ defined in
algorithm [\[alg:simpleMAB\]](#alg:simpleMAB){reference-type="ref"
reference="alg:simpleMAB"}, notice that
1. $\mathop{\mbox{\bf E}}[ \hat{\ell}_t (i) ] = \Pr[ b_t = 1] \cdot \Pr[ i_t = i | b_t=1] \cdot \frac{n}{\delta} \ell_t(i) = \ell_t(i)$.
2. $\| \hat{\ell}_t \|_2 \leq \frac{n}{\delta} \cdot |\ell_t(i_t)| \leq \frac{n}{\delta}$.
Therefore the regret of the simple algorithm can be related to that of
$\mathcal{A}$ on the estimated functions.
On the other hand, the simple MAB algorithm does not always play
according to the distribution generated by $\mathcal{A}$: with
probability $\delta$ it plays uniformly at random, which may lead to a
regret of one on these exploration iterations. Let $S_t \subseteq [T]$
be those iterations in which $b_t=1$. This is captured by the following
lemma:
::: {#lem:shalom3 .lemma}
**Lemma 6.2**.
*$$\mathop{\mbox{\bf E}}[ \ell_t(i_t) ] \leq \mathop{\mbox{\bf E}}[ \hat{\ell}_t^\top x_t ] + \delta$$*
:::
::: proof
*Proof.* $$\begin{aligned}
&\mathop{\mbox{\bf E}}[\ell_t(i_t)]\\
&= \Pr[b_t=1] \cdot \mathop{\mbox{\bf E}}[ \ell_t(i_t)|b_t=1] \\ & + \Pr[b_t = 0] \cdot \mathop{\mbox{\bf E}}[\ell_t(i_t)|b_t=0] \\
& \le \delta + \Pr[b_t = 0] \cdot \mathop{\mbox{\bf E}}[\ell_t(i_t)|b_t=0] \\
& = \delta + (1-\delta) \mathop{\mbox{\bf E}}[ \ell_t^\top \ensuremath{\mathbf x}_t | b_t = 0 ] & \mbox{ $b_t = 0 \rightarrow i_t \sim \ensuremath{\mathbf x}_t$, independent of $\ell_t$} \\
& \leq \delta + \mathop{\mbox{\bf E}}[ \ell_t^\top \ensuremath{\mathbf x}_t ] & \mbox{non-negative random variables } \\
& = \delta + \mathop{\mbox{\bf E}}[ \hat{\ell}_t^\top \ensuremath{\mathbf x}_t ] & \mbox{$\hat{\ell}_t$ is independent of $\ensuremath{\mathbf x}_t$}
\end{aligned}$$ ◻
:::
We thus have, $$\begin{aligned}
& \mathop{\mbox{\bf E}}[ \ensuremath{\mathrm{{Regret}}}_T ] \\
& = \mathop{\mbox{\bf E}}[ \sum_{t=1}^T{\ell_t(i_t)}-{\sum_{t=1}^T{\ell_t(i^\star)}}] \\
& = \mathop{\mbox{\bf E}}[ \sum_{t }{\ell_t(i_t)}-{\sum_{t } {\hat{\ell}_t(i^\star)}} ] & \mbox{ $i^\star$ is indep. of $\hat{\ell}_t$} \\
& \leq \mathop{\mbox{\bf E}}[ \sum_{t }{\hat{\ell}_t(\ensuremath{\mathbf x}_t)}-\min_i{\sum_{t } {\hat{\ell}_t(i)}} ] + \delta T & \mbox{Lemma \ref{lem:shalom3} } \\
& = \mathop{\mbox{\bf E}}[ \ensuremath{\mathrm{{Regret}}}_{S_T}(\mathcal{A}) ] + \delta \cdot T \\
& \leq \frac{3}{2} GD \sqrt{\delta T} + \delta \cdot T & \mbox{ Theorem \ref{thm:gradient}}, \mathop{\mbox{\bf E}}[ |S_T|] = \delta T \\
& \leq 3 \frac{n}{ \sqrt{\delta}} \sqrt{T } + \delta \cdot T & \mbox{ For $\Delta_n$, $D \leq 2$ , $\|\hat{\ell}_t\|\leq \frac{n}{\delta} $} \\
& = O( T^{\frac{2}{3}} n^{\frac{2}{3}}) . & \delta= n^{\frac{2}{3}} T^{-\frac{1}{3}}
\end{aligned}$$ ◻
:::
### EXP3: simultaneous exploration and exploitation
The simple algorithm of the previous section can be improved by
combining the exploration and exploitation steps. This gives a
near-optimal regret algorithm, called EXP3, presented below.
::: algorithm
::: algorithmic
Input: parameter $\varepsilon> 0$. Set
$\ensuremath{\mathbf x}_1 = ({1}/{n}) \mathbf{1}$. For $t = 1$ to $T$: choose
$i_t \sim \ensuremath{\mathbf x}_t$ and play $i_t$. Let
$$\hat{\ell}_t(i)= {
\left\{
\begin{array}{ll}
{ \frac{1}{\ensuremath{\mathbf x}_t(i_t)} \cdot \ell_t (i_t)}, & { i = i_t} \\\\
{0 }, & {\text{otherwise}}
\end{array}
\right. }$$ Update
$\ensuremath{\mathbf y}_{t+1} (i) = \ensuremath{\mathbf x}_t(i) e^{-\varepsilon\hat{\ell}_t(i)} \ , \ \ensuremath{\mathbf x}_{t+1} = \frac{\ensuremath{\mathbf y}_{t+1} }{\|\ensuremath{\mathbf y}_{t+1}\|_1 }$
:::
:::
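The pseudocode above translates directly into a few lines of NumPy; the loss oracle and the choice of $\varepsilon$ are supplied by the caller (for example $\varepsilon = \sqrt{\log n/(Tn)}$ as in the analysis below). This is a sketch for illustration only.

```python
import numpy as np

def exp3(loss_fn, n, T, eps, seed=0):
    """Sketch of EXP3: importance-weighted losses fed to a multiplicative update.

    loss_fn(t, i): loss in [0, 1] of action i at round t (assumed oracle).
    """
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n)              # x_1 = (1/n) * 1
    total_loss = 0.0
    for t in range(T):
        i = rng.choice(n, p=x)           # sample i_t ~ x_t and play it
        loss = loss_fn(t, i)             # the only feedback observed
        est = np.zeros(n)
        est[i] = loss / x[i]             # unbiased estimator of the full loss vector
        y = x * np.exp(-eps * est)       # multiplicative update
        x = y / y.sum()
        total_loss += loss
    return total_loss
```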
As opposed to the simple multiarmed bandit algorithm, the EXP3 algorithm
explores every iteration by always creating an unbiased estimator of the
entire loss vector. This results in a possibly large magnitude of the
vectors $\hat{\ell}$ and a large gradient bound for use with online
gradient descent. However, the large magnitude vectors are created with
low probability (proportional to their magnitude), which allows for a
finer analysis.
Ultimately, the EXP3 algorithm attains a worst case regret bound of
$O(\sqrt{T n \log n})$, which is nearly optimal (up to a logarithmic
term in the number of actions).
::: {#Lemma:exp3regret .lemma}
**Lemma 6.3**. *Algorithm [\[alg:EXP3\]](#alg:EXP3){reference-type="ref"
reference="alg:EXP3"} with non-negative losses and
$\varepsilon= \sqrt{\frac{\log n}{T n} }$ guarantees the following
regret bound:
$$\mathop{\mbox{\bf E}}[\sum{\ell_t(i_t)}-\min_i{\sum{\ell_t(i)}}] \leq 2 \sqrt{ T n \log n } .\nonumber$$*
:::
::: proof
*Proof.* For the random losses $\{\hat{\ell}_t\}$ defined in algorithm
[\[alg:EXP3\]](#alg:EXP3){reference-type="ref" reference="alg:EXP3"},
notice that $$\begin{aligned}
& \mathop{\mbox{\bf E}}[ \hat{\ell}_t (i) ] = \Pr[ i_t = i] \cdot \frac{ \ell_t(i)}{ \ensuremath{\mathbf x}_t(i) } = \ensuremath{\mathbf x}_t(i) \cdot \frac{ \ell_t(i)}{ \ensuremath{\mathbf x}_t(i) } = \ell_t(i) . \notag \\
& \mathop{\mbox{\bf E}}[ \ensuremath{\mathbf x}_t^\top \hat{\ell}_t^2 ] = \sum_i \Pr[ i_t = i] \cdot \ensuremath{\mathbf x}_t(i) \hat{\ell}_t(i)^2 \notag \\
& = \sum_i \ensuremath{\mathbf x}_t(i)^2 \hat{\ell}_t(i)^2
= \sum_i \ell_t(i)^2 \leq {n} . \label{eqn:shalom1234}
\end{aligned}$$ Therefore $\mathop{\mbox{\bf E}}[\hat{\ell}_t]=\ell_t$, and the expected
regret with respect to the estimated losses $\{\hat{\ell}_t\}$ is equal to the
expected regret with respect to the true losses $\{\ell_t\}$.
The EXP3 algorithm applies Hedge to the losses given by $\hat{\ell}_t$,
which are all non-negative and thus satisfy the conditions of Theorem
[1.5](#lem:hedge){reference-type="ref" reference="lem:hedge"}. Thus, the
expected regret with respect to $\hat{\ell}_t$, can be bounded by,
$$\begin{aligned}
& \mathop{\mbox{\bf E}}[ \ensuremath{\mathrm{{Regret}}}_T ] = \mathop{\mbox{\bf E}}[ \sum_{t=1}^T{\ell_t(i_t)}-\min_i{\sum_{t=1}^T{\ell_t(i)}}] \\
& = \mathop{\mbox{\bf E}}[ \sum_{t=1}^T{\ell_t(i_t)}-{\sum_{t=1}^T{\ell_t(i^\star)}}] \\
& \leq \mathop{\mbox{\bf E}}[ \sum_{t=1}^{T} {\hat{\ell}_t(\ensuremath{\mathbf x_{t}})}-{\sum_{t=1}^{T} {\hat{\ell}_t(i^\star)}} ] & \mbox{ $i^\star$ is indep. of $\hat{\ell}_t$} \\
& \leq \mathop{\mbox{\bf E}}[ \varepsilon\sum_{t=1}^T \sum_{i=1}^n \hat{\ell}_t(i)^2 \ensuremath{\mathbf x}_t(i) + \frac{\log n}{\varepsilon} ] & \mbox{ Theorem \ref{lem:hedge} } \\
& \leq \varepsilon T n + \frac{\log n}{\varepsilon} & \mbox{ equation \eqref{eqn:shalom1234} } \\
& \leq 2 \sqrt{T n \log n }. & \mbox { by choice of $\varepsilon$ }
\end{aligned}$$ ◻
:::
We proceed to derive an algorithm for the more general setting of bandit
convex optimization that attains near-optimal regret.
## A Reduction from Limited Information to Full Information
In this section we derive a low regret algorithm for the general setting
of bandit convex optimization. In fact, we shall describe a general
technique for designing bandit algorithms, which is composed of two
parts:
1. A general technique for taking an online convex optimization
algorithm that uses only the gradients of the cost functions
(formally defined below), and applying it to a family of vector
random variables with carefully chosen properties.
2. Designing the random variables that allow the template reduction to
produce meaningful regret guarantees.
We proceed to describe the two parts of this reduction, and in the
remainder of this chapter we describe two examples of using this
reduction to design bandit convex optimization algorithms.
### Part 1: using unbiased estimators
The key idea behind many of the efficient algorithms for bandit convex
optimization is the following: although we cannot calculate
$\nabla f_t(\ensuremath{\mathbf x}_t)$ explicitly, it is possible to
find an *observable* random variable $\ensuremath{\mathbf g_{t}}$ that
satisfies
$\mathop{\mbox{\bf E}}[\ensuremath{\mathbf g_{t}}] \approx \nabla f_t (\ensuremath{\mathbf x}_t) = \nabla_t$.
Thus, $\ensuremath{\mathbf g_{t}}$ can be seen as an estimator of the
gradient. By substituting $\ensuremath{\mathbf g_{t}}$ for $\nabla_t$ in
an OCO algorithm, we will show that many times it retains its sublinear
regret bound.
Formally, the family of regret minimization algorithms for which this
reduction works is captured in the following definition.
::: definition
**Definition 6.4**. *(**first order OCO Algorithm**) Let ${\mathcal A}$
be an OCO (deterministic) algorithm receiving an arbitrary sequence of
differentiable loss functions $f_1,\ldots,f_T$, and producing decisions
$\ensuremath{\mathbf x}_1 \gets {\mathcal A}(\emptyset), \ensuremath{\mathbf x}_t \gets {\mathcal A}(f_1,\ldots,f_{t-1})$.
${\mathcal A}$ is called a *first order online algorithm* if the
following holds:*
- *The family of loss functions $\mathcal{F}$ is closed under addition
of linear functions: if $f\in \mathcal{F}$ and
$\ensuremath{\mathbf u}\in {\mathbb R}^n$ then
$f+ \ensuremath{\mathbf u}^\top \ensuremath{\mathbf x}\in \mathcal{F}$.*
- *Let $\hat{f}_t$ be the linear function
$\hat{f}_t(\ensuremath{\mathbf x}) = \nabla f_t(\ensuremath{\mathbf x}_t) ^\top \ensuremath{\mathbf x}$,
then for every iteration $t\in[T]$:
$${\mathcal A}(f_1,\ldots,f_{t-1}) = {\mathcal A}(\hat{f}_1,...,\hat{f}_{t-1})$$*
:::
We can now consider a formal reduction from any first order online
algorithm to a bandit convex optimization algorithm as follows.
::: algorithm
[]{#BCO2OCO label="BCO2OCO"}
::: algorithmic
Input: convex set $\ensuremath{\mathcal K}\subset {\mathbb R}^n$, first
order online algorithm ${\mathcal A}$. Let
$\ensuremath{\mathbf x}_1 = {\mathcal A}( \emptyset )$. Generate
distribution ${\mathcal D}_t$, sample
$\ensuremath{\mathbf y}_t \sim {\mathcal D}_t$ with
$\mathop{\mbox{\bf E}}[\ensuremath{\mathbf y}_t] = \ensuremath{\mathbf x}_t$.
Play $\ensuremath{\mathbf y}_t$. Observe
$f_t(\ensuremath{\mathbf y}_t)$, generate $\ensuremath{\mathbf g_{t}}$
with
$\mathop{\mbox{\bf E}}[\ensuremath{\mathbf g_{t}}] = \nabla f_t (\ensuremath{\mathbf x}_t)$.
Let
$\ensuremath{\mathbf x_{t+1}} = {\mathcal A}(\ensuremath{\mathbf g_{1}},...,\ensuremath{\mathbf g_{t}})$.
:::
:::
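In code, the reduction above is a thin wrapper around any first order algorithm. The following Python sketch fixes the interface; the objects `A`, `sample_around`, `grad_estimator`, and `value_oracle` are placeholders to be supplied by a concrete instantiation (such as the FKM algorithm later in this chapter):

```python
def bco_reduction(A, sample_around, grad_estimator, value_oracle, T):
    """Template reduction from bandit to full information (a sketch).

    A: first order OCO algorithm exposing A.predict() and A.update(g).
    sample_around(x): returns a random y with E[y] = x.
    grad_estimator(x, y, fy): returns g with E[g] = grad f_t(x).
    value_oracle(t, y): the single bandit observation f_t(y).
    """
    total = 0.0
    for t in range(T):
        x = A.predict()               # x_t from the full-information algorithm
        y = sample_around(x)          # play a random point whose mean is x_t
        fy = value_oracle(t, y)       # observe only f_t(y_t)
        total += fy
        g = grad_estimator(x, y, fy)  # unbiased gradient estimate
        A.update(g)                   # feed the estimate back to A
    return total
```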
Perhaps surprisingly, under very mild conditions the reduction above
guarantees the same regret bounds as the original first order algorithm
up to the magnitude of the estimated gradients. This is captured in the
following lemma.
::: {#Lemma:Flaxman_FirstOrderAlgos .lemma}
**Lemma 6.5**. *Let $\ensuremath{\mathbf u}$ be a *fixed* point in
$\ensuremath{\mathcal K}$. Let
$f_1,\ldots,f_T:\ensuremath{\mathcal K}\to {\mathbb R}$ be a sequence of
differentiable functions. Let ${\mathcal A}$ be a first order online
algorithm that ensures a regret bound of the form
$\ensuremath{\mathrm{{Regret}}}_T({{\mathcal A}}) \leq B_{{\mathcal A}}( \nabla f_1(\ensuremath{\mathbf x}_1),\ldots,\nabla f_T(\ensuremath{\mathbf x}_T))$
in the full information setting. Define the points
$\{ \ensuremath{\mathbf x}_t \}$ as:
$\ensuremath{\mathbf x}_1\gets{\mathcal A}(\emptyset)$,
$\ensuremath{\mathbf x}_t \gets {\mathcal A}(\ensuremath{\mathbf g_{1}},\ldots,\ensuremath{\mathbf g_{t-1}})$
where each $\ensuremath{\mathbf g_{t}}$ is a vector valued random
variable such that:
$$\mathop{\mbox{\bf E}}[\ensuremath{\mathbf g_{t}}\big \vert \ensuremath{\mathbf x}_1,f_1,\ldots, \ensuremath{\mathbf x}_t,f_t]=\nabla f_t(\ensuremath{\mathbf x}_t) .$$
Then the following holds for all
$\ensuremath{\mathbf u}\in \ensuremath{\mathcal K}$: $$\begin{aligned}
\mathop{\mbox{\bf E}}[\sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t)] - \sum_{t=1}^T f_t(\ensuremath{\mathbf u}) \leq \mathop{\mbox{\bf E}}[B_{{\mathcal A}}(\ensuremath{\mathbf g_{1}},\ldots,\ensuremath{\mathbf g_{T}})] .
\end{aligned}$$*
:::
::: proof
*Proof.* Define the functions
$h_t:\ensuremath{\mathcal K}\to{\mathbb R}$ as follows:
$$h_t(\ensuremath{\mathbf x}) = f_t(\ensuremath{\mathbf x}) + \boldsymbol\xi_t^\top \ensuremath{\mathbf x}, \; \text{where } \boldsymbol\xi_t = \ensuremath{\mathbf g_{t}}-\nabla f_t(\ensuremath{\mathbf x}_t).$$
Note that
$$\nabla h_t(\ensuremath{\mathbf x}_t) =\nabla f_t(\ensuremath{\mathbf x}_t)+ \ensuremath{\mathbf g_{t}}-\nabla f_t(\ensuremath{\mathbf x}_t)=\ensuremath{\mathbf g_{t}}.$$
Therefore, deterministically applying a first order method
${\mathcal A}$ on the random functions $h_t$ is equivalent to applying
${\mathcal A}$ on a stochastic first order approximation of the
deterministic functions $f_t$. Thus by the full-information regret bound
of ${\mathcal A}$ we have: $$\begin{aligned}
\label{equation:regretBeforeExpectation}
\sum_{t=1}^T h_t(\ensuremath{\mathbf x}_t) - \sum_{t=1}^T h_t(\ensuremath{\mathbf u}) \leq B_{{\mathcal A}}(\ensuremath{\mathbf g_{1}},\ldots,\ensuremath{\mathbf g_{T}}).
\end{aligned}$$ Also note that: $$\begin{aligned}
\mathop{\mbox{\bf E}}[h_t(\ensuremath{\mathbf x}_t)]&=\mathop{\mbox{\bf E}}[f_t(\ensuremath{\mathbf x}_t)]+\mathop{\mbox{\bf E}}[\boldsymbol\xi_t^\top \ensuremath{\mathbf x}_t] \\
& = \mathop{\mbox{\bf E}}[f_t(\ensuremath{\mathbf x}_t)]+\mathop{\mbox{\bf E}}[\mathop{\mbox{\bf E}}[\boldsymbol\xi_t^\top \ensuremath{\mathbf x}_t\big\vert \ensuremath{\mathbf x}_1,f_1,\ldots,\ensuremath{\mathbf x}_t,f_t] ] \\
&= \mathop{\mbox{\bf E}}[f_t(\ensuremath{\mathbf x}_t)]+\mathop{\mbox{\bf E}}[\mathop{\mbox{\bf E}}[\boldsymbol\xi_t \big\vert \ensuremath{\mathbf x}_1,f_1,\ldots,\ensuremath{\mathbf x}_t,f_t] ^\top \ensuremath{\mathbf x}_t] \\
& = \mathop{\mbox{\bf E}}[f_t(\ensuremath{\mathbf x}_t)].
\end{aligned}$$ where we used
$\mathop{\mbox{\bf E}}[\boldsymbol\xi_t\vert \ensuremath{\mathbf x}_1,f_1,\ldots,\ensuremath{\mathbf x}_t,f_t]=0$.
Similarly, since $\ensuremath{\mathbf u}\in\ensuremath{\mathcal K}$ is
fixed we have that
$\mathop{\mbox{\bf E}}[h_t(\ensuremath{\mathbf u})] = f_t(\ensuremath{\mathbf u})$.
The lemma follows from taking the expectation of Equation
[\[equation:regretBeforeExpectation\]](#equation:regretBeforeExpectation){reference-type="eqref"
reference="equation:regretBeforeExpectation"}. ◻
:::
### Part 2: point-wise gradient estimators
In the preceding part we have described how to convert a first order
algorithm for OCO to one that uses bandit information, using specially
tailored random variables. We now describe how to create these vector
random variables.
Although we cannot calculate $\nabla f_t(\ensuremath{\mathbf x}_t)$
explicitly, it is possible to find an *observable* random variable
$\ensuremath{\mathbf g_{t}}$ that satisfies
$\mathop{\mbox{\bf E}}[\ensuremath{\mathbf g_{t}}] \approx \nabla f_t$,
and serves as an estimator of the gradient.
The question is how to find an appropriate $\ensuremath{\mathbf g_{t}}$,
and in order to answer it we begin with an example in a 1-dimensional
case.
::: example
**Example 6.6** (A 1-dimensional gradient estimate). *Recall the
definition of the derivative: $$\label{derivative}
f'(x)=\lim_{\delta \rightarrow 0}{\frac{f(x+\delta)-f(x-\delta)}{2 \delta}}. \nonumber$$
The above shows that for a 1-dimensional derivative, two evaluations of
$f$ are required. Since in our problem we can perform only one
evaluation, let us define $g(x)$ as follows: $$g(x) = {
\left\{
\begin{array}{ll}
{\frac{f(x+\delta)}{\delta}}, & {\text{with probability } \frac{1}{2}} \\\\
{ - \frac{f(x-\delta)}{\delta}}, & { \text{with probability } \frac{1}{2}}
\end{array}
\right. }.
\label{gt}$$ It is clear that
$$\mathop{\mbox{\bf E}}[g(x)]={\frac{f(x+\delta)-f(x-\delta)}{2 \delta}}.$$
Thus, **in expectation**, for small $\delta$, $g(x)$ approximates
$f'(x)$.*
:::
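A quick Monte Carlo check of this estimator (an illustration with an arbitrary test function, not part of the text) shows that its mean is the symmetric difference quotient, while its individual values are of order $1/\delta$, i.e., the estimator is unbiased for the smoothed derivative but has large variance:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                                   # an arbitrary smooth test function
x, delta = 0.7, 1e-2

coin = rng.random(1_000_000) < 0.5           # fair coin for the two branches
samples = np.where(coin,
                   f(x + delta) / delta,     # +f(x+delta)/delta with probability 1/2
                   -f(x - delta) / delta)    # -f(x-delta)/delta with probability 1/2

print(samples.mean())                        # ~ (f(x+d) - f(x-d)) / (2d) ~ f'(x)
print(np.cos(x))                             # the true derivative, cos(0.7)
print(samples.std())                         # the price: fluctuations of order 1/delta
```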
#### The sphere sampling estimator
We will now show how the gradient estimator
[\[gt\]](#gt){reference-type="eqref" reference="gt"} can be extended to
the multidimensional case. Let $\ensuremath{\mathbf x}\in \mathbb{R}^n$,
and let $B_{\delta}$ and $S_{\delta}$ denote the $n$-dimensional ball
and sphere with radius $\delta:$
$$B_{\delta}=\left\{\ensuremath{\mathbf x}|\left\|\ensuremath{\mathbf x}\right\| \leq \delta \right\},$$
$$S_{\delta}=\left\{\ensuremath{\mathbf x}|\left\|\ensuremath{\mathbf x}\right\| = \delta \right\}.$$
We define
$\hat{f}(\ensuremath{\mathbf x})= \hat{f}_\delta(\ensuremath{\mathbf x})$
to be a $\delta$-smoothed version of $f(\ensuremath{\mathbf x})$:
$$\label{fhat}
\hat{f}_\delta \left(\ensuremath{\mathbf x}\right)=\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\in \mathbb{B}}\left[f\left(\ensuremath{\mathbf x}+\delta \ensuremath{\mathbf v}\right)\right],$$
where $\ensuremath{\mathbf v}$ is drawn from a uniform distribution over
the unit ball. This construction is very similar to the one used in
Lemma [2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"} in context of convergence analysis for
convex optimization. However, our goal here is very different.
Note that when $f$ is linear, we have
$\hat{f}_\delta(\ensuremath{\mathbf x})=f(\ensuremath{\mathbf x})$. We
shall address the case in which $f$ is indeed linear as a special case,
and show how to estimate the gradient of
$\hat{f}(\ensuremath{\mathbf x})$, which, under the assumption, is also
the gradient of $f(\ensuremath{\mathbf x})$. The following lemma shows a
simple relation between the gradient $\nabla \hat{f}_\delta$ and a
uniformly drawn unit vector.
::: {#lem_stokes .lemma}
**Lemma 6.7**. *Fix $\delta>0$. Let
$\hat{f}_\delta(\ensuremath{\mathbf x})$ be as defined in
[\[fhat\]](#fhat){reference-type="eqref" reference="fhat"}, and let
$\ensuremath{\mathbf u}$ be a uniformly drawn unit vector
$\ensuremath{\mathbf u}\sim \ensuremath{\mathbb {S}}$. Then
$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf u}\in \ensuremath{\mathbb {S}}}\left[f\left(\ensuremath{\mathbf x}+\delta \ensuremath{\mathbf u}\right) \ensuremath{\mathbf u}\right]=\frac{\delta}{n}\nabla\hat{f}_\delta \left( \ensuremath{\mathbf x}\right).$$*
:::
::: proof
*Proof.* Using Stokes' theorem from calculus, we have
$$\nabla\underset{B_{\delta}}{\int}f\left(\ensuremath{\mathbf x}+\ensuremath{\mathbf v}\right)d \ensuremath{\mathbf v}=\underset{S_{\delta}}{\int}f\left(\ensuremath{\mathbf x}+\ensuremath{\mathbf u}\right)\frac{\ensuremath{\mathbf u}}{\left\Vert \ensuremath{\mathbf u}\right\Vert }d \ensuremath{\mathbf u}.\label{stokes}$$
From [\[fhat\]](#fhat){reference-type="eqref" reference="fhat"}, and by
definition of expectation, we have
$$\hat{f}_\delta(\ensuremath{\mathbf x})=\frac{\underset{B_{\delta}}{\int}f\left(\ensuremath{\mathbf x}+ \ensuremath{\mathbf v}\right)d \ensuremath{\mathbf v}}{\mbox{vol}( B_{\delta})} . \label{vol1}$$
where $\mbox{vol}(B_{\delta})$ is the volume of an n-dimensional ball of
radius $\delta$. Similarly,
$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf u}\in S}\left[f\left(\ensuremath{\mathbf x}+\delta \ensuremath{\mathbf u}\right)\ensuremath{\mathbf u}\right]=\frac{\underset{S_{\delta}}{\int}f\left(\ensuremath{\mathbf x}+ \ensuremath{\mathbf u}\right)\frac{\ensuremath{\mathbf u}}{\left\Vert \ensuremath{\mathbf u}\right\Vert }du}{\mbox{vol}(S_{\delta} ) } . \label{vol2}$$
Combining [\[fhat\]](#fhat){reference-type="eqref" reference="fhat"},
[\[stokes\]](#stokes){reference-type="eqref" reference="stokes"},
[\[vol1\]](#vol1){reference-type="eqref" reference="vol1"}, and
[\[vol2\]](#vol2){reference-type="eqref" reference="vol2"}, and the fact
that the ratio of the volume of a ball in $n$ dimensions and the sphere
of dimension $n-1$ is
$\textrm{vol}_{n}B_{\delta}/\textrm{vol}_{n-1}S_{\delta}=\delta/n$ gives
the desired result. ◻
:::
Under the assumption that $f$ is linear, Lemma
[6.7](#lem_stokes){reference-type="ref" reference="lem_stokes"} suggests
a simple estimator for the gradient $\nabla f$. Draw a random unit
vector $\ensuremath{\mathbf u}$, and let
$g\left(\ensuremath{\mathbf x}\right)=\frac{n}{\delta}f\left(\ensuremath{\mathbf x}+\delta \ensuremath{\mathbf u}\right)\ensuremath{\mathbf u}$.
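For a linear $f$ this estimator is unbiased for $\nabla f$ itself, which is easy to verify empirically. In the sketch below the linear function and its coefficient vector are arbitrary demo choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 5, 0.1
w = rng.standard_normal(n)                      # f(x) = w^T x, so grad f(x) = w
x = rng.standard_normal(n)

U = rng.standard_normal((200_000, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform samples on the unit sphere
vals = (x + delta * U) @ w                      # f(x + delta * u) for each sample u
estimate = (n / delta) * (vals[:, None] * U).mean(axis=0)

print(np.round(estimate, 2))                    # ~ w = grad f(x)
print(np.round(w, 2))
```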
#### The ellipsoidal sampling estimator
The sphere estimator above is at times difficult to use: when the center
of the sphere is very close to the boundary of the decision set only a
very small sphere can fit completely inside. This results in a gradient
estimator with large variance.
In such cases, it is useful to consider ellipsoids rather than spheres.
Luckily, the generalization to ellipsoidal sampling for gradient
estimation is a simple corollary of our derivation above:
::: {#Corollary:Gradient_Estimate_SinglePoint .corollary}
**Corollary 6.8**. *Consider a continuous function
$f:{\mathbb R}^n\to {\mathbb R}$, an invertible matrix
$A\in {\mathbb R}^{n \times n}$, and let
$\ensuremath{\mathbf v}\sim \mathbb{B}^n$ and
$\ensuremath{\mathbf u}\sim \ensuremath{\mathbb {S}}^n$. Define the
smoothed version of $f$ with respect to $A$: $$\begin{aligned}
\hat{f}(\ensuremath{\mathbf x}) = \mathop{\mbox{\bf E}}[ f(\ensuremath{\mathbf x}+A \ensuremath{\mathbf v}) ].
\end{aligned}$$ Then the following holds: $$\begin{aligned}
\nabla \hat{f}(\ensuremath{\mathbf x}) = n \mathop{\mbox{\bf E}}[ f(\ensuremath{\mathbf x}+A \ensuremath{\mathbf u}) A^{-1} \ensuremath{\mathbf u}].
\end{aligned}$$*
:::
::: proof
*Proof.* Let $g(\ensuremath{\mathbf x}) = f(A \ensuremath{\mathbf x})$,
and
$\hat{g}(\ensuremath{\mathbf x}) = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\in \mathbb{B}} [g(\ensuremath{\mathbf x}+ \ensuremath{\mathbf v})]$.
$$\begin{aligned}
n \mathop{\mbox{\bf E}}[ f(\ensuremath{\mathbf x}+A \ensuremath{\mathbf u}) A^{-1} \ensuremath{\mathbf u}] & = n A^{-1} \mathop{\mbox{\bf E}}[ f(\ensuremath{\mathbf x}+A \ensuremath{\mathbf u}) \ensuremath{\mathbf u}] \\
& = n A^{-1} \mathop{\mbox{\bf E}}[ g ( A^{-1} \ensuremath{\mathbf x}+ \ensuremath{\mathbf u}) \ensuremath{\mathbf u}] \\
& = A^{-1} \nabla \hat{g}(A^{-1} \ensuremath{\mathbf x}) & \mbox { Lemma \ref{lem_stokes} } \\
& = A^{-1} A \nabla \hat{f}( \ensuremath{\mathbf x}) = \nabla \hat{f}(\ensuremath{\mathbf x}).
\end{aligned}$$ ◻
:::
## Online Gradient Descent without a Gradient
The simplest and historically earliest application of the BCO-to-OCO
reduction outlined before is the application of the online gradient
descent algorithm to the bandit setting. The FKM algorithm (named after
its inventors, see bibliographic section) is outlined in algorithm
[\[FKM_alg\]](#FKM_alg){reference-type="ref" reference="FKM_alg"}.
For simplicity, we assume that the set $\ensuremath{\mathcal K}$
contains the unit ball centered at the zero vector, denoted
$\mathbf{0}$. Denote
$\ensuremath{\mathcal K}_\delta = \{ \ensuremath{\mathbf x}\ | \ \frac{1}{1-\delta} \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}\}$.
It is left as an exercise to show that $\ensuremath{\mathcal K}_\delta$
is convex for any $0 < \delta < 1$ and that all balls of radius
$\delta$ around points in $\ensuremath{\mathcal K}_\delta$ are contained
in $\ensuremath{\mathcal K}$.
We also assume for simplicity that the adversarially chosen cost
functions are bounded by one over $\ensuremath{\mathcal K}$, i.e., that
$| f_{t}(\ensuremath{\mathbf x}) | \leq 1$ for all
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$.
::: center
![The Minkowski set $\ensuremath{\mathcal K}_\delta$
](images/fig_mink.png){#fig:Minkowski width="3.5in"}
:::
::: algorithm
::: algorithmic
Input: decision set $\ensuremath{\mathcal K}$ containing $\mathbf{0}$,
set $\ensuremath{\mathbf x}_1 = \mathbf{0}$, parameters $\delta,\eta$.
For $t = 1$ to $T$: draw $\ensuremath{\mathbf u}_t \in \ensuremath{\mathbb {S}}_1$ uniformly
at random, set
$\ensuremath{\mathbf y}_t = \ensuremath{\mathbf x}_t + \delta \ensuremath{\mathbf u}_t$.
Play $\ensuremath{\mathbf y}_t$, observe and incur loss
$f_t \left( \ensuremath{\mathbf y}_t \right)$. Let
$\ensuremath{\mathbf g_{t}}= \frac{n}{\delta} f_{t}\left(\ensuremath{\mathbf y}_{t}\right)\ensuremath{\mathbf u}_{t}$.
Update
$\ensuremath{\mathbf x}_{t+1}= \underset{\ensuremath{\mathcal K}_\delta}{\mathop{\Pi}}\left[\ensuremath{\mathbf x}_{t}- \eta \ensuremath{\mathbf g_{t}}\right]$.
:::
:::
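The pseudocode above admits a compact NumPy sketch; the bandit value oracle and the Euclidean projection onto $\mathcal{K}_\delta$ are left as user-supplied assumptions, and the parameter choices are those of Theorem 6.9 below.

```python
import numpy as np

def fkm(value_oracle, project_shrunk, n, T, D, seed=0):
    """Sketch of FKM: OGD driven by the spherical one-point gradient estimator.

    value_oracle(t, y): returns f_t(y), the only feedback (assumed in [-1, 1]).
    project_shrunk(x, delta): Euclidean projection onto K_delta (assumed given).
    """
    rng = np.random.default_rng(seed)
    eta = D / (n * T ** 0.75)                # step size from Theorem 6.9
    delta = T ** -0.25
    x = np.zeros(n)                          # x_1 = 0 lies in K_delta since 0 is in K
    total = 0.0
    for t in range(T):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)               # uniform unit vector u_t
        y = x + delta * u                    # play a point on the delta-sphere around x_t
        fy = value_oracle(t, y)
        total += fy
        g = (n / delta) * fy * u             # one-point gradient estimate
        x = project_shrunk(x - eta * g, delta)
    return total
```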
The FKM algorithm is an instantiation of the generic reduction from
bandit convex optimization to online convex optimization with spherical
gradient estimators over the set $\ensuremath{\mathcal K}_\delta$. It
iteratively projects onto $\ensuremath{\mathcal K}_\delta$, in order to
have enough space for spherical gradient estimation. This degrades its
performance by a controlled quantity. Its regret is bounded as follows.
::: {#FKM_prop .theorem}
**Theorem 6.9**. *Algorithm [\[algFKM\]](#algFKM){reference-type="ref"
reference="algFKM"} with parameters
$\ \eta = \frac{D}{n T^{3/4} } , \delta = \frac{1}{T^{1/4}}$ guarantees
the following expected regret bound
$$\sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]-\min_{\ensuremath{\mathbf x}\in\mathcal{K}} \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}) \leq 9 n D G T^{3/4} = O(T^{3/4}) .$$*
:::
::: proof
*Proof.* Recall our notation of
$\ensuremath{\mathbf x}^\star = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \sum_{t=1}^T f_t(\ensuremath{\mathbf x})$.
Denote
$$\ensuremath{\mathbf x}_{\delta}^{\star}= \mathop{\Pi}_{\ensuremath{\mathcal K}_\delta} (\ensuremath{\mathbf x}^\star ) .$$
Then by properties of projections we have
$\|\ensuremath{\mathbf x}_\delta^\star - \ensuremath{\mathbf x}^\star\| \leq \delta D$,
where $D$ is the diameter of $\ensuremath{\mathcal K}$. Thus, assuming
that the cost functions $\{f_t\}$ are $G$-Lipschitz, we have
$$\label{Lip_step}
\sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star)
\leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}_\delta^\star) + \delta T G D .$$
Denote
$\hat{f}_t = \hat{f}_{\delta,t} = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf u}\sim \mathbb{B}} [f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf u}) ]$
for shorthand. We can now bound the regret by $$\begin{aligned}
& \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star) \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf x}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star) + \delta D GT & \mbox{$f_t$ is $G$-Lipschitz } \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ {f}_{t}(\ensuremath{\mathbf x}_{t}) ]- \sum_{t=1}^T {f}_{t} (\ensuremath{\mathbf x}^\star_\delta) + 2 \delta D G T & \mbox{Inequality (\ref{Lip_step}}) \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ \hat{f}_{t}(\ensuremath{\mathbf x}_{t}) ]- \sum_{t=1}^T \hat{f}_{t} (\ensuremath{\mathbf x}^\star_\delta) + 4 \delta D G T & \mbox{ Lemma \ref{lem:SmoothingLemma} } \\
& \leq \ensuremath{\mathrm{{Regret}}}_{OGD}( \ensuremath{\mathbf g_{1}} , ..., \ensuremath{\mathbf g_{T}} ) + 4 \delta D G T & \mbox{ Lemma \ref{Lemma:Flaxman_FirstOrderAlgos} } \\
& \leq \eta \sum_{t=1}^T \| \ensuremath{\mathbf g_{t}}\|^2 + \frac{D^2}{\eta} + 4 \delta D G T & \mbox{ OGD regret, Theorem \ref{thm:gradient} } \\
& \leq \eta \frac{n^2}{\delta^2} T + \frac{D^2}{\eta} + 4 \delta D G T & |\ensuremath{\mathbf f_{t}}(\ensuremath{\mathbf x})| \leq 1 \\
& \leq 9 n D G T^{3/4} . & \eta = \frac{D}{n T^{3/4} } , \delta = \frac{1}{T^{1/4}}
\end{aligned}$$ ◻
:::
## \* Optimal Regret Algorithms for Bandit Linear Optimization
A special case of BCO that is of considerable interest is BLO---Bandit
Linear Optimization. This setting has linear cost functions, and
captures the network routing and ad placement examples discussed in the
beginning of this chapter, as well as the non-stochastic MAB problem.
In this section we give near-optimal regret bounds for BLO using
techniques from interior point methods for convex optimization.
The generic OGD method of the previous section suffers from three
pitfalls:
1. The gradient estimators are biased, and estimate the gradient of a
smoothed version of the real cost function.
2. The gradient estimators require enough "wiggle room" and are thus
ill-defined on the boundary of the decision set.
3. The gradient estimates have potentially large magnitude,
proportional to the distance from the boundary.
Fortunately, the first issue is non-existent for linear functions - the
gradient estimators turn out to be unbiased for linear functions. In the
notation of the previous chapters, we have for linear functions:
$$\hat{f}_\delta(\ensuremath{\mathbf x}) = \mathop{\mbox{\bf E}}_{\ensuremath{\mathbf v}\sim \mathbb{B}} [f(\ensuremath{\mathbf x}+ \delta \ensuremath{\mathbf v}) ] = f(\ensuremath{\mathbf x}) .$$
Thus, Lemma [6.7](#lem_stokes){reference-type="ref"
reference="lem_stokes"} gives us a stronger guarantee:
$$\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf u}\in \ensuremath{\mathbb {S}}}\left[f\left(\ensuremath{\mathbf x}+\delta \ensuremath{\mathbf u}\right) \ensuremath{\mathbf u}\right]=\frac{\delta}{n}\nabla\hat{f}_\delta \left( \ensuremath{\mathbf x}\right) = \frac{\delta}{n} \nabla f(\ensuremath{\mathbf x}) .$$
To resolve the second and third issues we use self-concordant barrier
functions, a rather advanced technique from interior point methods for
convex optimization.
### Self-concordant barriers
Self-concordant barrier functions were devised in the context of
interior point methods for optimization as a way of ensuring that the
Newton method converges in polynomial time over bounded convex sets. In
this brief introduction we survey some of their beautiful properties
that will allow us to derive an optimal regret algorithm for BLO.
::: definition
**Definition 6.10**. *Let $\ensuremath{\mathcal K}\subseteq {\mathbb R}^n$ be
a convex set with a nonempty interior
$\text{int}(\ensuremath{\mathcal K})$. A function
$\ensuremath{\mathcal R}:\text{int}(\ensuremath{\mathcal K})\to {\mathbb R}$
is called $\nu$-self-concordant if:*
1. *$\ensuremath{\mathcal R}$ is three times continuously
differentiable and convex, and approaches infinity along any
sequence of points approaching the boundary of
$\ensuremath{\mathcal K}$.*
2. *For every $\ensuremath{\mathbf h}\in {\mathbb R}^n$ and
$\ensuremath{\mathbf x}\in \text{int}(\ensuremath{\mathcal K})$ the
following holds: $$\begin{aligned}
&|\nabla^3\ensuremath{\mathcal R}(\ensuremath{\mathbf x})[\ensuremath{\mathbf h},\ensuremath{\mathbf h},\ensuremath{\mathbf h}] |\leq 2( \nabla^2\ensuremath{\mathcal R}(\ensuremath{\mathbf x})[\ensuremath{\mathbf h},\ensuremath{\mathbf h}])^{3/2} ,\\
&|\nabla\ensuremath{\mathcal R}(\ensuremath{\mathbf x})[\ensuremath{\mathbf h}] |\leq \nu^{1/2}( \nabla^2\ensuremath{\mathcal R}(\ensuremath{\mathbf x})[\ensuremath{\mathbf h},\ensuremath{\mathbf h}])^{1/2}
\end{aligned}$$*
:::
where the third order differential is defined as: $$\begin{aligned}
\nabla^3\ensuremath{\mathcal R}(\ensuremath{\mathbf x})[\ensuremath{\mathbf h},\ensuremath{\mathbf h},\ensuremath{\mathbf h}] \stackrel{\text{\tiny def}}{=}\left. \frac{\partial^3}{\partial t_1 \partial t_2 \partial t_3} \ensuremath{\mathcal R}(\ensuremath{\mathbf x}+t_1 \ensuremath{\mathbf h}+t_2 \ensuremath{\mathbf h}+t_3 \ensuremath{\mathbf h})\right \vert_{t_1=t_2=t_3=0}
\end{aligned}$$ The Hessian of a self-concordant barrier induces a local
norm at every
$\ensuremath{\mathbf x}\in \text{int}(\ensuremath{\mathcal K})$, we
denote this norm by $||\cdot||_\ensuremath{\mathbf x}$ and its dual by
$||\cdot||_\ensuremath{\mathbf x}^{*},$ which are defined
$\forall \ensuremath{\mathbf h}\in {\mathbb R}^n$ by $$\begin{aligned}
\|\ensuremath{\mathbf h}\|_\ensuremath{\mathbf x}= \sqrt{\ensuremath{\mathbf h}^\top \nabla^2\ensuremath{\mathcal R}(\ensuremath{\mathbf x}) \ensuremath{\mathbf h}}, \qquad \|\ensuremath{\mathbf h}\|_\ensuremath{\mathbf x}^{*} = \sqrt{\ensuremath{\mathbf h}^\top (\nabla^2\ensuremath{\mathcal R}(\ensuremath{\mathbf x}))^{-1} \ensuremath{\mathbf h}}.
\end{aligned}$$ We assume that
$\nabla^2\ensuremath{\mathcal R}(\ensuremath{\mathbf x})$ always has
full rank. In BCO applications this is easy to ensure by adding a
fictitious quadratic function to the barrier, which does not affect the
overall regret by more than a constant.
Let $\ensuremath{\mathcal R}$ be a self-concordant barrier and
$\ensuremath{\mathbf x}\in \text{int}(\ensuremath{\mathcal K})$. The
*Dikin ellipsoid* is $$\begin{aligned}
%\label{Definition:Dikin_ellipsoid}
{\mathcal E}_1(\ensuremath{\mathbf x}) :=\{\ensuremath{\mathbf y}\in{\mathbb R}^n : \|\ensuremath{\mathbf y}-\ensuremath{\mathbf x}\|_\ensuremath{\mathbf x}\leq 1\},
\end{aligned}$$ i.e., the $\|\cdot\|_\ensuremath{\mathbf x}$-unit ball
centered around $\ensuremath{\mathbf x}$. A key property of self-concordant
barriers is that the Dikin ellipsoid is always completely contained in
$\ensuremath{\mathcal K}$.
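To make the local norm and the Dikin ellipsoid concrete, consider the logarithmic barrier of the box $[0,1]^n$, a standard $2n$-self-concordant barrier (this choice of set and barrier is an illustration, not taken from the text). The snippet below computes the barrier's Hessian at an interior point and checks that a point on the boundary of the Dikin ellipsoid remains inside the box:

```python
import numpy as np

def barrier_hessian(x):
    """Hessian of R(x) = -sum_i [log x_i + log(1 - x_i)] on the open box (0,1)^n."""
    return np.diag(1.0 / x**2 + 1.0 / (1.0 - x)**2)

rng = np.random.default_rng(0)
n = 4
x = rng.uniform(0.05, 0.95, size=n)          # an interior point of the box
H = barrier_hessian(x)

h = rng.standard_normal(n)
h /= np.sqrt(h @ H @ h)                      # normalize so that ||h||_x = 1
y = x + h                                    # a point on the boundary of the Dikin ellipsoid

assert np.all(y > 0) and np.all(y < 1)       # the Dikin ellipsoid lies inside K
print(np.round(x, 3), np.round(y, 3))
```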
In our next analysis we will need to bound
$\ensuremath{\mathcal R}(\ensuremath{\mathbf y}) - \ensuremath{\mathcal R}(\ensuremath{\mathbf x})$
for
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \text{int}(\ensuremath{\mathcal K})$,
for which the following lemma is useful:
::: {#Lemma:MinkowskiBarrier .lemma}
**Lemma 6.11**. *Let $\ensuremath{\mathcal R}$ be a $\nu$-self
concordant function over $\ensuremath{\mathcal K}$, then for all
$\ensuremath{\mathbf x},\ensuremath{\mathbf y}\in \text{int}(\ensuremath{\mathcal K})$:
$$\ensuremath{\mathcal R}(\ensuremath{\mathbf y})-\ensuremath{\mathcal R}(\ensuremath{\mathbf x})\leq \nu \log \frac{1}{1-\pi_{\ensuremath{\mathbf x}}(\ensuremath{\mathbf y})},$$
where
$\pi_\ensuremath{\mathbf x}(\ensuremath{\mathbf y}) = \inf\{t\geq 0: \ensuremath{\mathbf x}+t^{-1}(\ensuremath{\mathbf y}-\ensuremath{\mathbf x})\in\ensuremath{\mathcal K}\} .$*
:::
The function $\pi_\ensuremath{\mathbf x}(\ensuremath{\mathbf y})$ is
called the Minkowski function for $\ensuremath{\mathcal K}$, and its
output is always in the interval $[0,1]$. Moreover, as $\ensuremath{\mathbf y}$ approaches
the boundary of $\ensuremath{\mathcal K}$,
$\pi_\ensuremath{\mathbf x}(\ensuremath{\mathbf y})\to 1$.
Another important property of self-concordant functions is the
relationship between a point and the optimum, and the norm of the
gradient at the point, according to the local norm, as given by the
following lemma.
::: {#Lemma:DistanceAndGradients .lemma}
**Lemma 6.12**. *Let
$\ensuremath{\mathbf x}\in \text{int}(\ensuremath{\mathcal K})$ be such
that
$\|\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf x}) \| _\ensuremath{\mathbf x}^* \leq \frac{1}{4}$,
and let
$\ensuremath{\mathbf x}^\star = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \ensuremath{\mathcal R}(\ensuremath{\mathbf x})$.
Then
$$\| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}^\star\|_x \leq 2 \|\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf x}) \| _\ensuremath{\mathbf x}^* .$$*
:::
### A near-optimal algorithm
We have now set up all the necessary tools to derive a near-optimal BLO
algorithm, presented in algorithm
[\[alg:scrible\]](#alg:scrible){reference-type="ref"
reference="alg:scrible"}.
::: algorithm
[]{#alg:scrible label="alg:scrible"}
::: algorithmic
Input: decision set $\ensuremath{\mathcal K}$ with self-concordant
barrier $\ensuremath{\mathcal R}$, set
$\ensuremath{\mathbf x}_1 \in \text{int}(\ensuremath{\mathcal K})$ such
that $\nabla \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_1) = 0$,
parameters $\eta,\delta$. For $t = 1$ to $T$: let
$\ensuremath{\mathbf A_{t}}= \left[\nabla^2 \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t) \right]^{-1/2}$.
Pick $\ensuremath{\mathbf u}_t \in \ensuremath{\mathbb {S}}$
uniformly, and set
$\ensuremath{\mathbf y}_t = \ensuremath{\mathbf x}_t + \ensuremath{\mathbf A_{t}}\ensuremath{\mathbf u}_t$.
Play $\ensuremath{\mathbf y}_t$, observe and suffer loss
$f_t \left( \ensuremath{\mathbf y}_t \right)$. Let
$\ensuremath{\mathbf g_{t}}= n f_{t}\left(\ensuremath{\mathbf y}_{t}\right) \ensuremath{\mathbf A_{t}}^{-1 }\ensuremath{\mathbf u}_{t}$.
[]{#line:rftl label="line:rftl"} Update
$$\ensuremath{\mathbf x}_{t+1}= \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_\delta} \left\{ \eta \sum_{\tau =1 }^t \ensuremath{\mathbf g_{\tau}}^\top \ensuremath{\mathbf x}+ \ensuremath{\mathcal R}(\ensuremath{\mathbf x}) \right\} .$$
:::
:::
::: theorem
**Theorem 6.13**. *For appropriate choice of $\eta,\delta$, the SCRIBLE
algorithm guarantees
$$\sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]-\min_{\ensuremath{\mathbf x}\in\mathcal{K}} \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}) \leq O\left(\sqrt{T} \log T \right).$$*
:::
::: proof
*Proof.* First, we note that
$\ensuremath{\mathbf y}_t \in \ensuremath{\mathcal K}$, i.e., the algorithm
never plays a point outside of the decision set. The reason is that
$\ensuremath{\mathbf x}_t \in \ensuremath{\mathcal K}$ and
$\ensuremath{\mathbf y}_t$ lies in the Dikin ellipsoid centered at
$\ensuremath{\mathbf x}_t$.
Further, by Corollary
[6.8](#Corollary:Gradient_Estimate_SinglePoint){reference-type="ref"
reference="Corollary:Gradient_Estimate_SinglePoint"}, we have that
$$\mathop{\mbox{\bf E}}[ \ensuremath{\mathbf g_{t}}] = \nabla \hat{f}_t (\ensuremath{\mathbf x}_t) = \nabla f_t(\ensuremath{\mathbf x}_t),$$
where the latter equality follows since $f_t$ is linear, and thus its
smoothed version is identical to itself.
A final observation is that line
[\[line:rftl\]](#line:rftl){reference-type="ref" reference="line:rftl"}
in the algorithm is an invocation of the RFTL algorithm with the
self-concordant barrier $\ensuremath{\mathcal R}$ serving as a
regularization function. The RFTL algorithm for linear functions is a
first order OCO algorithm and thus Lemma
[6.5](#Lemma:Flaxman_FirstOrderAlgos){reference-type="ref"
reference="Lemma:Flaxman_FirstOrderAlgos"} applies.
We can now bound the regret by $$\begin{aligned}
& \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star) \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ \hat{f}_{t}(\ensuremath{\mathbf x}_{t}) ]- \sum_{t=1}^T \hat{f}_{t} (\ensuremath{\mathbf x}^\star) & \mbox{ $\hat{f}_t = f_t$, $\mathop{\mbox{\bf E}}[\ensuremath{\mathbf y}_t] = \ensuremath{\mathbf x}_t $ } \\
& \leq \ensuremath{\mathrm{{Regret}}}_{RFTL}( \ensuremath{\mathbf g_{1}} , ..., \ensuremath{\mathbf g_{T}}) & \mbox{ Lemma \ref{Lemma:Flaxman_FirstOrderAlgos} } \\
& \leq \sum_{t=1}^T \ensuremath{\mathbf g_{t}}^\top (\ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}}) + \frac{\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^\star) - \ensuremath{\mathcal R}( \ensuremath{\mathbf x}_1)}{\eta} & \mbox{ Lemma \ref{lem:FTL-BTL}} \\
& \leq \sum_{t=1}^T \| \ensuremath{\mathbf g_{t}}\|_{t}^* \| \ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} \|_{t} + \frac{\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^\star) - \ensuremath{\mathcal R}( \ensuremath{\mathbf x}_1)}{\eta} . & \mbox{ Cauchy-Schwarz}
\end{aligned}$$ Here we use our notation from the previous chapter for
the local norm
$\| \ensuremath{\mathbf h}\|_t = \| \ensuremath{\mathbf h}\|_{\ensuremath{\mathbf x}_t} = \sqrt{\ensuremath{\mathbf h}^\top \nabla^2 \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t) \ensuremath{\mathbf h}}$.
To bound the last expression, we use Lemma
[6.12](#Lemma:DistanceAndGradients){reference-type="ref"
reference="Lemma:DistanceAndGradients"}, and the definition of
$\ensuremath{\mathbf x_{t+1}} = \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \Phi_t(\ensuremath{\mathbf x})$
where
$\Phi_t(\ensuremath{\mathbf x}) = \eta \sum_{\tau =1 }^t \ensuremath{\mathbf g_{\tau}}^\top \ensuremath{\mathbf x}+ \ensuremath{\mathcal R}(\ensuremath{\mathbf x})$
is a self-concordant barrier. Thus,
$$\| \ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x_{t+1}} \|_{t} \leq 2 \| \nabla \Phi_t(\ensuremath{\mathbf x_{t}}) \|_{t}^{*} = 2 \| \nabla \Phi_{t-1}(\ensuremath{\mathbf x_{t}}) + \eta \ensuremath{\mathbf g_{t}}\| _{t}^* = 2 \eta \| \ensuremath{\mathbf g_{t}}\|_{t}^* ,$$
since $\nabla \Phi_{t-1}(\ensuremath{\mathbf x_{t}}) = 0$ by definition
of $\ensuremath{\mathbf x_{t}}$. Recall that to use Lemma
[6.12](#Lemma:DistanceAndGradients){reference-type="ref"
reference="Lemma:DistanceAndGradients"}, we need
$\| \nabla \Phi_t(\ensuremath{\mathbf x_{t}})\|_{t}^* = \eta \| \ensuremath{\mathbf g_{t}}\|_{t} ^* \leq \frac{1}{4}$,
which is true by choice of $\eta$ and since
$$\|\ensuremath{\mathbf g_{t}}\|^{* \ 2}_{t} \leq n^2 \ensuremath{\mathbf u}_t^T \ensuremath{\mathbf A_{t}}^{-T} \nabla^{-2} \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_t) \ensuremath{\mathbf A_{t}}^{-1} \ensuremath{\mathbf u}_t \leq n^2 .$$
Thus, $$\begin{aligned}
& \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star) \leq 2 \eta \sum_{t=1}^T \| \ensuremath{\mathbf g_{t}}\|_{t}^{* \ 2} + \frac{\ensuremath{\mathcal R}(\ensuremath{\mathbf x}^\star) - \ensuremath{\mathcal R}( \ensuremath{\mathbf x}_1)}{\eta} \\
& \leq 2 \eta n^2 T + \frac{ \ensuremath{\mathcal R}(\ensuremath{\mathbf x}^\star) - \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_1) }{\eta} .
\end{aligned}$$ It remains to bound the Bregman divergence with respect
to $\ensuremath{\mathbf x}^\star$, for which we use a similar technique
as in the analysis of algorithm
[\[algFKM\]](#algFKM){reference-type="ref" reference="algFKM"}, and
bound the regret with respect to $\ensuremath{\mathbf x}^\star_\delta$,
which is the projection of $\ensuremath{\mathbf x}^\star$ onto
$\ensuremath{\mathcal K}_\delta$. Using equation
[\[Lip_step\]](#Lip_step){reference-type="eqref" reference="Lip_step"},
we can bound the overall regret by: $$\begin{aligned}
& \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}^\star) \\
& \leq \sum_{t=1}^T \mathop{\mbox{\bf E}}[ f_{t}(\ensuremath{\mathbf y}_{t}) ]- \sum_{t=1}^T f_{t} (\ensuremath{\mathbf x}_\delta^*) + \delta T G D & \mbox{ equation \eqref{Lip_step} } \\
& = 2 \eta n^2 T + \frac{ \ensuremath{\mathcal R}(\ensuremath{\mathbf x}^\star_\delta) - \ensuremath{\mathcal R}(\ensuremath{\mathbf x}_1) }{\eta} + \delta T G D &\mbox { above derivation} \\
& \leq 2 \eta n^2 T + \frac{\nu \log \frac{1}{1-\pi_{\ensuremath{\mathbf x}_1}(\ensuremath{\mathbf x}^\star_\delta)} }{\eta} + \delta T G D & \mbox { Lemma \ref{Lemma:MinkowskiBarrier} } \\
& \leq 2 \eta n^2 T + \frac{\nu \log \frac{1}{\delta }}{\eta} + \delta T G D & \ensuremath{\mathbf x}^\star_\delta \in \ensuremath{\mathcal K}_\delta .
\end{aligned}$$ Taking $\eta = O(\frac{1}{\sqrt{T}})$ and
$\delta = O(\frac{1}{T})$, the above bound implies our theorem. ◻
:::
## Bibliographic Remarks {#bibliographic-remarks-3}
The Multi-Armed Bandit problem has a history going back more than fifty
years to the work of @Robbins52; see the survey of @BubeckC12 for a much
more detailed history. The non-stochastic MAB problem and the EXP3
algorithm, as well as tight lower bounds were given in the seminal paper
of @AueCesFreSch03nonstochastic. The logarithmic gap in attainable
regret for non-stochastic MAB was resolved in [@AudibertB09].
Bandit Convex Optimization for the special case of linear cost functions
and the flow polytope, was introduced and studied by @AweKle08 in the
context of online routing. The full generality BCO setting was
introduced by @FlaxmanKM05, who gave the first efficient and low-regret
algorithm for BCO. Tight bounds for BCO were obtained by
@bubeck2015bandit for the one dimensional case, via an inefficient
algorithm by @hazan2016optimal, and finally with a polynomial time
algorithm in @bubeck2017kernel.
The special case in which the cost functions are linear, called Bandit
Linear Optimization, received significant attention. @DanHayKak07price
gave an optimal regret algorithm up to constants depending on the
dimension. @AbernethyHR08 gave an efficient algorithm and introduced
self-concordant barriers to the bandit setting. Self-concordant barrier
functions were devised in the context of polynomial-time algorithms for
convex optimization in the seminal work of @NesterovNemirovskii94siam.
Lower bounds for regret in the bandit linear optimization setting were
studied by [@shamir2015complexity].
In this chapter we have considered the expected regret as a performance
metric. Significant literature is devoted to high probability guarantees
on the regret. High probability bounds for the MAB problem were given in
[@AueCesFreSch03nonstochastic], and for bandit linear optimization in
[@AbernethyR09]. Other more refined metrics have been recently explored
in [@DekelTA12] and in the context of adaptive adversaries in
[@NeuGSA14; @YuMa09; @EvenDarKaMa09; @MannorSh03; @YuMaSh09].
For a recent comprehensive text on bandit algorithms see
[@lattimore2020bandit].
## Exercises
# Projection-Free Algorithms {#chap:FW}
In many computational and learning scenarios the main bottleneck of
optimization, both online and offline, is the computation of projections
onto the underlying decision set (see
§[2.1.1](#sec:projections){reference-type="ref"
reference="sec:projections"}). In this chapter we introduce
projection-free methods for online convex optimization, that yield more
efficient algorithms in these scenarios.
The motivating example throughout this chapter is the problem of matrix
completion, which is a widely used and accepted model in the
construction of recommendation systems. For matrix completion and
related problems, projections amount to expensive linear algebraic
operations and avoiding them is crucial in big data applications.
We start with a detour into classical offline convex optimization and
describe the conditional gradient algorithm, also known as the
Frank-Wolfe algorithm. Afterwards, we describe problems for which linear
optimization can be carried out much more efficiently than projections.
We conclude with an OCO algorithm that eschews projections in favor of
linear optimization, in much the same flavor as its offline counterpart.
## Review: Relevant Concepts from Linear Algebra
This chapter addresses rectangular matrices, which model applications
such as recommendation systems naturally. Consider a matrix
$X \in {\mathbb R}^{n \times m}$. A non-negative number
$\sigma \in {\mathbb R}_+$ is said to be a singular value for $X$ if
there are two vectors
$\ensuremath{\mathbf u}\in {\mathbb R}^n, \ensuremath{\mathbf v}\in {\mathbb R}^m$
such that
$$X^\top \ensuremath{\mathbf u}= \sigma \ensuremath{\mathbf v}, \quad X \ensuremath{\mathbf v}= \sigma \ensuremath{\mathbf u}.$$
The vectors $\ensuremath{\mathbf u},\ensuremath{\mathbf v}$ are called
the left and right singular vectors respectively. The non-zero singular
values are the square roots of the eigenvalues of the matrix $X X^\top$
(and $X^\top X$). The matrix $X$ can be written as
$$X = U \Sigma V^\top \ , \ U \in {\mathbb R}^{n \times \rho} \ , \ V^\top \in {\mathbb R}^{ \rho \times m} ,$$
where $\rho = \min\{n,m\}$, the columns of $U$ form an orthonormal set of
left singular vectors of $X$, the columns of $V$ form an orthonormal set of
right singular vectors, and $\Sigma$ is a diagonal matrix of singular
values. This form is called the singular value decomposition of $X$.
The number of non-zero singular values for $X$ is called its rank, which
we denote by $k \leq \rho$. The nuclear norm of $X$ is defined as the
$\ell_1$ norm of its singular values, and denoted by
$$\|X \|_* = \sum_{i=1}^\rho \sigma_i .$$ It can be shown (see
exercises) that the nuclear norm is equal to the trace of the square
root of $X^\top X$, i.e.,
$$\|X\|_* = {\bf Tr}( \sqrt{ X^\top X} ) .$$ We denote by $A \bullet B$
the inner product of two matrices as vectors in
${\mathbb R}^{n \times m}$, that is
$$A \bullet B = \sum_{i = 1}^n \sum_{j=1}^m A_{ij} B_{ij} = {\bf Tr}(AB^\top) .$$
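As a quick numerical check of these definitions, the following snippet computes the singular value decomposition, the nuclear norm, and the matrix inner product for a small random matrix; the use of numpy and the particular matrices are implementation choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))          # a rectangular n x m matrix
B = rng.standard_normal((5, 3))

# Singular value decomposition X = U diag(sigma) V^T
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)

nuclear_norm = sigma.sum()               # ||X||_* = sum of singular values
# Equivalent characterization: trace of the matrix square root of X^T X
eigs = np.linalg.eigvalsh(X.T @ X)
print(np.isclose(nuclear_norm, np.sqrt(np.clip(eigs, 0, None)).sum()))

# Matrix inner product A . B = sum_ij A_ij B_ij = Tr(A B^T)
print(np.isclose((X * B).sum(), np.trace(X @ B.T)))
```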
## Motivation: Recommender Systems {#sec:recommendation_systems}
Media recommendations have changed significantly with the advent of the
Internet and rise of online media stores. The large amounts of data
collected allow for efficient clustering and accurate prediction of
users' preferences for a variety of media. A well-known example is the
so called "Netflix challenge"---a competition of automated tools for
recommendation from a large dataset of users' motion picture
preferences.
One of the most successful approaches for automated recommendation
systems, as proven in the Netflix competition, is matrix completion.
Perhaps the simplest version of the problem can be described as follows.
The entire dataset of user-media preference pairs is thought of as a
partially-observed matrix. Thus, every person is represented by a row in
the matrix, and every column represents a media item (movie). For
simplicity, let us think of the observations as binary---a person either
likes or dislikes a particular movie. Thus, we have a matrix
$M \in \{0,1,*\}^{n \times m}$ where $n$ is the number of persons
considered, $m$ is the number of movies at our library, and $0/1$ and
$*$ signify "dislike", "like" and "unknown" respectively: $$M_{ij} = {
\left\{
\begin{array}{ll}
{0}, & {\mbox{person $i$ dislikes movie $j$}} \\\\
{1}, & {\mbox{person $i$ likes movie $j$}} \\\\
{*}, & {\mbox{preference unknown}}
\end{array}
\right. } .$$
The natural goal is to complete the matrix, i.e., correctly assign $0$
or $1$ to the unknown entries. As defined so far, the problem is
ill-posed, since any completion would be equally good (or bad), and no
restrictions have been placed on the completions.
The common restriction on completions is that the "true" matrix has low
rank. Recall that a matrix $X \in {\mathbb R}^{n \times m}$ has rank
$k < \rho = \min \{n,m\}$ if and only if it can be written as
$$X = U V \ , \ U \in {\mathbb R}^{n \times k} , V \in {\mathbb R}^{k \times m}.$$
The intuitive interpretation of this property is that each entry in $M$
can be explained by only $k$ numbers. In matrix completion this means,
intuitively, that there are only $k$ factors that determine a person's
preference over movies, such as genre, director, actors and so on.
Now the simplistic matrix completion problem can be well-formulated as
in the following mathematical program. Denote by $\| \cdot \|_{ob}$ the
Euclidean norm only on the observed (non starred) entries of $M$, i.e.,
$$\|X\|_{ob}^2 = \sum_{M_{ij} \neq *} X_{ij}^2.$$ The mathematical
program for matrix completion is given by $$\begin{aligned}
& \min_{X \in {\mathbb R}^{n \times m} } \frac{1}{2} \| X - M \|_{ob}^2 \\
& \text{s.t.} \quad \mathop{\mbox{\rm rank}}(X) \leq k.
\end{aligned}$$
Since the constraint over the rank of a matrix is non-convex, it is
standard to consider a relaxation that replaces the rank constraint by
the nuclear norm. It is known that the nuclear norm is a lower bound on
the matrix rank if the singular values are bounded by one (see
exercises). Thus, we arrive at the following convex program for matrix
completion: $$\begin{aligned}
\label{eqn:matrix-completion}
& \min_{X \in {\mathbb R}^{n \times m} } \frac{1}{2} \| X - M \|_{ob}^2 \\
& \text{s.t.} \quad \|X\|_* \leq k. \notag
\end{aligned}$$
We consider algorithms to solve this convex optimization problem next.
## The Conditional Gradient Method {#subsec:cond_grad_intro}
In this section we return to the basics of convex
optimization---minimization of a convex function over a convex domain as
studied in chapter [2](#chap:opt){reference-type="ref"
reference="chap:opt"}.
The conditional gradient (CG) method, or Frank-Wolfe algorithm, is a
simple algorithm for minimizing a smooth convex function $f$ over a
convex set $\ensuremath{\mathcal K}\subseteq {\mathbb R}^n$. The appeal
of the method is that it is a first-order interior point method: the
iterates always lie inside the convex set, and thus no projections are
needed, and the update step at each iteration only requires
minimizing a linear objective over the set. The basic method is given in
algorithm [\[alg:condgrad\]](#alg:condgrad){reference-type="ref"
reference="alg:condgrad"}.
::: algorithm
::: algorithmic
Input: step sizes $\{ \eta_t \in (0,1] , \ t \in [T]\}$, initial point
$\ensuremath{\mathbf x}_1 \in \ensuremath{\mathcal K}$. For $t = 1$ to $T$:
$\ensuremath{\mathbf v}_{t} \gets \arg \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{\ensuremath{\mathbf x}^\top \nabla{}f(\ensuremath{\mathbf x}_t) \right\}$.
[]{#algstep:linearopt label="algstep:linearopt"}
$\ensuremath{\mathbf x}_{t+1} \gets \ensuremath{\mathbf x}_t + \eta_t(\ensuremath{\mathbf v}_t - \ensuremath{\mathbf x}_t)$.
:::
:::
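For concreteness, here is a minimal sketch of the conditional gradient method in Python for the special case where $\ensuremath{\mathcal K}$ is the probability simplex, so the linear optimization step simply picks the vertex (standard basis vector) with the smallest gradient coordinate; the quadratic objective below is an illustrative assumption.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, T):
    """Conditional gradient over the probability simplex.

    grad: function returning the gradient of a smooth convex f at x.
    The linear step argmin_{x in simplex} x . g is attained at a vertex,
    i.e., the standard basis vector of the smallest gradient coordinate.
    """
    x = x0.copy()
    for t in range(1, T + 1):
        g = grad(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0            # linear optimization over the simplex
        eta = min(1.0, 2.0 / t)          # step size as in Theorem 7.1
        x = x + eta * (v - x)            # convex combination stays in the simplex
    return x

if __name__ == "__main__":
    # Illustrative smooth objective f(x) = 0.5 * ||x - b||^2 (an assumption)
    b = np.array([0.1, 0.7, 0.2])
    x = frank_wolfe_simplex(lambda x: x - b, np.ones(3) / 3, T=500)
    print(x)   # approaches b, the minimizer over the simplex in this example
```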
Note that in the CG method, the update to the iterate
$\ensuremath{\mathbf x}_t$ may not be in the direction of the
gradient, as $\ensuremath{\mathbf v}_t$ is the result of a linear
optimization procedure in the direction of the negative gradient. This
is depicted in figure [7.1](#fig:OFW){reference-type="ref"
reference="fig:OFW"}.
::: center
![Direction of progression of the CG algorithm
](images/fig_fw2.jpg){#fig:OFW width="3.5in"}
:::
The following theorem gives an essentially tight performance guarantee
of this algorithm over smooth functions. Recall our notation from
chapter [2](#chap:opt){reference-type="ref" reference="chap:opt"}:
$\ensuremath{\mathbf x}^\star$ denotes the global minimizer of $f$ over
$\ensuremath{\mathcal K}$, $D$ denotes the diameter of the set
$\ensuremath{\mathcal K}$, and
$h_t = f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star)$
denotes the suboptimality of the objective value in iteration $t$.
::: {#thm:offlineFW .theorem}
**Theorem 7.1**. *The CG algorithm applied to $\beta$-smooth functions
with step sizes $\eta_t = \min\{1,\frac{2}{t}\}$ attains the following
convergence guarantee $$h_t \leq \frac{2 \beta D^2 }{t}$$*
:::
::: proof
*Proof.* As done before in this manuscript, we denote
$\nabla_t = \nabla f(\ensuremath{\mathbf x}_t)$. For any set of step
sizes, we have $$\begin{aligned}
\label{old_fw_anal}
& f(\ensuremath{\mathbf x}_{t+1}) - f(\ensuremath{\mathbf x}^\star)
= f(\ensuremath{\mathbf x}_t + \eta_t(\ensuremath{\mathbf v}_t - \ensuremath{\mathbf x}_t)) - f(\ensuremath{\mathbf x}^\star) \notag \\
&\leq f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) + \eta_t(\ensuremath{\mathbf v}_t-\ensuremath{\mathbf x}_t)^{\top}\nabla_t + \eta_t^2 \frac{\beta}{2}\Vert{\ensuremath{\mathbf v}_t-\ensuremath{\mathbf x}_t}\Vert^2 & \textrm{smoothness } \nonumber \\
&\leq f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) + \eta_t(\ensuremath{\mathbf x}^\star-\ensuremath{\mathbf x}_t)^{\top}\nabla_t + \eta_t^2 \frac{\beta}{2}\Vert{\ensuremath{\mathbf v}_t-\ensuremath{\mathbf x}_t}\Vert^2 & \textrm{$\ensuremath{\mathbf v}_t$ choice} \nonumber \\
&\leq f(\ensuremath{\mathbf x}_t) - f(\ensuremath{\mathbf x}^\star) + \eta_t(f(\ensuremath{\mathbf x}^\star)-f(\ensuremath{\mathbf x}_t)) + \eta_t^2 \frac{\beta}{2}\Vert{\ensuremath{\mathbf v}_t-\ensuremath{\mathbf x}_t}\Vert^2 & \textrm{convexity} \nonumber \\
&\leq (1-\eta_t)(f(\ensuremath{\mathbf x}_t)-f(\ensuremath{\mathbf x}^\star)) + \frac{\eta_t^2\beta}{2} D^2.
\end{aligned}$$ We reached the recursion
$h_{t+1} \leq (1- \eta_t) h_t + \eta_t^2\frac{ \beta D^2}{2}$, and by
Lemma [7.2](#lemma:FW-recursion){reference-type="ref"
reference="lemma:FW-recursion"} we obtain,
$$h_{t} \leq \frac{2 \beta D^2 }{t} .$$ ◻
:::
::: {#lemma:FW-recursion .lemma}
**Lemma 7.2**. *Let $\{ h_t \}$ be a sequence that satisfies the
recurrence $$h_{t+1} \leq h_t (1 - \eta_t) + \eta_t^2 c .$$ Then taking
$\eta_t = \min\{1,\frac{2}{t}\}$ implies $$h_t \leq \frac{4c}{t} .$$*
:::
::: proof
*Proof.* This is proved by induction on $t$.
**Induction base.** For $t=1$, we have $\eta_1 = 1$, and hence
$$h_2 \leq h_1 (1-\eta_1 ) + \eta_1^2 c = c \leq \frac{4c}{2} .$$
**Induction step.** $$\begin{aligned}
h_{t+1} & \leq (1- \eta_t) h_t + \eta_t^2 c \\
& \leq \left(1- \frac{2}{t} \right) \frac{4c}{ t} + \frac{4c}{t^2} & \mbox{induction hypothesis}\\
& = \frac{4c}{t} \left( 1 - \frac{1}{t} \right) \\
& \leq \frac{4c}{t} \cdot \frac{t}{t+1} & \mbox{$\frac{t-1}{t} \leq \frac{t}{t+1} $ } \\
& = \frac{4c}{t+1} .
\end{aligned}$$ ◻
:::
### Example: matrix completion via CG
As an example of an application for the conditional gradient algorithm,
recall the mathematical program given by
[\[eqn:matrix-completion\]](#eqn:matrix-completion){reference-type="eqref"
reference="eqn:matrix-completion"}. The gradient of the objective
function at point $X^t$ is $$\label{eqn:matrix-gradient}
\nabla f(X^t) = (X^t - M)_{ob} = {
\left\{
\begin{array}{ll}
{ X_{ij}^t - M_{ij} }, & {(i,j) \in OB} \\\\
{0}, & {\text{otherwise}}
\end{array}
\right. } ,$$ where $OB = \{ (i,j) : M_{ij} \neq * \}$ denotes the set of
observed entries. Over the set of bounded nuclear norm matrices, the linear
optimization of line
[\[algstep:linearopt\]](#algstep:linearopt){reference-type="ref"
reference="algstep:linearopt"} in algorithm
[\[alg:condgrad\]](#alg:condgrad){reference-type="ref"
reference="alg:condgrad"} becomes, $$\begin{aligned}
& \min X \bullet \nabla_t \ , \quad \nabla_t = \nabla f(X_t) \\
& \mbox{s.t. } \|X\|_* \leq k.
\end{aligned}$$ For simplicity, let's consider square symmetric
matrices, for which the nuclear norm is equivalent to the trace norm,
and the above optimization problem becomes $$\begin{aligned}
& \min X \bullet \nabla_t \\
& \mbox{s.t. } {\bf Tr}(X) \leq k.
\end{aligned}$$ It can be shown that this program is equivalent to the
following (see exercises): $$\begin{aligned}
& \min_{\ensuremath{\mathbf x}\in {\mathbb R}^n} \ensuremath{\mathbf x}^\top \nabla_t \ensuremath{\mathbf x}\\
& \mbox{s.t. } \|\ensuremath{\mathbf x}\|_2^2 \leq k.
\end{aligned}$$ Hence, this is an eigenvector computation in disguise!
Computing the largest eigenvector of a matrix takes linear time via the
power method, which also applies more generally to computing the largest
singular value of rectangular matrices. With this, step
[\[algstep:linearopt\]](#algstep:linearopt){reference-type="ref"
reference="algstep:linearopt"} of algorithm
[\[alg:condgrad\]](#alg:condgrad){reference-type="ref"
reference="alg:condgrad"}, which amounts to mathematical program
[\[eqn:matrix-completion\]](#eqn:matrix-completion){reference-type="eqref"
reference="eqn:matrix-completion"}, becomes computing
$v_{\max}(- \nabla f(X^t))$, the largest eigenvector of
$- \nabla f(X^t)$. Algorithm
[\[alg:condgrad\]](#alg:condgrad){reference-type="ref"
reference="alg:condgrad"} takes on the modified form described in
Algorithm
[\[alg:condgrad4matrixcompletion\]](#alg:condgrad4matrixcompletion){reference-type="ref"
reference="alg:condgrad4matrixcompletion"}.
::: algorithm
::: algorithmic
Let $X^1$ be an arbitrary matrix of trace $k$ in
$\ensuremath{\mathcal K}$. For $t = 1$ to $T$, with $\nabla_t = \nabla f(X^t)$:
$\ensuremath{\mathbf v}_{t} = \sqrt{k} \cdot v_{\max}(-\nabla_t )$.
$X^{t+1} = X^t + \eta_t(\ensuremath{\mathbf v}_t \ensuremath{\mathbf v}_t^\top - X^t)$
for $\eta_t\in(0,1)$.
:::
:::
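The following is a minimal sketch of this specialization in Python on a small synthetic symmetric instance; the instance, the number of iterations, and the use of a shifted power method for the eigenvector step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric matrix completion instance (an illustrative assumption)
n, true_rank, k = 30, 2, 5.0
W = rng.standard_normal((n, true_rank))
M = W @ W.T
M *= k / np.trace(M)                          # true matrix has nuclear norm k
mask = rng.random((n, n)) < 0.3               # observed entries OB
mask = np.triu(mask) | np.triu(mask).T        # keep the pattern symmetric

def grad(X):
    """Gradient of 0.5*||X - M||_ob^2: (X - M) on observed entries, else 0."""
    return np.where(mask, X - M, 0.0)

def top_eigvec(A, iters=100):
    """Approximate top eigenvector of symmetric A via a shifted power method.

    Shifting by ||A||_F makes the matrix positive semidefinite, so the power
    method converges to the eigenvector of the largest algebraic eigenvalue.
    """
    shift = np.linalg.norm(A)                 # Frobenius norm upper-bounds |eigenvalues|
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v + shift * v
        v /= np.linalg.norm(v)
    return v

X = np.zeros((n, n))                          # starting point with trace <= k
for t in range(1, 201):
    v = np.sqrt(k) * top_eigvec(-grad(X))     # linear optimization step
    eta = min(1.0, 2.0 / t)
    X = X + eta * (np.outer(v, v) - X)        # convex combination, trace stays <= k

print("observed-entry error:", np.linalg.norm(np.where(mask, X - M, 0.0)))
```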
##### Comparison to other gradient-based methods. {#comparison-to-other-gradient-based-methods. .unnumbered}
How does this compare to previous convex optimization methods for
solving the same matrix completion problem? As a convex program, we can
apply gradient descent, or even more advantageously in this setting,
stochastic gradient descent as in §[3.4](#sec:sgd){reference-type="ref"
reference="sec:sgd"}. Recall that the gradient of the objective function
at point $X^t$ takes the simple form
[\[eqn:matrix-gradient\]](#eqn:matrix-gradient){reference-type="eqref"
reference="eqn:matrix-gradient"}. A stochastic estimate for the gradient
can be attained by observing just a single entry of the matrix $M$, and
the update itself takes constant time as the gradient estimator is
sparse. However, the projection step is significantly more difficult.
In this setting, the convex set $\ensuremath{\mathcal K}$ is the set of
bounded nuclear norm matrices. Projecting a matrix onto this set amounts
to calculating the SVD of the matrix, which is similar in computational
complexity to algorithms for matrix diagonalization or inversion. The
best known algorithms for matrix diagonalization are superlinear in the
matrices' size, and thus impractical for large datasets that are common
in applications.
In contrast, the CG method does not require projections at all, and
replaces them with linear optimization steps over the convex set, which
we have observed to amount to singular vector computations. The latter
can be implemented to take linear time via the power method or the
Lanczos algorithm (see bibliography).
Thus, the Conditional Gradient method allows for optimization of the
mathematical program
[\[eqn:matrix-completion\]](#eqn:matrix-completion){reference-type="eqref"
reference="eqn:matrix-completion"} with a linear-time operation
(eigenvector using power method) per iteration, rather than a
significantly more expensive computation (SVD) needed for gradient
descent.
## Projections versus Linear Optimization
The conditional gradient (Frank-Wolfe) algorithm described before does
not resort to projections, but rather computes a linear optimization
problem of the form $$\label{eqn:linopt}
\arg \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \left\{\ensuremath{\mathbf x}^\top \ensuremath{\mathbf u}\right\}.$$
When is the CG method computationally preferable? The overall
computational complexity of an iterative optimization algorithm is the
product of the number of iterations and the computational cost per
iteration. The CG method does not converge as well as the most efficient
gradient descent algorithms, meaning it requires more iterations to
produce a solution of a comparable level of accuracy. However, for many
interesting scenarios the computational cost of a linear optimization
step [\[eqn:linopt\]](#eqn:linopt){reference-type="eqref"
reference="eqn:linopt"} is *significantly* lower than that of a
projection step.
Let us point out several examples of problems for which we have very
efficient linear optimization algorithms, whereas our state-of-the-art
algorithms for computing projections are significantly slower.
##### Recommendation systems and matrix prediction. {#recommendation-systems-and-matrix-prediction. .unnumbered}
In the example pointed out in the preceding section of matrix
completion, known methods for projection onto the spectahedron, or more
generally the bounded nuclear-norm ball, require singular value
decompositions, which take superlinear time via our best known methods.
In contrast, the CG method requires maximal eigenvector computations
which can be carried out in linear time via the power method (or the
more sophisticated Lanczos algorithm).
##### Network routing and convex graph problems. {#network-routing-and-convex-graph-problems. .unnumbered}
Various routing and graph problems can be modeled as convex optimization
problems over a convex set called the flow polytope.
Consider a directed acyclic graph with $m$ edges, a source node marked
$s$ and a target node marked $t$. Every path from $s$ to $t$ in the
graph can be represented by its identifying vector, that is a vector in
$\lbrace{0,1}\rbrace^m$ in which the entries that are set to 1
correspond to edges of the path. The flow polytope of the graph is the
convex hull of all such identifying vectors of the simple paths from $s$
to $t$. This polytope is also exactly the set of all unit $s$--$t$ flows
in the graph if we assume that each edge has a unit flow capacity (a
flow is represented here as a vector in $\mathbb{R}^m$ in which each
entry is the amount of flow through the corresponding edge).
Since the flow polytope is just the convex hull of $s$--$t$ paths in the
graph, minimizing a linear objective over it amounts to finding a
minimum weight path given weights for the edges. For the shortest path
problem we have very efficient combinatorial optimization algorithms,
namely Dijkstra's algorithm.
Thus, applying the CG algorithm to solve **any** convex optimization
problem over the flow polytope will only require iterative shortest path
computations.
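As a small illustration, the linear optimization oracle over the flow polytope can be implemented as a shortest path computation; the sketch below uses a tiny made-up DAG (an illustrative assumption) and a dynamic program over a topological order.

```python
import math

# Linear optimization over the flow polytope of a small DAG: given edge
# weights (the linear objective), return the identifying vector of a
# minimum weight s-t path.  Nodes are already in topological order.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def min_weight_path_vertex(weights, n_nodes=4, s=0, t=3):
    dist = [math.inf] * n_nodes
    parent_edge = [None] * n_nodes
    dist[s] = 0.0
    for idx, (u, v) in enumerate(edges):      # edges grouped by topological order of u
        if dist[u] + weights[idx] < dist[v]:
            dist[v] = dist[u] + weights[idx]
            parent_edge[v] = idx
    x = [0.0] * len(edges)                    # identifying vector of the chosen path
    node = t
    while node != s:
        idx = parent_edge[node]
        x[idx] = 1.0
        node = edges[idx][0]
    return x

print(min_weight_path_vertex([1.0, 4.0, 1.0, 5.0, 1.0]))   # picks 0 -> 1 -> 2 -> 3
```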
##### Ranking and permutations. {#ranking-and-permutations. .unnumbered}
A common way to represent a permutation or ordering is by a permutation
matrix. These are square matrices in $\{0,1\}^{n \times n}$ that
contain exactly one $1$ entry in each row and each column.
Doubly-stochastic matrices are square, real-valued matrices with
non-negative entries, in which the sum of entries of each row and each
column amounts to 1. The polytope that defines all doubly-stochastic
matrices is called the Birkhoff-von Neumann polytope. The Birkhoff-von
Neumann theorem states that this polytope is the convex hull of exactly
all $n\times{n}$ permutation matrices.
Since a permutation matrix corresponds to a perfect matching in a fully
connected bipartite graph, linear minimization over this polytope
corresponds to finding a minimum weight perfect matching in a bipartite
graph.
Consider a convex optimization problem over the Birkhoff-von Neumann
polytope. The CG algorithm will iteratively solve a linear optimization
problem over the BVN polytope, thus iteratively solving a minimum weight
perfect matching in a bipartite graph problem, which is a well-studied
combinatorial optimization problem for which we know of efficient
algorithms. In contrast, other gradient based methods will require
projections, which are quadratic optimization problems over the BVN
polytope.
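For instance, the linear optimization oracle over the Birkhoff-von Neumann polytope can be implemented with an off-the-shelf assignment routine; the sketch below uses scipy's `linear_sum_assignment` on a made-up cost matrix, an implementation choice not prescribed by the text.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Linear optimization over the Birkhoff-von Neumann polytope:
# argmin over doubly stochastic X of C . X is attained at a permutation
# matrix, i.e., a minimum weight perfect matching for cost matrix C.
def bvn_linear_oracle(C):
    rows, cols = linear_sum_assignment(C)     # Hungarian-type matching
    P = np.zeros_like(C)
    P[rows, cols] = 1.0
    return P

C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
P = bvn_linear_oracle(C)
print(P)                       # a permutation matrix (vertex of the polytope)
print((C * P).sum())           # value of the minimum weight matching
```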
##### Matroid polytopes. {#matroid-polytopes. .unnumbered}
A matroid is a pair $(E,I)$ where $E$ is a set of elements and $I$ is a
set of subsets of $E$, called the independent sets, which satisfy various
interesting properties that resemble the concept of linear independence
in vector spaces.
combinatorial optimization and a key example of a matroid is the
graphical matroid in which the set $E$ is the set of edges of a given
graph and the set $I$ is the set of all subsets of $E$ which are
cycle-free. In this case, $I$ contains all the spanning trees of the
graph. A subset $S\in{I}$ can be represented by its identifying vector,
which lies in $\lbrace{0,1}\rbrace^{\vert{E}\vert}$; this gives
rise to the matroid polytope, which is just the convex hull of all
identifying vectors of sets in $I$. It can be shown that some matroid
polytopes are defined by exponentially many linear inequalities
(exponential in $\vert{E}\vert$), which makes optimization over them
difficult.
On the other hand, linear optimization over matroid polytopes is easy
using a simple greedy procedure which runs in nearly linear time. Thus,
the CG method serves as an efficient algorithm to solve any convex
optimization problem over matroids iteratively using only a simple
greedy procedure.
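As an illustration of the greedy oracle, the sketch below maximizes a linear objective over the graphical matroid polytope of a small made-up graph: scan edges in decreasing weight order and keep an edge whenever it does not close a cycle, tracked with a union-find structure.

```python
# Greedy linear optimization over the graphical matroid polytope:
# maximize sum_e w_e x_e over indicator vectors of cycle-free edge sets.
# The graph and weights below are illustrative assumptions.

def find(parent, u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]          # path compression
        u = parent[u]
    return u

def matroid_greedy(n_nodes, edges, weights):
    parent = list(range(n_nodes))
    order = sorted(range(len(edges)), key=lambda i: weights[i], reverse=True)
    x = [0.0] * len(edges)
    for i in order:
        if weights[i] <= 0:                    # a non-positive edge cannot help
            break
        u, v = edges[i]
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                           # edge keeps the set cycle-free
            parent[ru] = rv
            x[i] = 1.0
    return x

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(matroid_greedy(4, edges, weights=[5.0, 4.0, 3.0, -1.0]))
# -> [1, 1, 0, 0]: the two heaviest edges that do not form a cycle
```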
## The Online Conditional Gradient Algorithm
In this section we give a projection-free algorithm for OCO based on the
conditional gradient method, which carries
the computational advantages of the CG method to the online setting.
It is tempting to apply the CG method straightforwardly to the online
appearance of functions in the OCO setting, such as the OGD algorithm in
§[3.1](#section:ogd){reference-type="ref" reference="section:ogd"}.
However, it can be shown that an approach that only takes into account
the last cost function is doomed to fail. The reason is that the
conditional gradient method takes into account the *direction* of the
gradient, and is insensitive to its *magnitude*.
Instead, we apply the CG algorithm step to the aggregate sum of all
previous cost functions with added Euclidean regularization. The
resulting algorithm is given formally in Algorithm
[\[alg:ocg\]](#alg:ocg){reference-type="ref" reference="alg:ocg"}.
::: algorithm
::: algorithmic
Input: convex set $\ensuremath{\mathcal K}$, $T$,
$\ensuremath{\mathbf x}_1 \in \mathcal{K}$, parameters
$\eta , \ \{\sigma_t\}$. For $t = 1$ to $T$: play $\mathbf{x}_t$ and observe $f_t$.
[]{#eq:F_t-def label="eq:F_t-def"} Let
$F_t(\ensuremath{\mathbf x}) = \eta {\textstyle \sum}_{\tau=1}^{t-1} \nabla_\tau^\top \ensuremath{\mathbf x}+ \|\ensuremath{\mathbf x}- \ensuremath{\mathbf x}_1\|^2$.
Compute
$\mathbf{v}_t = \arg \min_{\mathbf{x}\in \ensuremath{\mathcal K}} \{\nabla F_t(\mathbf{x}_t) \cdot \mathbf{x}\}$.
Set
$\mathbf{x}_{t+1} = (1 - \sigma_t)\mathbf{x}_{t} + \sigma_t \mathbf{v}_t$.
:::
:::
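A minimal sketch of the algorithm in Python, again for the probability simplex where linear optimization is trivial; the linear losses and the parameter choices below are illustrative assumptions.

```python
import numpy as np

def ocg_simplex(costs, D=np.sqrt(2.0), G=1.0):
    """Online conditional gradient over the probability simplex.

    costs: (T, n) array; round t has linear loss f_t(x) = costs[t] @ x, so
    grad f_t(x_t) = costs[t] (a simplifying assumption for this sketch).
    """
    T, n = costs.shape
    eta = D / (2.0 * G * T ** 0.75)
    x1 = np.ones(n) / n
    x = x1.copy()
    past_grads = np.zeros(n)              # sum of grad f_tau(x_tau) for tau < t
    total_loss = 0.0
    for t in range(1, T + 1):
        total_loss += costs[t - 1] @ x                 # play x_t, suffer f_t(x_t)
        grad_F = eta * past_grads + 2.0 * (x - x1)     # gradient of F_t at x_t
        v = np.zeros(n)
        v[np.argmin(grad_F)] = 1.0                     # linear optimization over simplex
        sigma = min(1.0, 2.0 / np.sqrt(t))
        x = (1.0 - sigma) * x + sigma * v
        past_grads += costs[t - 1]                     # now include grad f_t(x_t)
    return total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n = 1000, 5
    costs = rng.uniform(0, 1, size=(T, n))             # bounded linear losses
    print(ocg_simplex(costs))
    print(costs.sum(axis=0).min())                     # loss of the best fixed vertex
```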
We can prove the following regret bound for this algorithm. While this
regret bound is suboptimal in light of the previous upper bounds we have
seen, its suboptimality is compensated by the algorithm's lower
computational cost.
::: {#thm:FWonline-main .theorem}
**Theorem 7.3**. *Online conditional gradient (Algorithm
[\[alg:ocg\]](#alg:ocg){reference-type="ref" reference="alg:ocg"}) with
parameters
$\eta = \frac{D}{2 G T^{3/4} }, \sigma_t = \min\{1,\frac{2}{t^{1/2}}\}$,
attains the following guarantee
$$\ensuremath{\mathrm{{Regret}}}_T = \sum_{t=1}^{T} f_t(\mathbf{x}_t) -\min_{\mathbf{x}^\star \in \ensuremath{\mathcal K}}\sum_{t=1}^{T}
f_t(\mathbf{x}^\star)\ \leq 8 D G T^{3/4}$$*
:::
As a first step in analyzing Algorithm
[\[alg:ocg\]](#alg:ocg){reference-type="ref" reference="alg:ocg"},
consider the points
$$\ensuremath{\mathbf x_{t}}^\star = \arg \min _{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} F_t(\ensuremath{\mathbf x}) .$$
These are exactly the iterates of the RFTL algorithm from chapter
[5](#chap:regularization){reference-type="ref"
reference="chap:regularization"}, namely Algorithm
[\[alg:RFTLmain\]](#alg:RFTLmain){reference-type="ref"
reference="alg:RFTLmain"} with the regularization being
$R(\ensuremath{\mathbf x}) = \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x_{1}}\|^2$,
applied to cost functions with a shift, namely:
$$\tilde{f}_t(\ensuremath{\mathbf x}) = f_t( \ensuremath{\mathbf x}+ (\ensuremath{\mathbf x}_t^\star - \ensuremath{\mathbf x}_t) ) .$$
The reason is that $\nabla_t$ in Algorithm
[\[alg:ocg\]](#alg:ocg){reference-type="ref" reference="alg:ocg"} refers
to $\nabla f_t(\ensuremath{\mathbf x}_t)$, whereas in the RFTL algorithm
we have $\nabla_t = \nabla f_t(\ensuremath{\mathbf x}_t^\star)$. Notice
that for any point $\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$
we have
$| f_t(\ensuremath{\mathbf x}) - \tilde{f}_t(\ensuremath{\mathbf x}) | \leq G \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_t^\star\|$.
Thus, according to Theorem [5.2](#thm:RFTLmain1){reference-type="ref"
reference="thm:RFTLmain1"}, we have that $$\begin{aligned}
\label{eqn:FW1}
& \sum_{t=1}^{T} f_t(\ensuremath{\mathbf x}_t^\star) - \sum_{t=1}^{T} f_t(\mathbf{x}^\star) \notag \\
& \leq 2 G\sum_t \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_t^\star\| + \sum_{t=1}^{T} \tilde{f}_t(\ensuremath{\mathbf x}_t^\star) - \sum_{t=1}^{T} \tilde{f}_t(\mathbf{x}^\star ) \notag \\
& \leq 2 G\sum_t \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_t^\star\| + 2 \eta G T + \frac{1}{\eta} D .
\end{aligned}$$
Using our previous notation, denote by
$h_t(\ensuremath{\mathbf x}) = {F_t(\ensuremath{\mathbf x}) - F_t(\ensuremath{\mathbf x}^\star_t)}$,
and by $h_t = h_t(\ensuremath{\mathbf x}_t)$. The main lemma we require
to proceed is the following, which relates the iterates
$\ensuremath{\mathbf x}_t$ to the optimal point according to the
aggregate function $F_t$.
::: {#lem:mainFW .lemma}
**Lemma 7.4**. *The iterates $\ensuremath{\mathbf x}_t$ of
Algorithm [\[alg:ocg\]](#alg:ocg){reference-type="ref"
reference="alg:ocg"} satisfy for all $t \ge 1$
$$h_t \leq { 2 D^2} \sigma_t.$$*
:::
::: proof
*Proof.* As the functions $F_t$ are $1$-smooth, applying the offline
Frank-Wolfe analysis technique, and in particular Equation
[\[old_fw_anal\]](#old_fw_anal){reference-type="eqref"
reference="old_fw_anal"} to the function $F_t$ we obtain:
$$\begin{aligned}
h_{t}(\ensuremath{\mathbf x}_{t+1}) & = F_{t} (\ensuremath{\mathbf x}_{t+1}) - F_{t} (\ensuremath{\mathbf x}^\star_{t}) \\
&\leq (1-\sigma_t)( F_t (\ensuremath{\mathbf x}_t)- F_t (\ensuremath{\mathbf x}^\star_t)) + \frac{D^2}{2} \sigma_t^2 & \mbox { Equation \eqref{old_fw_anal}} \\
& = (1-\sigma_t) h_t + \frac{D^2}{2} \sigma_t^2.
\end{aligned}$$
In addition, by definition of $F_t$ and $h_t$ we have $$\begin{aligned}
& h_{t+1} \\
& = F_t(\ensuremath{\mathbf x}_{t+1}) - F_t(\ensuremath{\mathbf x}_{t+1}^\star) + \eta \nabla_{t+1}(\ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}^\star_{t+1} ) \\
& \leq h_t(\ensuremath{\mathbf x}_{t+1}) + \eta \nabla_{t+1}(\ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}^\star_{t+1}) & \mbox{$F_t(\ensuremath{\mathbf x}_t^\star) \leq F_t(\ensuremath{\mathbf x}_{t+1}^\star)$} \\
& \leq\ h_t(\ensuremath{\mathbf x}_{t+1}) + \eta G \| \ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}_{t+1}^\star\| . & \mbox{Cauchy-Schwarz}
\end{aligned}$$ Since $F_t$ is $1$-strongly convex, we have
$$\| \ensuremath{\mathbf x}- \ensuremath{\mathbf x_{t}}^\star \|^2 \leq F_t(\ensuremath{\mathbf x}) - F_t(\ensuremath{\mathbf x_{t}}^\star) .$$
Thus, $$\begin{aligned}
h_{t+1} & \leq\ h_t(\ensuremath{\mathbf x}_{t+1}) + \eta G \| \ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}_{t+1}^\star\| \\
& \leq h_t(\ensuremath{\mathbf x}_{t+1}) + \eta G \sqrt{h_{t+1}} \\
& \leq h_t (1 - \sigma_t) + \frac{1}{2} {D^2 } \sigma_t^2 + \eta G \sqrt{h_{t+1}} & \mbox{above derivation} \\
& \leq h_t (1 - \frac{5}{6} \sigma_t) + \frac{5}{8}{D^2 } \sigma_t^2. & \mbox{ equation \eqref{prop:fwhelperprop} below}
\end{aligned}$$
Above we used the following derivation, that holds by choice of
parameters $\eta = \frac{D}{2 G T^{3/4} }$ and
$\sigma_t = \min\{1,\frac{2}{t^{1/2}}\}$: since $\eta,G,h_t$ are all
non-negative, we have $$\begin{aligned}
\eta G \sqrt{h_{t+1}} & = \left( \sqrt{D} {G \eta} \right)^{2/3} \left( \frac{G \eta}{D} \right)^{1/3} \sqrt{h_{t+1}} \notag \\
& \leq \frac{1}{2} \left( { \sqrt{D} G \eta}{} \right)^{4/3} + \frac{1}{2} \left( \frac{G \eta}{D} \right)^{2/3} h_{t+1} \notag \\
& \leq \frac{1}{8} D^2 \sigma_t^2 + \frac{1}{6} \sigma_t h_{t+1} \label{prop:fwhelperprop}
\end{aligned}$$
We now claim that the lemma follows inductively. The base of the
induction holds since, for $t = 1$, the definition of $F_1$ implies
$$h_1 = F_1(\ensuremath{\mathbf x}_1) - F_1(\ensuremath{\mathbf x}^\star) = \| \ensuremath{\mathbf x}_1 - \ensuremath{\mathbf x}^\star\|^2 \leq D^2 \leq 2 D^2 \sigma_1 .$$
Assuming the bound is true for $t$, we now show it holds for $t+1$ as
well: $$\begin{aligned}
h_{t+1} & \leq & h_t (1 - \frac{5}{6} \sigma_t) + \frac{5}{8} {D^2 } \sigma_t^2 \\
& \leq & 2 D^2 \sigma_t \left(1 - \frac{5}{6} \sigma_t \right) + \frac{5}{8} { D^2}\sigma_t^2 \\
& \leq & { 2 D^2 }\sigma_t \left(1 - \frac{\sigma_t}{2} \right) \\
& \le & { 2 D^2} \sigma_{t+1},
\end{aligned}$$ as required. The last inequality follows by the
definition of $\sigma_t$ (see exercises). ◻
:::
We proceed to use this lemma in order to prove our theorem:
::: proof
*Proof of Theorem [7.3](#thm:FWonline-main){reference-type="ref"
reference="thm:FWonline-main"}.* By definition, the functions $F_t$ are
$1$-strongly convex. Thus, we have for
$\ensuremath{\mathbf x_{t}}^\star = \arg \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} F_t(\ensuremath{\mathbf x})$:
$$\| \ensuremath{\mathbf x}- \ensuremath{\mathbf x_{t}}^\star \|^2 \leq F_t(\ensuremath{\mathbf x}) - F_t(\ensuremath{\mathbf x_{t}}^\star) .$$
Let $\eta = \frac{D}{2G T^{3/4}}$, and notice that this satisfies the
constraint of Lemma [7.4](#lem:mainFW){reference-type="ref"
reference="lem:mainFW"}, which requires
$\eta G \sqrt{h_{t+1}} \leq \frac{D^2}{2} \sigma_t^2$. In addition,
$\eta < 1$ for $T$ large enough. Hence, $$\begin{aligned}
f_t(\ensuremath{\mathbf x_{t}}) - f_t(\ensuremath{\mathbf x}^\star_t) & \leq G \| \ensuremath{\mathbf x_{t}}- \ensuremath{\mathbf x}^\star_t \| \notag \\
& \leq {G} \sqrt{ F_t(\ensuremath{\mathbf x_{t}}) - F_t(\ensuremath{\mathbf x_{t}}^\star) } \notag \\
& \leq { 2 G D } \sqrt{\sigma_t} . & \mbox{ Lemma \ref{lem:mainFW} } \label{eqn:FW2}
\end{aligned}$$ Putting everything together we obtain:
$$\begin{aligned}
& \text{\em Regret}_T(\text{\em OCG}) = \sum_{t=1}^{T} f_t(\mathbf{x}_t) - \sum_{t=1}^{T}
f_t(\mathbf{x}^\star) \\
& = \sum_{t=1}^T \left[ f_t(\mathbf{x}_t) - f_t(\ensuremath{\mathbf x_{t}}^\star) + f_t(\ensuremath{\mathbf x_{t}}^\star) - f_t(\ensuremath{\mathbf x}^\star) \right] \\
& \leq \sum_{t=1}^{T} 2 G {D} \sqrt{\sigma_t} + \sum_t \left[ f_t(\ensuremath{\mathbf x_{t}}^\star) - f_t(\ensuremath{\mathbf x}^\star) \right] & \mbox{by \eqref{eqn:FW2}} \\
& \leq 4 G D {T}^{3/4} + \sum_t \left[ f_t(\ensuremath{\mathbf x_{t}}^\star) - f_t(\ensuremath{\mathbf x}^\star) \right] \\
& \leq 4 G D {T}^{3/4} + 2 G\sum_t \|\ensuremath{\mathbf x}_t - \ensuremath{\mathbf x}_t^\star \| + 2 \eta G T + \frac{1}{\eta} D . & \mbox{by \eqref{eqn:FW1}} \\
\end{aligned}$$
We thus obtain: $$\begin{aligned}
\ensuremath{\mathrm{{Regret}}}_T(\text{\em OCG})
& \leq 4 G D {T}^{3/4} + 2 \eta G^2 T + \frac{D^2}{\eta} \\
& \leq 4 G D T^{3/4} + DG T^{1/4} + 2 DG T^{3/4} \leq 8 D G T^{3/4}.
\end{aligned}$$ ◻
:::
## Bibliographic Remarks {#bibliographic-remarks-4}
The matrix completion model has been extremely popular since its
inception in the context of recommendation systems
[@SrebroThesis; @Rennie:2005; @salakhutdinov:collaborative; @lee:practical; @CandesR09; @ShamirS11].
The conditional gradient algorithm was devised in the seminal paper by
@FrankWolfe. Due to the applicability of the FW algorithm to large-scale
constrained problems, it has been a method of choice in recent machine
learning applications, to name a few:
[@Hazan08; @Jaggi10; @Jaggi13a; @Jaggi13b; @Dudik12a; @Dudik12b; @Hazan12; @ShalevShwartz11; @Bach12; @Tewari11; @Garber11; @Garber13; @Florina14].
In the context of matrix completion and recommendation systems, several
faster variants of the Frank-Wolfe method were proposed
[@garber2016faster; @allen2017linear].
The online conditional gradient algorithm is due to @Hazan12. An optimal
regret algorithm, attaining the $O(\sqrt{T})$ bound, for the special
case of polyhedral sets was devised in [@Garber13].
Recent works consider accelerating projection-free optimization using
variance reduction [@lan2016conditional; @hazan2016variance], and the
case of projection-free algorithms with stochastic gradient oracles
[@mokhtari2018stochastic; @chen2018projection; @xie2019stochastic].
For an analysis of the running time of the power and Lanczos methods for
computing eigenvectors see [@kuczynski1992estimating]. For modern
algorithms for fast computation of the singular value decomposition see
[@allen2016lazysvd; @musco2015randomized].
## Exercises
# Games, Duality, and Regret {#chap:games}
In this chapter we tie the material covered thus far to some of the most
intriguing concepts in optimization and game theory. We shall use the
existence of online convex optimization algorithms with sublinear regret
to prove two fundamental properties: convex duality in mathematical
optimization, and von Neumann's minimax theorem in game theory.
Historically, the theory of games was developed by von Neumann in the
early 1930's. In an entirely different scientific thread, the theory of
linear programming (LP) was advanced by Dantzig a decade later. Dantzig
describes in his memoir a notable meeting between himself and von
Neumann at Princeton in 1947. In this meeting, according to Dantzig,
after describing the geometric and algebraic versions of linear
programming, von Neumann essentially formulated and proved linear
programming duality:
> "I don't want you to think I am pulling all this out of my sleeve at
> the spur of the moment like a magician. I have just recently completed
> a book with Oscar Morgenstern on the theory of games. What I am doing
> is conjecturing that the two problems are equivalent. The theory that
> I am outlining for your problem is an analogue to the one we have
> developed for games.\"
At that time, the topic of discussion was not the existence and
uniqueness of equilibrium in zero-sum games, which is captured by the
minimax theorem. Both concepts were originally captured and proved using
very different mathematical techniques: the minimax theorem was
originally proved using machinery from mathematical topology, whereas
linear programming duality was shown using convexity and geometric
tools.
More than half a century later, Freund and Schapire tied both concepts,
which were by then known to be strongly related, to regret minimization.
We shall follow their lead in this chapter, introduce the relevant
concepts and give concise proofs using the machinery developed earlier
in this manuscript.
The chapter can be read with basic familiarity with linear programming
and little or no background in game theory. We define linear programming
and zero-sum games succinctly, barely enough to prove the duality
theorem and the minimax theorem. The reader is referred to the numerous
wonderful texts available on linear programming and game theory for a
much more thorough introduction and definitions.
## Linear Programming and Duality
Linear programming is a widely successful and practical convex
optimization framework. Amongst its numerous successes is the Nobel
prize award given on account of its application to economics. It is a
special case of the convex optimization problem from chapter
[2](#chap:opt){reference-type="ref" reference="chap:opt"} in which
$\ensuremath{\mathcal K}$ is a polyhedron (i.e., an intersection of a
finite set of halfspaces) and the objective function is a linear
function. Thus, a linear program can be described as follows, where
$(A \in \mathbb{R}^{n\times m})$: $$\begin{aligned}
\min \quad & c^{\top} \ensuremath{\mathbf x}& \\
\text{s.t.} \quad & A \ensuremath{\mathbf x}\geq b & .
\end{aligned}$$ The above formulation can be transformed into several
different forms via basic manipulations. For example, any LP can be
transformed to an equivalent LP with the variables taking only
non-negative values. This can be accomplished by writing every variable
$x$ as $x = x^{+} - x^{-}$, with $x^{+}, x^{-} \geq 0$. It can be
verified that this transformation leaves us with another LP, whose
variables are non-negative, and contains at most twice as many variables
(see exercises section for more details).
We are now ready to define a central notion in LP and state the duality
theorem:
::: theorem
**Theorem 8.1** (The duality theorem). *Given a linear program:
$$\begin{aligned}
\min \quad & & c^{\top} \ensuremath{\mathbf x}\\
\text{s.t.} \quad & & A \ensuremath{\mathbf x}\geq b , \\
& & \ensuremath{\mathbf x}\geq 0 ,
\end{aligned}$$ its dual program is given by: $$\begin{aligned}
\max \quad & & b^\top \ensuremath{\mathbf y}\\
\text{s.t.} \quad & & A^{\top} \ensuremath{\mathbf y}\leq c, \\
& & \ensuremath{\mathbf y}\geq 0 .
\end{aligned}$$ and the objectives of both problems are either equal or
unbounded.*
:::
Instead of studying duality directly, we proceed to define zero-sum
games and an analogous concept to duality.
## Zero-sum Games and Equilibria
The theory of games is an established research field in economic theory.
We give here brief definitions of the main concepts studied in this
chapter.
Let us start with an example of a zero-sum game we all know: the
rock-paper-scissors game. In this game each of the two players chooses a
strategy: either rock, scissors or paper. The winner is determined
according to the following table, where $0$ denotes a draw, $-1$ denotes
that the row player wins, and $1$ denotes a column player victory.
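Rows correspond to the strategy of the row player, columns to that of
the column player:

|              | Rock | Paper | Scissors |
|:-------------|:----:|:-----:|:--------:|
| **Rock**     | $0$  | $1$   | $-1$     |
| **Paper**    | $-1$ | $0$   | $1$      |
| **Scissors** | $1$  | $-1$  | $0$      |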
The rock-paper-scissors game is called a “zero-sum” game since one can
think of the numbers as losses for the row player (a loss of $-1$
corresponds to a victory, $1$ to a defeat, and $0$ to a draw), in which
case the column player receives a loss which is exactly the negation of
the loss of the row player. Thus the sum of losses which both players
suffer is zero in every outcome of the game.
Notice that we termed one player the “row player” and the other
the “column player”, corresponding to the rows and columns of the matrix of losses. Such a matrix
representation is far more general:
::: {#defn:zsg .definition}
**Definition 8.2**. *A two-player zero-sum-game in normal form is given
by a matrix $A \in [-1,1]^{n \times m}$. The loss for the row player
playing strategy $i \in [n]$ is equal to the negative loss (reward) of
the column player playing strategy $j \in [m]$ and equal to $A_{ij}$.*
:::
The fact that the losses were defined in the range $[-1,1]$ is
arbitrary, as the concept of main importance we define next is invariant
to scaling and shifting by a constant.
A central concept in game theory is equilibrium. There are many
different notions of equilibria. In two-player zero-sum games, a pure
equilibrium is a pair of strategies $(i,j) \in [n] \times [m]$ with the
following property: given that the column player plays $j$, there is no
strategy that dominates $i$ - i.e., every other strategy $k \in [n]$
gives higher or equal loss to the row player. Equilibrium also requires
that a symmetric property for strategy $j$ holds - it is not dominated
by any other strategy given that the row player plays $i$.
It can be shown that some games do not have a pure equilibrium as
defined above, e.g., the rock-paper-scissors game. However, we can
extend the notion of a strategy to a *mixed* strategy - a distribution
over pure strategies. The loss of a mixed strategy is the expected loss
according to the distribution over pure strategies. More formally, if
the row player chooses $\ensuremath{\mathbf x}\in \Delta_n$ and column
player chooses $\ensuremath{\mathbf y}\in \Delta_m,$ then the expected
loss of the row player (which is the negative reward to the column
player) is given by:
$$\textbf{E}[\text{loss}] = \sum_{i \in [n]}{\ensuremath{\mathbf x}_i \sum_{j \in [m]}{\ensuremath{\mathbf y}_j A_{ij}}} = \ensuremath{\mathbf x}^{\top} A \ensuremath{\mathbf y}.$$
We can now generalize the notion of equilibrium to mixed strategies.
Given a row strategy $\ensuremath{\mathbf x}$, it is dominated by
$\tilde{\ensuremath{\mathbf x}}$ with respect to a column strategy
$\ensuremath{\mathbf y}$ if and only if
$$\ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}> \tilde{\ensuremath{\mathbf x}}^\top A \ensuremath{\mathbf y}.$$
We say that $\ensuremath{\mathbf x}$ is dominant with respect to
$\ensuremath{\mathbf y}$ if and only if it is not dominated by any other
mixed strategy. A pair $(\ensuremath{\mathbf x},\ensuremath{\mathbf y})$
is an equilibrium for game $A$ if and only if both
$\ensuremath{\mathbf x}$ and $\ensuremath{\mathbf y}$ are dominant with
respect to each other. It is a good exercise for the reader at this
point to find an equilibrium for the rock-paper-scissors game.
At this point, some natural questions arise: Is there always an
equilibrium in a given zero-sum game? Can it be computed efficiently?
Are there natural repeated-game-playing strategies that reach it?
As we shall see, the answer to all questions above is affirmative. Let
us rephrase these questions in a different way. Consider the optimal row
strategy, i.e., a mixed strategy $\ensuremath{\mathbf x}$, such that the
expected loss is minimized, no matter what the column player does. The
optimal strategy for the row player would be:
$$\ensuremath{\mathbf x}^\star \in \mathop{\mathrm{\arg\min}}_{\ensuremath{\mathbf x}\in \Delta_n} {\max_{\ensuremath{\mathbf y}\in \Delta_m} \ensuremath{\mathbf x}^{\top}A \ensuremath{\mathbf y}}.$$
Notice that we use the notation $\ensuremath{\mathbf x}^\star \in$
rather than $\ensuremath{\mathbf x}^\star =$, since in general the set
of strategies attaining the minimal loss over worst-case column
strategies can contain more than a single strategy. Similarly, the
optimal strategy for the column player would be:
$$\ensuremath{\mathbf y}^\star \in \mathop{\mathrm{\arg\max}}_{\ensuremath{\mathbf y}\in \Delta_m} {\min_{\ensuremath{\mathbf x}\in \Delta_n} \ensuremath{\mathbf x}^{\top}A \ensuremath{\mathbf y}}.$$
Playing these strategies, no matter what the column player does, the row
player would pay no more than
$$\lambda_R = \min_{\ensuremath{\mathbf x}\in \Delta_n} \max_{\ensuremath{\mathbf y}\in \Delta_m} \ensuremath{\mathbf x}^{\top} A \ensuremath{\mathbf y}= \max_{\ensuremath{\mathbf y}\in \Delta_m} {\ensuremath{\mathbf x}^{\star}}^{\top} A \ensuremath{\mathbf y},$$
and column player would earn at least
$$\lambda_C = \max_{\ensuremath{\mathbf y}\in \Delta_m} \min_{\ensuremath{\mathbf x}\in \Delta_n} \ensuremath{\mathbf x}^{\top} A \ensuremath{\mathbf y}= \min_{\ensuremath{\mathbf x}\in \Delta_n} {\ensuremath{\mathbf x}^{\top}} A \ensuremath{\mathbf y}^\star .$$
With these definitions we can state von Neumann's famous minimax
theorem:
::: theorem
**Theorem 8.3** (von Neumann minimax theorem). *In any zero-sum game, it
holds that $\lambda_R = \lambda_C$.*
:::
This theorem answers all of the above questions in the affirmative. The
value $\lambda^\star = \lambda_C = \lambda_R$ is called the **value of
the game**, and its existence and uniqueness imply that any
$\ensuremath{\mathbf x}^\star$ and $\ensuremath{\mathbf y}^\star$ in the
appropriate optimality sets are an equilibrium.
We proceed to give a constructive proof of von Neumann's theorem which
also yields an efficient algorithm as well as natural repeated-game
playing strategies that converge to it.
### Equivalence of von Neumann Theorem and LP duality
The von Neumann theorem is equivalent to the duality theorem of linear
programming in a very strong sense, and either implies the other via
simple reduction. Thus, it suffices to prove only von Neumann's theorem
to prove the duality theorem.
The first part of this equivalence is shown by representing a zero-sum
game as a primal-dual linear program instance, as we do now.
Observe that the definition of an optimal row strategy and value is
equivalent to the following LP: $$\begin{aligned}
\min \quad & & \lambda \\
\text{s.t.} \quad & & \sum{\ensuremath{\mathbf x}_i}=1 \\
& & \forall i \in [m] \ . \ \ensuremath{\mathbf x}^{\top}A e_i \leq \lambda \\
& & \forall i \in [n] \ . \ \ensuremath{\mathbf x}_i \geq 0 .
\end{aligned}$$ To see that the optimum of the above LP is attained at
$\lambda_R$, note that the constraint
$\ensuremath{\mathbf x}^{\top}A e_i \leq \lambda \quad \forall i \in [m]$
is equivalent to the constraint
$\forall \ensuremath{\mathbf y}\in \Delta_m \ . \ \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}\leq \lambda$,
since: $$\begin{aligned}
\forall \ensuremath{\mathbf y}\in \Delta_m \ . \quad \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}= \sum_{j=1}^m {\ensuremath{\mathbf x}^{\top}A e_j} \cdot \ensuremath{\mathbf y}_j \leq \lambda \sum_{j=1}^m {\ensuremath{\mathbf y}_j} = \lambda
\end{aligned}$$
The dual program to the above LP is given by $$\begin{aligned}
\max \quad & & \mu \\
\text{s.t.} \quad & & \sum{\ensuremath{\mathbf y}_i}=1 \\
& & \forall i \in [n] \ . \ e_i^\top A \ensuremath{\mathbf y}\geq \mu \\
& & \forall i \in [m] \ . \ \ensuremath{\mathbf y}_i\geq0 .
\end{aligned}$$
By similar arguments, the dual program precisely defines $\lambda_C$ and
$\ensuremath{\mathbf y}^\star$. The duality theorem asserts that
$\lambda_R = \lambda_C = \lambda^\star$, which gives von Neumann's
theorem.
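To see the primal program in action, the following sketch solves it with scipy's LP solver for a small game; the solver and the rock-paper-scissors instance are illustrative choices, not part of the text.

```python
import numpy as np
from scipy.optimize import linprog

# Solve  min lambda  s.t.  x^T A e_j <= lambda for all j,  sum_i x_i = 1,  x >= 0.
# Variables are z = (x_1, ..., x_n, lambda).
def solve_row_player(A):
    n, m = A.shape
    c = np.r_[np.zeros(n), 1.0]                      # minimize lambda
    A_ub = np.c_[A.T, -np.ones(m)]                   # A^T x - lambda <= 0
    b_ub = np.zeros(m)
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)     # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]        # x >= 0, lambda free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Rock-paper-scissors loss matrix for the row player
A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])
x_star, value = solve_row_player(A)
print(x_star, value)    # approximately (1/3, 1/3, 1/3) and game value 0
```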
The other direction, i.e., showing that von Neumann's theorem implies LP
duality, is slightly more involved. Basically, one can convert any LP
into the format of a zero-sum game. Special care is needed to ensure
that the original LP is indeed feasible, as zero-sum games are always
feasible and linear programs need not be. The details are left as an
exercise at the end of this chapter.
## Proof of von Neumann Theorem
In this section we give a proof of von Neumann's theorem using online
convex optimization algorithms with sublinear regret.
The first part of the theorem, which is also known as weak duality in
the LP context, is rather straightforward:
**Direction 1 ($\lambda_R \geq \lambda_C$):**
::: proof
*Proof.* $$\begin{aligned}
\lambda_R & = \min_{\ensuremath{\mathbf x}\in \Delta_n} \max_{\ensuremath{\mathbf y}\in \Delta_m} \ensuremath{\mathbf x}^{\top} A \ensuremath{\mathbf y}\\
& = \max_{\ensuremath{\mathbf y}\in \Delta_m} {\ensuremath{\mathbf x}^{\star}}^{\top} A \ensuremath{\mathbf y}& \mbox{ definition of $\ensuremath{\mathbf x}^\star$} \\
& \geq \max_{\ensuremath{\mathbf y}\in \Delta_m} \min_{\ensuremath{\mathbf x}\in \Delta_n} \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}\\
& = \lambda_C.
\end{aligned}$$ ◻
:::
The second and main direction, known as strong duality in the LP
context, requires the technology of online convex optimization we have
proved thus far:
**Direction 2 ($\lambda_R \leq \lambda_C$):**
::: proof
*Proof.* We consider a repeated game defined by the $n \times m$ matrix
$A$. For $t=1,2,...,T$, the row player provides a mixed strategy
$\ensuremath{\mathbf x}_t \in \Delta_n$, column player plays mixed
strategy $\ensuremath{\mathbf y}_t \in \Delta_m$, and the loss of the
row player, which equals the reward of the column player, equals
$\ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}_t$.
The row player generates the mixed strategies $\ensuremath{\mathbf x}_t$
according to an OCO algorithm --- specifically using the Exponentiated
Gradient algorithm [\[alg:eg\]](#alg:eg){reference-type="ref"
reference="alg:eg"} from chapter
[5](#chap:regularization){reference-type="ref"
reference="chap:regularization"}. The convex decision set is taken to be
the $n$ dimensional simplex
$\mathcal{K} = \Delta_n = \{ \ensuremath{\mathbf x}\in \mathbb{R}^n \; | \; \ensuremath{\mathbf x}(i) \geq 0, \sum{\ensuremath{\mathbf x}(i)}=1 \}$.
The loss function at time $t$ is given by
$$f_t(\ensuremath{\mathbf x}) = \ensuremath{\mathbf x}^{\top}A\ensuremath{\mathbf y}_t \mbox{\ \ \ ($f_t$ is linear with respect to $\ensuremath{\mathbf x}$) } .$$
Spelling out the EG strategy for this particular instance, we have
$$\ensuremath{\mathbf x}_{t+1}(i) \gets \frac{ \ensuremath{\mathbf x}_t(i) e^{ -\eta A_i \ensuremath{\mathbf y}_t } } { \sum_j \ensuremath{\mathbf x}_{t}(j) e^{ -\eta A_j \ensuremath{\mathbf y}_t} } \;.$$
Then, by appropriate choice of $\eta$ and Corollary
[5.7](#cor:eg){reference-type="ref" reference="cor:eg"}, we have
$$\begin{aligned}
\label{eq:shalom5}
\sum_t{f_t (\ensuremath{\mathbf x}_t)} & \leq & \min_{\ensuremath{\mathbf x}^\star \in \mathcal{K}}{\sum_t{f_t (\ensuremath{\mathbf x}^\star)}} + {\sqrt{2 T \log n }} \;.
\end{aligned}$$
The column player plays her best response to the row player's strategy,
that is: $$\begin{aligned}
\label{shalom2}
\ensuremath{\mathbf y}_t = \arg \max_{\ensuremath{\mathbf y}\in \Delta_m} \ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}.
\end{aligned}$$
Denote the average mixed strategies by:
$$\bar{\ensuremath{\mathbf x}} = \frac{1}{T} \sum_{t=1}^T {\ensuremath{\mathbf x}_t} \quad,\quad \bar{\ensuremath{\mathbf y}} = \frac{1}{T} \sum_{t=1}^T {\ensuremath{\mathbf y}_t} \;.$$
Then, we have $$\begin{aligned}
\lambda_R & = \min_\ensuremath{\mathbf x}\max_\ensuremath{\mathbf y}\ \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}\\
& \leq \max_\ensuremath{\mathbf y}\bar{\ensuremath{\mathbf x}}^\top A \ensuremath{\mathbf y}& \mbox{special case}\\
& = \frac{1}{T} \sum_t \ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}^\star & \mbox{ $\ensuremath{\mathbf y}^\star \in \arg\max_{\ensuremath{\mathbf y}\in \Delta_m} \bar{\ensuremath{\mathbf x}}^\top A \ensuremath{\mathbf y}$ } \\
& \leq \frac{1}{T} \sum_t \ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}_t & \mbox{ by \eqref{shalom2} }\\
& \leq \frac{1}{T} \min_\ensuremath{\mathbf x}\sum_t \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}_t + \sqrt{2 \log n /T} & \mbox{ by \eqref{eq:shalom5} } \\
& = \min_\ensuremath{\mathbf x}\ensuremath{\mathbf x}^\top A \bar{\ensuremath{\mathbf y}} + \sqrt{2 \log n /T} \\
& \leq \max_\ensuremath{\mathbf y}\min_\ensuremath{\mathbf x}\ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}+ \sqrt{2 \log n /T} & \mbox{special case}\\
& = \lambda_C + \sqrt{2 \log n /T}.
\end{aligned}$$ Thus $\lambda_R \leq \lambda_C + \sqrt{2 \log n /T}$. As
$T \rightarrow \infty$, we obtain part 2 of the theorem. ◻
:::
Notice that besides the basic definitions, the only tool used in the
proof is the existence of sublinear regret algorithms for online convex
optimization. The fact that the regret bounds for OCO algorithms were
defined without restricting the cost functions, and that they can be
adversarially chosen, is crucial for the proof. The functions $f_t$ are
defined according to $\ensuremath{\mathbf y}_t$, which is chosen based
on $\ensuremath{\mathbf x}_t$. Thus, the cost functions we constructed
are adversarially chosen after the decision $\ensuremath{\mathbf x}_t$
was made by the row player.
## Approximating Linear Programs
The technique in the preceding section not only proves the minimax
theorem, and thus linear programming duality, but also entails an
efficient algorithm. Using the equivalence of zero-sum games and linear
programs, this efficient algorithm can be used to solve linear
programming. We now spell out the details of this algorithm in the
context of zero-sum games.
Consider the following algorithm:
::: algorithm
::: algorithmic
Input: linear program in zero-sum game format, by matrix
$A \in {\mathbb R}^{n \times m}$. Let
$\ensuremath{\mathbf x}_1 = [ 1/n ,1/n,...,1/n]$. For $t=1,\ldots,T$: compute
$\ensuremath{\mathbf y}_t = \arg\max_{\ensuremath{\mathbf y}\in \Delta_m} {\ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}}$
Update
$\forall i \ . \ \ensuremath{\mathbf x}_{t+1}(i) \gets \frac{ \ensuremath{\mathbf x}_t(i) e^{ -\eta A_i \ensuremath{\mathbf y}_t } } { \sum_j \ensuremath{\mathbf x}_{t}(j) e^{ -\eta A_j \ensuremath{\mathbf y}_t} }$
Return $\bar{\ensuremath{\mathbf x}} = \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf x}_t$
:::
:::
Almost immediately we obtain from the previous derivation the following:
::: lemma
**Lemma 8.4**. *The returned vector $\bar{\ensuremath{\mathbf x}}$ of
Algorithm [\[alg:simpleLP\]](#alg:simpleLP){reference-type="ref"
reference="alg:simpleLP"} is a
$\frac{\sqrt{2 \log n}}{\sqrt{T}}$-approximate solution to the zero-sum
game and linear program it describes.*
:::
::: proof
*Proof.* Following the exact same steps of the previous derivation, we
have $$\begin{aligned}
\max_\ensuremath{\mathbf y}\bar{\ensuremath{\mathbf x}}^\top A \ensuremath{\mathbf y}& = \frac{1}{T} \sum_t \ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}^\star & \mbox{ $\ensuremath{\mathbf y}^\star \in \arg\max_{\ensuremath{\mathbf y}\in \Delta_m} \bar{\ensuremath{\mathbf x}}^\top A \ensuremath{\mathbf y}$ }\\
& \leq \frac{1}{T} \sum_t \ensuremath{\mathbf x}_t^\top A \ensuremath{\mathbf y}_t & \mbox{ by \eqref{shalom2} }\\
& \leq \frac{1}{T} \min_\ensuremath{\mathbf x}\sum_t \ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}_t + \sqrt{2 \log n /T} & \mbox{ by \eqref{eq:shalom5} } \\
& = \min_\ensuremath{\mathbf x}\ensuremath{\mathbf x}^\top A \bar{\ensuremath{\mathbf y}} + \sqrt{2 \log n /T} \\
& \leq \max_\ensuremath{\mathbf y}\min_\ensuremath{\mathbf x}\ensuremath{\mathbf x}^\top A \ensuremath{\mathbf y}+ \sqrt{2 \log n /T} & \mbox{special case}\\
& = \lambda^\star + \sqrt{2 \log n /T} .
\end{aligned}$$ Therefore, for each $i \in [m]$:
$$\bar{\ensuremath{\mathbf x}}^\top A e_i \leq \lambda^\star + \frac{\sqrt{2 \log n}}{\sqrt{T}}$$ ◻
:::
Thus, to obtain an $\varepsilon$-approximate solution, one would need
$\frac{2 \log n}{\varepsilon^2}$ iterations, each involving a simple
update procedure.
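The following is a minimal Python sketch of the algorithm above (NumPy only). It assumes the entries of $A$ lie in $[0,1]$; the step size constant, the horizon, and the random example matrix are illustrative assumptions rather than part of the algorithm's specification.

```python
import numpy as np

def approx_zero_sum(A, T):
    """Approximately solve the zero-sum game with loss matrix A (n x m) by running
    Exponentiated Gradient for the row player against a best-responding column player.
    Returns the averaged row strategy, which is roughly sqrt(2*log(n)/T)-approximately optimal."""
    n, m = A.shape
    eta = np.sqrt(2.0 * np.log(n) / T)      # step size suggested by the EG regret bound (up to constants)
    x = np.full(n, 1.0 / n)                 # x_1 = uniform distribution over [n]
    x_avg = np.zeros(n)
    for _ in range(T):
        # Column player's best response: a pure strategy maximizing x^T A y.
        j = int(np.argmax(x @ A))
        loss = A[:, j]                      # A e_j, the loss of each row strategy
        x_avg += x / T
        # Exponentiated Gradient / multiplicative-weights update of the row player.
        w = x * np.exp(-eta * loss)
        x = w / w.sum()
    return x_avg

# Illustrative usage on a small random game (hypothetical data).
rng = np.random.default_rng(0)
A = rng.random((5, 4))
x_bar = approx_zero_sum(A, T=5000)
print("approximate value of the game:", float(np.max(x_bar @ A)))
```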
## Bibliographic Remarks {#bibliographic-remarks-5}
Game theory was founded in the late 1920s and early 1930s; its cornerstone
was laid in the classic text "Theory of Games and Economic Behavior" by
@neumann44a.
Linear programming is a fundamental mathematical optimization and
modeling tool, dating back to the 1940's and the work of @kantorovich40
and @dantzig51. Duality for linear programming was conceived by von
Neumann, as described by Dantzig in an interview [@dantzig]. For an
in-depth treatment of the theory of linear programming there are numerous
comprehensive texts, e.g., [@BertsimasLP; @matousek2007understanding].
The beautiful connection between low-regret algorithms and solving
zero-sum games was discovered by @Freund199979. More general connections
of convergence of low-regret algorithms to equilibria in games were
studied by @hart2000simple, and more recently in
[@Even-dar:2009; @Roughgarden:2015].
Approximation algorithms that arise via simple Lagrangian relaxation
techniques were pioneered by @PST. See also the survey [@AHK-MW] and
more recent developments that give rise to sublinear time algorithms
[@CHW; @hazan2011beating].
## Exercises
# Learning Theory, Generalization, and Online Convex Optimization {#chap:online2batch}
In our treatment of online convex optimization so far we have only
implicitly discussed learning theory. The framework of OCO was shown to
capture applications such as learning classifiers online, prediction
with expert advice, online portfolio selection and matrix completion,
all of which have a learning aspect. We have introduced the metric of
regret and gave efficient algorithms to minimize regret in various
settings. We have also argued that minimizing regret is a meaningful
approach for many online prediction problems. However, the relation to
other theories of learning has not been discussed thus far.
In this section we draw a formal and strong connection between OCO and
the theory of statistical learning. We begin by giving the basic
definitions of statistical learning theory, and proceed to describe how
the applications studied in this manuscript relate to this model. We
then continue to show how regret minimization in the setting of online
convex optimization gives rise to computationally efficient statistical
learning algorithms.
## Statistical Learning Theory
The theory of statistical learning addresses the problem of learning a
concept from examples. A concept is a mapping from domain ${\mathcal X}$
to labels ${\mathcal Y}$, denoted
$C : {\mathcal X}\mapsto {\mathcal Y}$.
As an example, consider the problem of optical character recognition. In
this setting, the domain ${\mathcal X}$ can be all $n \times n$ bitmap
images, the label set ${\mathcal Y}$ is the Latin (or other) alphabet,
and the concept $C$ maps a bitmap into the character depicted in the
image.
Statistical learning theory models the problem of learning a concept by allowing
access to labelled examples from the target distribution. The learning
algorithm has access to pairs, or samples, from an unknown distribution
$$(\mathbf{x},y) \sim {\mathcal D}\quad , \quad \mathbf{x}\in {\mathcal X}\ , \ y \in {\mathcal Y}.$$
The goal is to be able to predict $y$ as a function of $\mathbf{x}$,
i.e., to **learn** a hypothesis, or a mapping from ${\mathcal X}$ to
${\mathcal Y}$, denoted $h: {\mathcal X}\mapsto {\mathcal Y}$, with
small error with respect to the distribution ${\mathcal D}$. In the case
that the label set is binary ${\mathcal Y}= \{0,1\}$, or discrete such
as in optical character recognition, the *generalization error* of an
hypothesis $h$ with respect to distribution ${\mathcal D}$ is given by
$$\mathop{\mbox{\rm error}}(h) \stackrel{\text{\tiny def}}{=}\mathop{\mbox{\bf E}}_{(\mathbf{x},y)\sim {\mathcal D}} [ h(\mathbf{x}) \neq y ] .$$
More generally, the goal is to learn a hypothesis that minimizes the
loss according to a (usually convex) loss function
$\ell: {\mathcal Y}\times {\mathcal Y}\mapsto {\mathbb R}$. In this case
the generalization error of a hypothesis is defined as:
$$\mathop{\mbox{\rm error}}(h) \stackrel{\text{\tiny def}}{=}\mathop{\mbox{\bf E}}_{(\mathbf{x},y)\sim {\mathcal D}} [ \ell(h(\mathbf{x}), y) ] .$$
We henceforth consider learning algorithms ${\mathcal A}$ that observe a
sample from the distribution ${\mathcal D}$ , denoted
$S \sim {\mathcal D}^m$ for a sample of $m$ examples,
$S = \{(\ensuremath{\mathbf x}_1,y_1),...,(\ensuremath{\mathbf x}_m,y_m)\}$,
and produce a hypothesis
${\mathcal A}(S) : {\mathcal X}\mapsto {\mathcal Y}$ based on this
sample.
The goal of statistical learning can thus be summarised as follows:
::: center
*Given access to i.i.d. samples from an arbitrary distribution over
${\mathcal X}\times {\mathcal Y}$ corresponding to a certain concept,
learn a hypothesis $h : {\mathcal X}\mapsto {\mathcal Y}$ which has
arbitrarily small generalization error with respect to a given loss
function.*
:::
### Overfitting
In the problem of optical character recognition the task is to recognize
a character from a given image in bitmap format. To model it in the
statistical learning setting, the domain ${\mathcal X}$ is the set of
all $n \times n$ bitmap images for some integer $n$. The label set
${\mathcal Y}$ is the Latin alphabet, and the concept $C$ maps a bitmap
into the character depicted in the image.
Consider the naive algorithm which fits a perfect hypothesis to a
given sample, in this case a set of labeled bitmaps. Namely, ${\mathcal A}(S)$ is
the hypothesis which correctly maps any given bitmap input
$\ensuremath{\mathbf x}_i$ to its correct label $y_i$, and maps all
unseen bitmaps to the character $``1."$
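As a small illustration, the following Python sketch implements this memorizing "learner" on hypothetical data: it reproduces the training labels exactly and maps every unseen input to the fixed character "1".

```python
def memorize(sample):
    """Return the hypothesis that replays the training labels and outputs '1' elsewhere."""
    table = {x: y for x, y in sample}
    return lambda x: table.get(x, "1")

# Hypothetical training set: (bitmap encoded as a tuple of bits, character label).
train = [((0, 1, 1, 0), "a"), ((1, 1, 0, 0), "b")]
h = memorize(train)

print(all(h(x) == y for x, y in train))   # True: zero error on the training set
print(h((1, 0, 1, 0)))                    # "1": every unseen bitmap gets the same label
```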
Clearly, this hypothesis does a very poor job of generalizing from
experience: all images that have not yet been observed will be classified
without regard to their properties, which will surely be erroneous most of
the time. However, the training set, or observed examples, is perfectly
classified by this hypothesis!
This disturbing phenomenon is called "overfitting," a central concern in
machine learning. Before continuing to add the necessary components in
learning theory to prevent overfitting, we turn our attention to a
formal statement of when overfitting can appear.
### No free lunch?
The following theorem shows that learning, as stated in the goal of
statistical learning theory, is impossible without restricting the
hypothesis class being considered. For simplicity, we consider the
zero-one loss in this section.
::: {#thm:nfl .theorem}
**Theorem 9.1** (No Free Lunch Theorem). *Consider any domain
$\mathcal{X}$ of size $|\mathcal{X}| = 2m > 4$, and any algorithm
${\mathcal A}$ which outputs a hypothesis ${\mathcal A}(S)$ given a
sample $S$ of size $m$. Then there exists a concept
$C: \mathcal{X} \rightarrow \{0,1\}$ and a distribution $\mathcal{D}$
such that:*
- *The generalization error of the concept $C$ is zero.*
- *With probability at least $\frac{1}{10}$ over the choice of the sample
  $S$, the hypothesis returned by ${\mathcal A}$ has error
  $\mathop{\mbox{\rm error}}({\mathcal A}(S)) \geq \frac{1}{10}$.*
:::
The proof of this theorem is based on the probabilistic method, a useful
technique for showing the existence of combinatorial objects by showing
that the probability they exist in some distributional setting is
bounded away from zero. In our setting, instead of explicitly
constructing a concept $C$ with the required properties, we show it
exists by a probabilistic argument.
::: proof
*Proof.* We show that for any learning algorithm, there is some learning
task (i.e., "hard" concept) that it will not learn well. Formally, take
$\mathcal{D}$ to be the uniform distribution over ${\mathcal X}$. Our
proof strategy will be to show the following inequality, where we take a
uniform distribution over all concepts ${\mathcal X}\mapsto \{0,1\}$:
$$Q \overset{def}{=} \mathop{\mbox{\bf E}}_{C:{\mathcal X}\rightarrow\{0,1\}} [\mathop{\mbox{\bf E}}_{S\sim \mathcal{D}^m} [\mathop{\mbox{\rm error}}({\mathcal A}(S))]] \geq \frac{1}{4} .$$
After showing this step, we will use Markov's Inequality to conclude the
theorem.
We proceed by using the linearity property of expectations, which allows
us to swap the order of expectations, and then conditioning on the event
that $\ensuremath{\mathbf x}\in S$.
$$\begin{aligned}
Q & = \mathop{\mbox{\bf E}}_{S} [\mathop{\mbox{\bf E}}_{C} [\mathop{\mbox{\bf E}}_{\ensuremath{\mathbf x}\in \mathcal{X}} [{\mathcal A}(S)(\ensuremath{\mathbf x}) \neq C(\ensuremath{\mathbf x})]]] \\
& = \mathop{\mbox{\bf E}}_{S,\ensuremath{\mathbf x}} [ \mathop{\mbox{\bf E}}_{C} [{\mathcal A}(S)(\ensuremath{\mathbf x}) \neq C(\ensuremath{\mathbf x})|\ensuremath{\mathbf x}\in S] \Pr [\ensuremath{\mathbf x}\in S] ] \\ & + \mathop{\mbox{\bf E}}_{S,\ensuremath{\mathbf x}} [ \mathop{\mbox{\bf E}}_{C} [{\mathcal A}(S)(\ensuremath{\mathbf x}) \neq C(\ensuremath{\mathbf x})|\ensuremath{\mathbf x}\not \in S] \Pr[ \ensuremath{\mathbf x}\not \in S] ].
\end{aligned}$$
All terms in the above expression are non-negative; in particular, the first
term can be lower bounded by zero. Also note that since the domain size is
$2m$ and the sample is of size $|S| \leq m$, we have
$\Pr(\ensuremath{\mathbf x}\not \in S ) \geq \frac{1}{2}$. Finally,
observe that
$\Pr[ {\mathcal A}(S)(\ensuremath{\mathbf x}) \neq C(\ensuremath{\mathbf x})] = \frac{1}{2}$
for all $\ensuremath{\mathbf x}\not\in S$ since we are given that the
"true" concept $C$ is chosen uniformly at random over all possible
concepts. Hence, we get that:
$$Q \geq 0 + \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4},$$ which is the
intermediate step we wanted to show. The random variable
$\mathop{\mbox{\bf E}}_{S\sim \mathcal{D}^m} [\mathop{\mbox{\rm error}}({\mathcal A}(S))]$,
viewed as a function of the random concept $C$, attains values in the range
$[0,1]$. Since its expectation over the choice of $C$ is at least
$\frac{1}{4}$, it must attain a value of at least $\frac{1}{4}$ for at least
one concept. Thus, there exists a concept such that
$$\mathop{\mbox{\bf E}}_{S\sim \mathcal{D}^m} [\mathop{\mbox{\rm error}}({\mathcal A}(S))] \geq \frac{1}{4}$$
where, as assumed beforehand, $\mathcal{D}$ is the uniform distribution
over ${\mathcal X}$.
We now conclude with Markov's inequality (applied to the random variable
$1 - \mathop{\mbox{\rm error}}({\mathcal A}(S))$): since the expected error is at
least one fourth, the probability over the choice of the sample that the error
of ${\mathcal A}$ is at least one tenth satisfies
$$\Pr_{S \sim {\mathcal D}^m} \left( \mathop{\mbox{\rm error}}({\mathcal A}(S) ) \geq \frac{1}{10}\right) \geq \frac{\frac{1}{4}-\frac{1}{10}}{1-\frac{1}{10}} > \frac{1}{10}.$$ ◻
:::
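The following is a small numerical illustration of the argument (a sketch with hypothetical parameters, not part of the proof): drawing random concepts over a domain of size $2m$ and training a memorizing learner, which replays the observed labels and outputs a fixed label on unseen points, yields an average error above $\frac{1}{4}$, consistent with the bound on $Q$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 10, 20_000            # hypothetical domain size 2m and number of repetitions
errors = []
for _ in range(trials):
    concept = rng.integers(0, 2, size=2 * m)        # a uniformly random concept C: X -> {0,1}
    S = rng.integers(0, 2 * m, size=m)              # m i.i.d. uniform samples from X
    prediction = np.zeros(2 * m, dtype=int)         # memorizing learner: fixed label 0 on unseen points
    prediction[S] = concept[S]
    errors.append(np.mean(prediction != concept))   # generalization error under the uniform distribution
print(np.mean(errors))   # empirically around 0.3, above the lower bound Q >= 1/4
```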
### Examples of learning problems
The conclusion of the previous theorem is that the space of possible
concepts being considered in a learning problem needs to be restricted
for any meaningful guarantee. Thus, learning theory concerns itself with
concept classes, also called hypothesis classes, which are sets of
possible hypotheses from which one would like to learn. We denote the
concept (hypothesis) class by
${\mathcal H}= \{h : {\mathcal X}\mapsto {\mathcal Y}\}$.
Common examples of learning problems that can be formalized in this
model and the corresponding definitions include:
- Optical character recognition: In the problem of optical character
recognition the domain ${\mathcal X}$ consists of all $n \times n$
bitmap images for some integer $n$, the label set ${\mathcal Y}$ is
a certain alphabet, and the concept $C$ maps a bitmap image into the
character depicted in it. A common (finite) hypothesis class for
this problem is the set of all decision trees with bounded depth.
- Text classification: In the problem of text classification the
domain is a subset of Euclidean space, i.e.,
${\mathcal X}\subseteq {\mathbb R}^d$. Each document is represented
in its bag-of-words representation, and $d$ is the size of the
dictionary. The label set ${\mathcal Y}$ is binary, where one
indicates a certain classification or topic, e.g., "Economics", and
zero indicates all others.
A commonly used hypothesis class for this problem is the set of all
bounded-norm vectors in Euclidean space
${\mathcal H}= \{ h_\mathbf{w}\ , \ \mathbf{w}\in {\mathbb R}^d \ , \ \|\mathbf{w}\|_2^2 \leq \omega \}$
such that
$h_\mathbf{w}(\ensuremath{\mathbf x}) = \mathbf{w}^\top \ensuremath{\mathbf x}$.
The loss function is chosen to be the hinge loss, i.e.,
$\ell(\hat{y},y) = \max\{0 , 1 - \hat{y} y \}$; a small numerical
sketch of this setup is given after this list.
- Recommendation systems: recall the online convex optimization
formulation of this problem in section
[7.2](#sec:recommendation_systems){reference-type="ref"
reference="sec:recommendation_systems"}. A statistical learning
formulation for this problem is very similar. The domain is a direct
sum of two sets
${\mathcal X}= {\mathcal X}_1 \oplus {\mathcal X}_2$. Here
$\ensuremath{\mathbf x}_1 \in {\mathcal X}_1$ is a certain media
item, and $\ensuremath{\mathbf x}_2 \in {\mathcal X}_2$ is a certain
person. The label set
${\mathcal Y}$ is binary, where one indicates a positive sentiment
for the person to the particular media item, and zero a negative
sentiment.
A commonly considered hypothesis class for this problem is the set
of all mappings
${\mathcal X}_1 \times {\mathcal X}_2 \mapsto {\mathcal Y}$ that,
when viewed as a matrix in
${\mathbb R}^{|{\mathcal X}_1| \times |{\mathcal X}_2|}$, have
bounded algebraic rank.
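As a concrete illustration of the text classification example above, the following sketch evaluates a bounded-norm linear hypothesis $h_\mathbf{w}$ with the hinge loss on hypothetical bag-of-words vectors; labels are encoded as $\pm 1$, the customary encoding for the hinge loss, and all numbers are illustrative.

```python
import numpy as np

def hinge_loss(w, X, y):
    """Average hinge loss of the linear hypothesis h_w(x) = w^T x on the sample (X, y)."""
    margins = y * (X @ w)
    return float(np.maximum(0.0, 1.0 - margins).mean())

# Hypothetical bag-of-words vectors over a dictionary of size d = 5, labels in {-1, +1}.
X = np.array([[1.0, 0.0, 2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, -1.0])

w = np.array([0.5, -0.2, 0.1, 0.0, 0.3])      # a hypothesis with ||w||_2^2 <= omega = 1
print("squared norm:", float(w @ w), " average hinge loss:", hinge_loss(w, X, y))
```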
### Defining generalization and learnability
We are now ready to give the fundamental definition of statistical
learning theory, called Probably Approximately Correct (PAC) learning:
::: {#def:learnability .definition}
**Definition 9.2** (PAC learnability). *A hypothesis class
${\mathcal H}$ is PAC learnable with respect to loss function
$\ell : {\mathcal Y}\times {\mathcal Y}\mapsto {\mathbb R}$ if the
following holds. There exists an algorithm ${\mathcal A}$ that accepts
$S_T = \{(\mathbf{x}_t,y_t), \ t \in [T]\}$ and returns hypothesis
${\mathcal A}(S_T) \in {\mathcal H}$ that satisfies: for any
$\varepsilon,\delta > 0$ there exists a sufficiently large natural
number $T = T(\varepsilon,\delta)$, such that for any distribution
${\mathcal D}$ over pairs $(\mathbf{x},y)$ and $T$ samples from this
distribution, it holds that with probability at least $1-\delta$
$$\mathop{\mbox{\rm error}}( {\mathcal A}(S_T) ) \leq \varepsilon.$$*
:::
A few remarks regarding this definition:
- The set $S_T$ of samples from the underlying distribution is called
the training set. The error in the above definition is called the
**generalization error**, as it describes the overall error of the
concept as generalized from the observed training set. The behavior
of the number of samples $T$ as a function of the parameters
$\varepsilon,\delta$ and the concept class is called the **sample
complexity** of ${\mathcal H}$.
- The definition of PAC learning says nothing about computational
efficiency. Computational learning theory usually requires, in
addition to the definition above, that the algorithm ${\mathcal A}$
is efficient, i.e., polynomial running time with respect to
$\frac{1}{\varepsilon},\log \frac{1}{\delta}$ and the representation of the
hypothesis class. The representation size for a discrete set of
concepts is taken to be the logarithm of the number of hypotheses in
${\mathcal H}$, denoted $\log |{\mathcal H}|$.
- If the hypothesis ${\mathcal A}(S_T)$ returned by the learning
algorithm belongs to the hypothesis class ${\mathcal H}$, as in the
definition above, we say that ${\mathcal H}$ is **properly
learnable**. More generally, ${\mathcal A}$ may return hypothesis
from a different hypothesis class, in which case we say that
${\mathcal H}$ is **improperly learnable**.
The fact that the learning algorithm can learn up to *any* desired
accuracy $\varepsilon > 0$ is called the **realizability assumption**
and greatly reduces the generality of the definition. It amounts to
requiring that a hypothesis with near-zero error belongs to the
hypothesis class. In many cases, concepts are only approximately
learnable by a given hypothesis class, or inherent noise in the problem
prohibits realizability (see exercises).
This issue is addressed in the definition of a more general learning
concept, called **agnostic learning**:
::: {#def:agnosticlearnability .definition}
**Definition 9.3** (agnostic PAC learnability). *The hypothesis class
${\mathcal H}$ is agnostically PAC learnable with respect to loss
function $\ell : {\mathcal Y}\times {\mathcal Y}\mapsto {\mathbb R}$ if
the following holds. There exists an algorithm ${\mathcal A}$ that
accepts $S_T = \{(\mathbf{x}_t,y_t), \ t \in [T]\}$ and returns
hypothesis ${\mathcal A}(S_T)$ that satisfies: for any
$\varepsilon,\delta > 0$ there exists a sufficiently large natural
number $T = T(\varepsilon,\delta)$ such that for any distribution
${\mathcal D}$ over pairs $(\mathbf{x},y)$ and $T$ samples from this
distribution, it holds that with probability at least $1-\delta$
$$\mathop{\mbox{\rm error}}( {\mathcal A}(S_T) ) \leq \min_{h \in {\mathcal H}} \{ \mathop{\mbox{\rm error}}(h) \} + \varepsilon.$$*
:::
With these definitions, we can state the fundamental theorem of
statistical learning theory for finite hypothesis classes:
::: theorem
**Theorem 9.4** (PAC learnability of finite hypothesis classes). *Every
finite concept class ${\mathcal H}$ is agnostically PAC learnable with
sample complexity that is
$\mathop{\mbox{\rm poly}}(\frac{1}{\varepsilon}, \log \frac{1}{\delta}, \log |{\mathcal H}|)$.*
:::
In the following sections we prove this theorem, and in fact a more
general statement that holds also for certain infinite hypothesis
classes. The complete characterization of which infinite hypothesis
classes are learnable is a deep and fundamental question, whose complete
answer was given by Vapnik and Chervonenkis (see bibliography). The
question of which (finite or infinite) hypothesis classes are
**efficiently** PAC learnable, especially in the improper sense, is
still at the forefront of learning theory today.
## Agnostic Learning using Online Convex Optimization
In this section we show how to use online convex optimization for
agnostic PAC learning. Following the paradigm of this manuscript, we
describe and analyze a reduction from agnostic learning to online convex
optimization. The reduction is formally described in Algorithm
[\[alg:reductionOCO2LRN\]](#alg:reductionOCO2LRN){reference-type="ref"
reference="alg:reductionOCO2LRN"}.
::: algorithm
::: algorithmic
Input: OCO algorithm ${\mathcal A}$, convex hypothesis class
${\mathcal H}\subseteq {\mathbb R}^d$, convex loss function $\ell$. Let
$h_1 \leftarrow {\mathcal A}(\emptyset)$. Draw labeled example
$(\mathbf{x}_t,y_t) \sim {\mathcal D}$. Let
$f_t(h) = \ell( h(\mathbf{x}_t) , y_t)$. Update
$$h_{t+1} = {\mathcal A}( f_1,...,f_t) .$$ Return
$\bar{h} = \frac{1}{T} \sum_{t=1}^T h_t$.
:::
:::
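The following Python sketch instantiates the reduction with online gradient descent playing the role of the OCO algorithm ${\mathcal A}$, the squared loss, and a Euclidean ball as the hypothesis class; these concrete choices, the step size, and the synthetic data distribution are illustrative assumptions.

```python
import numpy as np

def oco_to_batch(draw_example, T, d, eta=0.05, radius=1.0):
    """Run online gradient descent on f_t(h) = (h^T x_t - y_t)^2 for T i.i.d. examples
    and return the averaged iterate h_bar, as in the reduction above (a sketch)."""
    h = np.zeros(d)                      # h_1: the zero hypothesis
    h_bar = np.zeros(d)
    for _ in range(T):
        x, y = draw_example()            # draw (x_t, y_t) ~ D
        grad = 2.0 * (h @ x - y) * x     # gradient of the squared loss at h
        h = h - eta * grad
        norm = np.linalg.norm(h)
        if norm > radius:                # project back onto the hypothesis class {||h|| <= radius}
            h *= radius / norm
        h_bar += h / T
    return h_bar

# Illustrative usage: a noisy linear concept (hypothetical data distribution).
rng = np.random.default_rng(2)
w_true = np.array([0.6, -0.3, 0.1])

def draw():
    x = rng.normal(size=3)
    return x, float(w_true @ x + 0.1 * rng.normal())

print(oco_to_batch(draw, T=20_000, d=3))   # the average iterate approaches w_true
```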
For this reduction we assumed that the concept (hypothesis) class is a
convex subset of Euclidean space. A similar reduction can be carried out
for discrete hypothesis classes (see exercises). In fact, the technique
we explore below will work for any hypothesis set ${\mathcal H}$ that
admits a low regret algorithm, and can be generalized to infinite
hypothesis classes that are known to be learnable.
Let
$h^\star = \arg\min_{h \in {\mathcal H}} \{ \mathop{\mbox{\rm error}}(h) \}$
be the hypothesis in the class ${\mathcal H}$ that minimizes the
generalization error. Using the assumption that ${\mathcal A}$
guarantees sublinear regret, our simple reduction implies PAC learning,
as given in the following theorem.
::: {#thm:OCO2LRN .theorem}
**Theorem 9.5**. *Let ${\mathcal A}$ be an OCO algorithm whose regret
after $T$ iterations is guaranteed to be bounded by
$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})$. Then for any
$\delta > 0$, with probability at least $1-\delta$, it holds that
$$\mathop{\mbox{\rm error}}(\bar{h})\le \mathop{\mbox{\rm error}}(h^\star) + \frac{ \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) }{T} +\sqrt{\frac{8\log (\frac{2}{\delta})}{T}}.$$
In particular, for
$T = O( \frac{1}{\varepsilon^2} \log \frac{1}{\delta} + T_\varepsilon({\mathcal A}) )$,
where $T_\varepsilon({\mathcal A})$ is the smallest integer $T$ such that
$\frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} \leq \varepsilon$,
we have
$$\mathop{\mbox{\rm error}}(\bar{h})\le \mathop{\mbox{\rm error}}(h^*) + \varepsilon.$$*
:::
How general is the theorem above? In the previous chapters we have
described and analyzed OCO algorithms with regret guarantees that behave
asymptotically as $O(\sqrt{T})$ or better. This translates to sample
complexity of $O(\frac{1}{\varepsilon^2} \log \frac{1}{\delta})$ (see
exercises), which is known to be tight for certain scenarios.
To prove this theorem we need some tools from probability theory, such
as the concentration inequalities that we survey next.
### Reminder: measure concentration and martingales
Let us briefly discuss the notion of a martingale in probability theory.
For intuition, it is useful to recall the simple random walk. Let $X_i$
be a Rademacher random variable which takes values $$X_i = {
\left\{
\begin{array}{ll}
{1}, & {\text{with probability } \quad \frac{1}{2} } \\
{-1}, & {\text{with probability } \quad \frac{1}{2} }
\end{array}
\right. } .$$ A simple symmetric random walk is described by the sum of
such random variables, depicted in figure
[9.1](#fig:randomwalk){reference-type="ref" reference="fig:randomwalk"}.
Let $X = \sum_{i=1}^T X_i$ be the position after $T$ steps of this
random walk. The expectation and variance of this random variable are
$\mathop{\mbox{\bf E}}[ X] = 0 \ , \ \mbox{Var}(X) = T$.
::: center
![Symmetric random walk: 12 trials of 200 steps. The black dotted lines
show the functions $\pm \sqrt{x}$ and $\pm 2 \sqrt{x}$, respectively.
](images/random_walk.jpg){#fig:randomwalk width="3.0in"}
:::
The phenomenon of measure concentration addresses the probability that a
random variable attains values within the range of its standard deviation.
For the random variable $X$, this probability is much higher than one
would expect using only the first and second moments. Using only the
variance, it follows from Chebyshev's inequality that
$$\Pr\left[ |X| \geq c \sqrt{T} \right] \leq \frac{1}{c^2}.$$ However,
the concentration of $X$ around its mean is in fact much tighter, and the
tail probability can be bounded by the Hoeffding-Chernoff lemma as follows
$$\begin{aligned}
\label{lem:chernoff}
\Pr[|X| \ge c \sqrt{T}] \le 2 e^{\frac{-c^2}{2}} & \mbox { Hoeffding-Chernoff lemma.}
\end{aligned}$$
Thus, deviating by a constant from the standard deviation decreases the
probability exponentially, rather than polynomially. This well-studied
phenomenon generalizes to sums of weakly dependent random variables and
martingales, which are important for our application.
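The gap between the polynomial and exponential bounds can be observed numerically. The following sketch simulates the random walk above (the number of trials and the constant $c$ are arbitrary choices) and compares the empirical tail probability with the Chebyshev and Hoeffding-Chernoff bounds.

```python
import numpy as np

rng = np.random.default_rng(3)
T, trials, c = 200, 100_000, 3.0                    # arbitrary illustrative parameters

steps = rng.choice([-1, 1], size=(trials, T))       # i.i.d. Rademacher steps
X = steps.sum(axis=1)                               # endpoint of each simple random walk

print("empirical tail :", float(np.mean(np.abs(X) >= c * np.sqrt(T))))
print("Chebyshev bound:", 1.0 / c**2)               # polynomial decay in c
print("Chernoff bound :", 2.0 * np.exp(-c**2 / 2))  # exponential decay in c
```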
::: definition
**Definition 9.6**. *A sequence of random variables $X_1, X_2,...$ is
called a *martingale* if it satisfies:
$$\mathop{\mbox{\bf E}}[X_{t+1}|X_{t}, X_{t-1}...X_{1}] = X_{t} \quad \forall \; t>0.$$*
:::
A similar concentration phenomenon to the random walk sequence occurs in
martingales. This is captured in the following theorem by Azuma.
::: theorem
**Theorem 9.7** (Azuma's inequality). *Let
$\big \lbrace X_{i} \big \rbrace _{i=1}^{T}$ be a martingale of $T$
random variables that satisfy $|X_{i} - X_{i+1}| \leq {1}$. Then:
$$\Pr \left[ |X_{T} - X_{0}|>c \right] \le 2 e^{\frac{-c^2}{2T}}.$$*
:::
Azuma's inequality also holds in a one-sided form, $$\label{eqn:azuma2}
\Pr \left[ X_{T} - X_{0}> c \right] \le e^{ \frac{-c^2}{2T}} \quad , \quad \Pr \left[X_{0} - X_{T}> c \right] \le e^{ \frac{-c^2}{2T}}.$$
### Analysis of the reduction
We are ready to prove the performance guarantee for the reduction in
Algorithm
[\[alg:reductionOCO2LRN\]](#alg:reductionOCO2LRN){reference-type="ref"
reference="alg:reductionOCO2LRN"}. Assume for simplicity that the loss
function $\ell$ is bounded in the interval $[0,1]$, i.e.,
$$\forall \hat{y},y \in {\mathcal Y}\ , \ \ell(\hat{y}, y) \in [0,1].$$
::: proof
*Proof of Theorem [9.5](#thm:OCO2LRN){reference-type="ref"
reference="thm:OCO2LRN"}.* We start by defining a sequence of random
variables that form a martingale. Let
$$Z_{t} \stackrel{\text{\tiny def}}{=}\mathop{\mbox{\rm error}}(h_{t})- \ell(h_{t}(\ensuremath{\mathbf x}_{t}),y_{t}) , \quad X_{t} \stackrel{\text{\tiny def}}{=}\sum_{i=1}^t Z_{i}.$$
Let us verify that $\{X_t\}$ is indeed a bounded martingale. Notice that
by definition of $\mathop{\mbox{\rm error}}(h)$, we have that
$$\mathop{\mbox{\bf E}}_{(\mathbf{x},y)\sim {\mathcal D}}[Z_{t} | X_{t-1}] = \mathop{\mbox{\rm error}}(h_t) - \mathop{\mbox{\bf E}}_{(\mathbf{x},y)\sim {\mathcal D}} [\ell(h_t(\ensuremath{\mathbf x}) , y)] = 0 .$$
Thus, by the definition of $Z_t$, $$\begin{aligned}
\mathop{\mbox{\bf E}}[ X_{t+1}|X_{t},...X_{1} ] & = \mathop{\mbox{\bf E}}[Z_{t+1} | X_t]+ X_{t}
= X_t.
\end{aligned}$$ In addition, by our assumption that the loss is bounded,
we have that (see exercises) $$\begin{aligned}
\label{eqn:martingale-bound}
|X_{t} - X_{t-1}| = |Z_{t}| \le 1.
\end{aligned}$$ Therefore we can apply Azuma's theorem to the martingale
$\{X_t\}$, or rather its consequence
[\[eqn:azuma2\]](#eqn:azuma2){reference-type="eqref"
reference="eqn:azuma2"}, and get
$$\Pr[X_{T} > c] \le e^{\frac{-c^2}{2T}}.$$ Plugging in the definition
of $X_{T}$, dividing by $T$ and using
$c = \sqrt{2T\log (\frac{2}{\delta})}$: $$\label{eqn:oco2lrn1}
\Pr\left[
\frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_{t})- \frac{1}{T}\sum_{t=1}^{T}{\ell(h_{t}(\ensuremath{\mathbf x}_{t}),y_{t})} > \sqrt{\frac{2\log (\frac{2}{\delta})}{T}}
\right] \le \frac{\delta}{2}.$$
A similar martingale can be defined for $h^\star$ rather than $h_t$, and
repeating the analogous definitions and applying Azuma's inequality we
get: $$\label{eqn:oco2lrn2}
\Pr\left [ \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h^\star)- \frac{1}{T}\sum_{t=1}^{T}{\ell(h^\star(\ensuremath{\mathbf x}_{t}),y_{t})} < - \sqrt{\frac{2\log (\frac{2}{\delta})}{T}}
\right] \le \frac{\delta}{2}.$$ For notational convenience, let us use
the following notation:
$$\Gamma_1 = \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_{t})- \frac{1}{T}\sum_{t=1}^{T}{\ell(h_{t}(\ensuremath{\mathbf x}_{t}),y_{t})},$$
$$\Gamma_2 = \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h^\star)- \frac{1}{T}\sum_{t=1}^{T}{\ell(h^\star(\ensuremath{\mathbf x}_{t}),y_{t})}.$$
Next, observe that $$\begin{aligned}
& \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_t)- \mathop{\mbox{\rm error}}(h^\star) \\
& = \Gamma_1 - \Gamma_2
+ \frac{1}{T}\sum_{t=1}^{T}{\ell(h_{t}(\ensuremath{\mathbf x}_{t}),y_{t})} -
\frac{1}{T}\sum_{t=1}^{T}{\ell(h^\star (\ensuremath{\mathbf x}_{t}),y_{t})} \\
& \leq \frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} + \Gamma_1 - \Gamma_2,
\end{aligned}$$ where in the last inequality we have used the definition
$f_t(h) = \ell(h(\mathbf{x}_t),y_t)$. From the above and Inequalities
[\[eqn:oco2lrn1\]](#eqn:oco2lrn1){reference-type="eqref"
reference="eqn:oco2lrn1"},
[\[eqn:oco2lrn2\]](#eqn:oco2lrn2){reference-type="eqref"
reference="eqn:oco2lrn2"} we get $$\begin{aligned}
& \Pr \left [ \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_{t})- \mathop{\mbox{\rm error}}(h^\star) > \frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} +2\sqrt{\frac{2\log (\frac{2}{\delta})}{T}} \right] \\
&
\le \Pr \left [ \Gamma_1 - \Gamma_2 > 2\sqrt{\frac{2\log (\frac{2}{\delta})}{T}} \right] \\
& \leq \Pr \left [ \Gamma_1 > \sqrt{\frac{2\log (\frac{2}{\delta})}{T}} \right] + \Pr \left [ \Gamma_2 \le - \sqrt{\frac{2\log (\frac{2}{\delta})}{T}} \right] \\
& \leq \delta. \quad\quad\quad \mbox{Inequalities \eqref{eqn:oco2lrn1}, \eqref{eqn:oco2lrn2}}
\end{aligned}$$ By convexity we have that
$\mathop{\mbox{\rm error}}(\bar{h}) \le \frac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_{t})$.
Thus, with probability at least $1-\delta$,
$$\mathop{\mbox{\rm error}}(\bar{h}) \le \dfrac{1}{T}\sum_{t=1}^{T}\mathop{\mbox{\rm error}}(h_{t}) \le \mathop{\mbox{\rm error}}(h^\star) + \frac{ \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) }{T} +\sqrt{\frac{8\log (\frac{2}{\delta})}{T}}.$$ ◻
:::
## Learning and Compression
Thus far we have considered finite and certain infinite hypothesis
classes, and shown that they are efficiently learnable if there exists
an efficient regret-minimization algorithm for a corresponding OCO
setting.
In this section we describe yet another property which is sufficient for
PAC learnability: the ability to compress the training set. This
property is particularly easy to state and use, especially for infinite
hypothesis classes. It does not, however, imply efficient algorithms.
Intuitively, if a learning algorithm is capable of expressing a hypothesis
using a small fraction of the training set, we will show that it
generalizes well to unseen data. For simplicity, we only consider
learning problems that satisfy a variant of the realizability
assumption, i.e., the compression scheme generates a hypothesis that
attains zero error.
More formally, we define the notion of a compression scheme for a given
learning problem as follows. The definition and theorem henceforth can
be generalized to allow for loss functions, but for simplicity, consider
only the zero-one loss function for this section.
::: definition
**Definition 9.8**. *(Compression Scheme) A distribution ${\mathcal D}$
over ${\mathcal X}\times {\mathcal Y}$ admits a compression scheme of
size $k$, realized by an algorithm ${\mathcal A}$, if the following
holds. For any $T > k$, let $S_T = \{(\mathbf{x}_t,y_t), \ t \in [T]\}$
be a sample from ${\mathcal D}$. There exists an
$S' \subseteq S_T \ , \ |S'| = k$, such that the algorithm
${\mathcal A}$ accepts the set of $k$ examples $S'$, and returns a
hypothesis ${\mathcal A}(S') \in \{ {\mathcal X}\mapsto {\mathcal Y}\}$,
which satisfies:
$$\mathop{\mbox{\rm error}}_{S_T}( {\mathcal A}(S') ) = 0 .$$*
:::
The main conclusion of this section is that a learning problem that
admits a compression scheme of size $k$ is PAC learnable with sample
complexity proportional to $k$. This is formally given in the following
theorem.
::: {#thm:compression2generalization .theorem}
**Theorem 9.9**. *Let ${\mathcal D}$ be a data distribution that admits
a compression scheme of size $k$ realized by algorithm ${\mathcal A}$.
Then with probability at least $1-\delta$ over the choice of a training
set $|S_T|=T$, it holds that
$$\mathop{\mbox{\rm error}}( {\mathcal A}(S_T)) \leq \frac{8 k \log \frac{T}{\delta}}{T} .$$*
:::
::: proof
*Proof.* Denote by $\mathop{\mbox{\rm error}}_S(h)$ the error of an
hypothesis $h$ on a sample $S$ of i.i.d. examples, where the sample is
taken independently of $h$. Since the examples are chosen independently,
the probability that a hypothesis with
$\mathop{\mbox{\rm error}}(h) > \varepsilon$ has
$\mathop{\mbox{\rm error}}_S(h) = 0$ is at most $(1-\varepsilon)^{|S|}$.
Denote the event of $h$ satisfying these two conditions by
${h \in {\mbox{bad}}}$.
Consider a compression scheme for distribution ${\mathcal D}$ of size
$k$, realized by ${\mathcal A}$, and a sample of size $|S_T|=T \gg k$.
By definition of a compression scheme, the hypothesis returned by
${\mathcal A}$ is based on $k$ examples chosen from the set
$S' \subseteq S_T$. We can bound the probability of the event that
$\mathop{\mbox{\rm error}}_{S_T}({\mathcal A}(S')) = 0$ and
$\mathop{\mbox{\rm error}}({\mathcal A}(S')) > \varepsilon$, denoted by
${{\mathcal A}(S') \in \mbox{bad}}$, as follows, $$\begin{aligned}
& \Pr[ {{\mathcal A}(S') \in \mbox{bad}} ] \\
& = \sum_{S' \subseteq S_T, |S'|=k} \Pr[ {{\mathcal A}(S') \in \mbox{bad}} ] \cdot \Pr[ S'] & \mbox{law of total probability} \\
& \leq \binom{T}{k} (1-\varepsilon)^T .
\end{aligned}$$
For $\varepsilon= \frac{8k \log \frac{T}{\delta} }{T}$, we have that
$$\binom{T}{k} (1-\varepsilon)^T \leq T^k e^{-\varepsilon T} \leq \delta .$$
Since the compression scheme is guaranteed to return a hypothesis such
that $\mathop{\mbox{\rm error}}_{S_T}({\mathcal A}(S')) = 0$, this
implies that with probability at least $1-\delta$, the hypothesis
${\mathcal A}(S')$ has
$\mathop{\mbox{\rm error}}({\mathcal A}(S')) \leq \varepsilon$. ◻
:::
An important example of the use of compression schemes to bound the
generalization error is for the hypothesis class of hyperplanes in
${\mathbb R}^d$. It is left as an exercise to show that this hypothesis
class admits a compression scheme of size $d$.
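As a simpler one-dimensional warm-up to that exercise, the following sketch realizes a compression scheme of size one for threshold concepts on the real line, assuming the realizable case $C(x) = 1$ if and only if $x \geq \theta$ (the sample below is hypothetical): keeping only the smallest positively labeled example suffices to recover a hypothesis with zero training error.

```python
def compress(sample):
    """Keep a single example: the smallest positively labeled point (size-1 compression)."""
    positives = [x for x, y in sample if y == 1]
    return [(min(positives), 1)] if positives else [(float("inf"), 1)]

def decompress(kept):
    """Reconstruct the threshold hypothesis from the single retained example."""
    theta = kept[0][0]
    return lambda x: int(x >= theta)

sample = [(0.1, 0), (0.3, 0), (0.9, 1), (1.2, 1)]      # realizable: labels are 1[x >= 0.5]
h = decompress(compress(sample))
print(all(h(x) == y for x, y in sample))               # True: zero error on the training set
```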
## Bibliographic Remarks {#bibliographic-remarks-6}
The foundations of statistical and computational learning theory were
put forth in the seminal works of @Vapnik1998 and @Valiant1984
respectively. There are numerous comprehensive texts on statistical and
computational learning theory, see e.g., [@Kearns1994], and the recent
text [@shalev-shwartz_ben-david_2014].
Reductions from the online to the statistical (a.k.a. "batch") setting
were initiated by Littlestone [@Littlestone89]. Tighter and more general
bounds were explored in [@Cesa06; @CesaGen08; @Zhang05].
The probabilistic method is attributed to Paul Erdős, see the
illuminating text of Alon and Spencer [@AlonS92].
The relationship between compression and PAC learning was studied in the
seminal work of @LittlestoneW86. For more on the relationship and
historical connections between statistical learning and compression see
the inspiring chapter in [@avibook]. More recently @moran2016sample
[@david2016statistical] show that compression is equivalent to
learnability in general supervised learning tasks and give quantitative
bounds for this relationship.
The use of compression for proving generalization error bounds has been
applied in [@hanneke2019sample] for regression and in
[@gottlieb2018near; @kontorovich2017nearest] for nearest neighbor
classification. Another application is the recent work of
@bousquet2020proper which gives optimal generalization error bounds for
support vector machines using compression.
## Exercises
# Learning in Changing Environments {#chap:adaptive}
In online convex optimization the decision maker iteratively makes a
decision without knowledge of the future, and pays a cost based on her
decision and the observed outcome. The algorithms that we have studied
thus far are designed to perform nearly as well as the best single
decision in hindsight. The performance metric we have advocated for,
average *regret* of the online player, approaches zero as the number of
game iterations grows.
In scenarios in which the outcomes are sampled from some (unknown)
distribution, regret minimization algorithms effectively "learn\" the
environment and approach the optimal strategy. This was formalized in
chapter [9](#chap:online2batch){reference-type="ref"
reference="chap:online2batch"}. However, if the underlying distribution
changes, no such claim can be made.
Consider for example the online shortest path problem we have studied in
the first chapter. It is a well observed fact that traffic in networks
exhibits changing cyclic patterns. A commuter may choose one path from
home to work on a weekday, but a completely different path on the
weekend when traffic patterns are different. Another example is the
stock market: in a bull market the investor may want to purchase
technology stocks, but in a bear market perhaps they would shift their
investments to gold or government bonds.
When the environment undergoes many changes, standard regret may not be
the best measure of performance. In changing environments, the online
convex optimization algorithms we have studied thus far for strongly
convex or exp-concave loss functions exhibit undesirable "static\"
behavior, and converge to a fixed solution.
In this chapter we introduce and study a generalization of the concept
of regret called *adaptive* regret, to allow for a changing prediction
strategy. We start with examining the notion of adapting in the problem
of prediction from expert advice. We then continue to the more
challenging setting of online convex optimization, and derive efficient
algorithms for minimizing this more refined regret metric.
## A Simple Start: Dynamic Regret
Before giving the main performance metric studied in this chapter, we
consider the first natural approach: measuring regret w.r.t. any
sequence of decisions. Clearly, in general it is impossible to compete
with an arbitrary changing benchmark. However, it is possible to give a
refined analysis that shows what happens to the regret of an online
convex optimization algorithm vs. changing decisions.
More precisely, define the *dynamic regret* of an OCO algorithm with
respect to a sequence
$\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T$ as:
$$\begin{aligned}
\ensuremath{\mathrm{{DynamicRegret}}}_T({\mathcal A},\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T) & \stackrel{\text{\tiny def}}{=}& \sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t) - \sum_{t=1}^T f_t(\ensuremath{\mathbf u}_t)
\end{aligned}$$
To analyze the dynamic regret, some measure of the complexity of the
sequence $\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T$ is
necessary. Let
${\mathcal P}(\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T)$
be the path length of the comparison sequence defined as
$${\mathcal P}(\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T) = \sum_{t=1}^{T-1} \|\ensuremath{\mathbf u}_t - \ensuremath{\mathbf u}_{t+1}\| + 1.$$
It is natural to expect the regret to scale with the path length, as
indeed the following theorem shows. For a fixed comparator
$\ensuremath{\mathbf u}_t = \ensuremath{\mathbf x}^\star$, the path
length is one, and thus Theorem
[10.1](#thm:dynamic-regret){reference-type="ref"
reference="thm:dynamic-regret"} recovers the $O(\sqrt{T})$ standard
regret bound. For simplicity, we assume that the time horizon $T$ is
known ahead of time, and so is the path length of the comparator
sequence, although these limitations can be removed (see bibliographic
section).
::: {#thm:dynamic-regret .theorem}
**Theorem 10.1**. *Online Gradient Descent (algorithm
[\[alg:ogd\]](#alg:ogd){reference-type="ref" reference="alg:ogd"}) with
step size
$\eta \approx \sqrt{\frac{{\mathcal P}(\ensuremath{\mathbf u}_1,...,\ensuremath{\mathbf u}_T) }{T}}$
guarantees the following dynamic regret bound:
$$\ensuremath{\mathrm{{DynamicRegret}}}_T({\mathcal A},\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T) = O( \sqrt{T {\mathcal P}(\ensuremath{\mathbf u}_1,\ldots,\ensuremath{\mathbf u}_T) } )$$*
:::
::: proof
*Proof.* Using our notation, and following the steps of the proof of
Theorem [3.1](#thm:gradient){reference-type="ref"
reference="thm:gradient"}, $$\begin{aligned}
\|\mathbf{x}_{t+1}-\ensuremath{\mathbf u}_t\|^2\ \leq \|\mathbf{y}_{t+1}-\ensuremath{\mathbf u}_t\|^2 = \|\mathbf{x}_t- \ensuremath{\mathbf u}_t\|^2 + \eta^2
\|\nabla_t\|^2 -2 \eta \nabla_t^\top (\mathbf{x}_t -\ensuremath{\mathbf u}_t) .
\end{aligned}$$ Thus, $$\begin{aligned}
2 \nabla_t^\top (\mathbf{x}_t-\ensuremath{\mathbf u}_t)\ &\leq \frac{ \|\mathbf{x}_t-
\ensuremath{\mathbf u}_t\|^2-\|\mathbf{x}_{t+1}-\ensuremath{\mathbf u}_t\|^2}{\eta} + \eta G^2
\end{aligned}$$ Using convexity and summing this inequality across time
we get $$\begin{aligned}
& 2 \left( \sum_{t=1}^T f_t(\mathbf{x}_t)-f_t(\ensuremath{\mathbf u}_t) \right ) \leq 2\sum_{t=1}^T \nabla_t^\top (\ensuremath{\mathbf x}_{t}- \ensuremath{\mathbf u}_t) \\
&\leq \sum_{t=1}^T \frac{ \|\mathbf{x}_t-
\ensuremath{\mathbf u}_t\|^2-\|\mathbf{x}_{t+1}-\ensuremath{\mathbf u}_t\|^2}{\eta} + \eta G^2 T \\
& = \frac{1}{\eta} \sum_{t=1}^T \left( \|\ensuremath{\mathbf x}_t\|^2 - \|\ensuremath{\mathbf x}_{t+1}\|^2 + 2 \ensuremath{\mathbf u}_t^\top (\ensuremath{\mathbf x}_{t+1} - \ensuremath{\mathbf x}_{t}) \right)
+ \eta G^2 T \\
&\leq \frac{2}{\eta} \left( D^2 + \sum_{t=2}^{T} \ensuremath{\mathbf x}_t^\top ( \ensuremath{\mathbf u}_{t-1} - \ensuremath{\mathbf u}_{t}) + \ensuremath{\mathbf u}_T^\top \ensuremath{\mathbf x}_{T+1} - \ensuremath{\mathbf u}_1^\top \ensuremath{\mathbf x}_1 \right) + \eta G^2 T \\
&\leq \frac{3}{\eta} \left( D^2 + D \sum_{t=2}^{T} \| \ensuremath{\mathbf u}_{t-1} - \ensuremath{\mathbf u}_{t} \| \right) + \eta G^2 T & \ensuremath{\mathbf u}_t \in \ensuremath{\mathcal K}\\
& \leq \frac{3D^2 }{\eta} {\mathcal P}(\ensuremath{\mathbf u}_1,...,\ensuremath{\mathbf u}_{T} ) + \eta G^2 T .
\end{aligned}$$ The theorem now follows by choice of $\eta$. ◻
:::
This simple modification to the analysis of online gradient descent
naturally extends to online mirror descent, as well as to other notions
of path distance of the comparison sequence.
We now turn to another metric of performance that requires more advanced
methods than we have seen thus far. This metric can be shown to be more
general than dynamic regret, in the sense that the bounds we prove also
imply low dynamic regret.
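The following is a minimal one-dimensional sketch of Theorem 10.1 in Python; the drifting comparator sequence, the loss functions $f_t(x) = (x-u_t)^2$, and all constants are illustrative assumptions. Online gradient descent over $[-1,1]$ with the step size tuned to the path length is run against the drifting minimizers, and the resulting dynamic regret is compared with the $O(\sqrt{T {\mathcal P}})$ scale.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5_000
u = np.clip(np.cumsum(rng.normal(scale=0.005, size=T)), -1.0, 1.0)  # slowly drifting comparators

path = 1.0 + float(np.abs(np.diff(u)).sum())   # path length P(u_1, ..., u_T)
eta = np.sqrt(path / T)                        # step size of Theorem 10.1, up to problem constants

x, dyn_regret = 0.0, 0.0
for t in range(T):
    dyn_regret += (x - u[t]) ** 2              # f_t(x) = (x - u_t)^2, and f_t(u_t) = 0
    grad = 2.0 * (x - u[t])
    x = float(np.clip(x - eta * grad, -1.0, 1.0))   # gradient step, then projection onto [-1, 1]

print("dynamic regret:", dyn_regret, " O(sqrt(T * P)) scale:", np.sqrt(T * path))
```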
## The Notion of Adaptive Regret
The main performance metric we consider in this chapter is designed to
measure the performance of a decision maker in a changing environment.
It is formally given in the following definition.
::: {#def:adaptiveregret .definition}
**Definition 10.2**. *The adaptive regret of an online convex
optimization algorithm ${\mathcal A}$ is defined as the maximum regret
it achieves over any contiguous time interval. Formally,
$$\begin{aligned}
\ensuremath{\mathrm{{AdaptiveRegret}}}_T({\mathcal A}) & \stackrel{\text{\tiny def}}{=}& \sup_{I = [r,s] \subseteq [T]} \left\{ \sum_{t=r}^s f_t(\ensuremath{\mathbf x}_t) - \min_{\ensuremath{\mathbf x}^*_I \in \ensuremath{\mathcal K}} \sum_{t=r}^s f_t(\ensuremath{\mathbf x}^*_I) \right\} \\
& = & \sup_{I = [r,s] \subseteq [T]} \left\{ \ensuremath{\mathrm{{Regret}}}_{[r,s]}({\mathcal A}) \right\} .
\end{aligned}$$*
:::
As opposed to standard regret, the power of this definition stems from
the fact that the comparator is allowed to change. In fact, it is
allowed to change indefinitely with every interval of time.
For an algorithm with low adaptive regret, as opposed to standard
regret, how would its performance guarantee differ in a changing
environment? Consider the problem of portfolio selection, for which time
can be divided into disjoint segments with different characteristics:
bear market in the first $T/2$ iterations and bull market in the last
$T/2$ iterations. A (standard) sublinear regret algorithm is only
required to converge to the average of both optimal portfolios, clearly
an undesirable outcome. However, an algorithm with sublinear adaptive
regret bounds would *necessarily* converge to the optimal portfolio in
both intervals.
Not only does this definition make intuitive sense, but it generalizes
other natural notions. For example, consider an OCO setting that can be
divided into $k$ intervals, such that in each a different comparator is
optimal. Then an adaptive regret guarantee of
$\ensuremath{\mathrm{{AdaptiveRegret}}}_T = o(T)$ would translate to
overall regret of
$k \times \ensuremath{\mathrm{{AdaptiveRegret}}}_{T/k}$ compared to the
best $k$-shifting comparator.
### Weakly and strongly adaptive algorithms
The Online Gradient Descent algorithm over general convex losses, with
step sizes $O(\frac{1}{\sqrt{t}})$, attains an adaptive regret guarantee
of $$\ensuremath{\mathrm{{AdaptiveRegret}}}_T(OGD) = O(\sqrt{T}) ,$$ and
this bound is tight. This is a simple consequence of the analysis we
have already seen in chapter [3](#chap:first order){reference-type="ref"
reference="chap:first order"}, and left as an exercise. Unfortunately
this guarantee is meaningless for intervals of length $o(\sqrt{T})$.
Recall that for strongly convex loss functions, the OGD algorithm with
the optimal learning rate schedule attains $O(\log T)$ regret. However,
it does **not** attain any non-trivial adaptive regret guarantee: its
adaptive regret can be as large as $\Omega(T)$, and this is also left as
an exercise.
An OCO algorithm ${\mathcal A}$ is said to be *strongly adaptive* if its
adaptive regret can be bounded by its regret over the interval up to
logarithmic terms in $T$, i.e. $$\begin{aligned}
& \ensuremath{\mathrm{{AdaptiveRegret}}}_T({\mathcal A}) = O(\ensuremath{\mathrm{{Regret}}}_I({\mathcal A}) \cdot \log^{O(1)} T).
\end{aligned}$$
The natural question is thus: are there algorithms that attain the
optimal regret guarantee, and simultaneously the optimal adaptive regret
guarantee? As we shall see, the answer is affirmative in a strong sense:
we shall describe and analyze algorithms that are optimal in both
metrics. Furthermore, these algorithms can be implemented with small
computational overhead over the non-adaptive methods we have already
studied.
## Tracking the Best Expert
Consider the fundamental problem studied in the first chapter of this
text, prediction from expert advice, but with a small twist. Instead of
a static best expert, consider the setting in which different experts
are the "best expert\" in different time intervals. More precisely,
consider the situation in which time $[T]$ can be divided into $k$
disjoint intervals such that each admits a different "locally best\"
expert. Can we learn to track the best expert?
This tracking problem was historically the first motivation to study
adaptivity in online learning. Indeed, as shown by Herbster and Warmuth
(see bibliographic section), there is a natural algorithm that attains
optimal regret bounds.
The Fixed Share algorithm, described in Algorithm
[\[alg:fixed-share\]](#alg:fixed-share){reference-type="ref"
reference="alg:fixed-share"}, is a variant of the Hedge Algorithm
[\[alg:Hedge\]](#alg:Hedge){reference-type="ref" reference="alg:Hedge"}.
On top of the familiar multiplicative updates, it adds a uniform
exploration term whose purpose is to prevent the weight of any expert
from becoming too small. This provably yields a regret bound that
tracks the best expert in any interval.
::: algorithm
::: algorithmic
Input: parameters $\alpha > 0$ and $\delta < \frac{1}{2}$. Initialize
$\forall i \in [N] , p_i^1 = \frac{1}{N}$. For $t=1,\ldots,T$: play
$\ensuremath{\mathbf x}_t = \sum_{i=1}^N p_t^i \ensuremath{\mathbf x}_t^{i}$.
After receiving $f_t$, update for $1 \leq i \leq N$\
$$\hat{p}^{i}_{t+1} = \frac{p^{i}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}}$$
Fixed-share step:
$$p_{t+1}^{i} = (1 - \delta )\hat{p}^{i}_{t+1} + \frac{\delta}{N}$$
:::
:::
In line with the notation we have used throughout this manuscript, we
denote decisions in a convex decision set by
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$. An expert $i$
suggests decision $\ensuremath{\mathbf x}_t^i$, and suffers loss
according to a convex loss function denoted
$f_t(\ensuremath{\mathbf x}_t^i)$. The main performance guarantee for
the Fixed Share algorithm is given in the theorem below.
::: {#thm:fixed-share .theorem}
**Theorem 10.3**. *Given a sequence of $\alpha$-exp-concave loss
functions, the Fixed-Share algorithm with $\delta = \frac{1}{2 T}$
guarantees
$$\sup_{I = [r,s] \subseteq [T]} \left\{ \sum_{t=r}^s f_t(\ensuremath{\mathbf x}_t) - \min_{i^* \in [N]} \sum_{t=r}^s f_t(\ensuremath{\mathbf x}^{i^*}_t) \right\} = O\left(\frac{1}{\alpha} \log (N T) \right) .$$*
:::
Notice that this is a different guarantee than adaptive regret as per
Definition [10.2](#def:adaptiveregret){reference-type="ref"
reference="def:adaptiveregret"}, as the decision set is discrete.
However, it is a crucial component in the adaptive algorithms we will
explore in the next section.
As a direct conclusion from this theorem, it can be shown (see
exercises) that if the best expert changes $k$ times in a sequence of
length $T$, the overall regret compared to the best expert in every
interval is bounded by $$O \left( k \log \frac{NT}{k} \right) .$$
To prove this theorem, we start with the following lemma, which is a
fine-grained analysis of the multiplicative weights properties:
::: {#lem:round-reg .lemma}
**Lemma 10.4**. *For all $1 \leq i \leq N$,
$$f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{i}_t) \leq \alpha^{-1} (\log \hat{p}^{i}_{t+1} - \log \hat{p}^{i}_{t} - \log (1 - \delta) ) .$$*
:::
::: proof
*Proof.* Using the $\alpha$-exp concavity of $f_t$, $$\begin{aligned}
e^{-\alpha f_t(\ensuremath{\mathbf x}_t)} & = & e^{-\alpha f_t(\sum_{j=1}^N p^{j}_t \ensuremath{\mathbf x}^{j}_t)} \geq \sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}.
\end{aligned}$$ Taking the natural logarithm,
$$f_t(\ensuremath{\mathbf x}_t) \leq -\alpha^{-1} \log \sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)} \nonumber$$
Hence, $$\begin{aligned}
f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{i}_t) & \leq \alpha^{-1}(\log e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)} - \log \sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}) \\
& = \alpha^{-1} \log \frac{e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}} \\
& = \alpha^{-1} \log \left(\frac{1}{p^{i}_t} \cdot
\frac{p^{i}_te^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j=1}^N p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}} \right) \\
& = \alpha^{-1} \log \frac{\hat{p}^{i}_{t+1}}{p^{i}_t} = \alpha^{-1} (\log \hat{p}^{i}_{t+1} - \log {p}^{i}_{t} )
\end{aligned}$$ The proof is completed by observing that: $$\begin{aligned}
\log p^{i}_t & = \log \left( (1 - \delta)\hat{p}^{i}_{t} + \frac{\delta}{N} \right) \\
& \geq \log \hat{p}^{i}_t + \log (1 - \delta ) .
\end{aligned}$$ ◻
:::
Theorem [10.3](#thm:fixed-share){reference-type="ref"
reference="thm:fixed-share"} can now be derived as a corollary:
::: proof
*Theorem [10.3](#thm:fixed-share){reference-type="ref"
reference="thm:fixed-share"}.* By summing up over the interval
$I = [r,s]$, and using the lower bound on $p_t^i$, we have
$$\begin{aligned}
& \sum_{t \in I } f_t(\ensuremath{\mathbf x}_t) - \sum_{t \in I} f_t(\ensuremath{\mathbf x}^{i}_t ) \\
& \leq \sum_{t \in I} \alpha^{-1} (\log \hat{p}^{i}_{t+1} - \log \hat{p}^{i}_{t} - \log (1 - \delta) ) \\
& \leq \frac{1}{\alpha} \left[ \log \frac{1}{\hat{p}^i_r} - |I| \log (1-\delta) \right] \\
& \leq \frac{1}{\alpha} \left[ \log \frac{N}{\delta} + 2 \delta |I| \right] & \hat{p}^i_r \geq \frac{\delta}{N}, \delta < \frac{1}{2} \\
& \leq \frac{1}{\alpha} \log 2 N T + \frac{1}{\alpha} & \delta = \frac{1}{2T}
\end{aligned}$$ ◻
:::
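The following Python sketch implements the Fixed Share update for a finite set of experts. The loss matrix, the choice $\alpha = 1$, and the horizon are illustrative assumptions, and the experts' predictions $\ensuremath{\mathbf x}_t^i$ are abstracted into their losses $f_t(\ensuremath{\mathbf x}_t^i)$.

```python
import numpy as np

def fixed_share(expert_losses, alpha):
    """Run Fixed Share on a T x N array of expert losses f_t(x_t^i) (assumed to come from
    alpha-exp-concave losses, so alpha serves as the learning rate). Returns the T x N weights p_t."""
    T, N = expert_losses.shape
    delta = 1.0 / (2.0 * T)                             # the choice of delta from Theorem 10.3
    p = np.full(N, 1.0 / N)
    weights = np.zeros((T, N))
    for t in range(T):
        weights[t] = p
        p_hat = p * np.exp(-alpha * expert_losses[t])   # multiplicative update
        p_hat /= p_hat.sum()
        p = (1.0 - delta) * p_hat + delta / N           # fixed-share (uniform mixing) step
    return weights

# Illustrative usage: expert 0 is best in the first half, expert 1 in the second half.
T, N = 1_000, 3
losses = np.ones((T, N))
losses[: T // 2, 0] = 0.0
losses[T // 2 :, 1] = 0.0
w = fixed_share(losses, alpha=1.0)
print(w[T // 2 - 1].round(3), w[-1].round(3))   # the weight mass tracks the locally best expert
```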
## Efficient Adaptive Regret for Online Convex Optimization {#sec:basic}
The Fixed-Share algorithm described in the previous section is extremely
practical and efficient for discrete sets of experts. However, to
exploit the full power of OCO we require an efficient algorithm for
continuous decision sets.
Consider for example the problems of online portfolio selection and
online shortest paths: naïvely applying the Fixed-Share algorithm is
computationally inefficient. Instead, we seek an algorithm which takes
advantage of the efficient representation of these problems in the
language of convex programming.
We present such a method called FLH, or Follow the Leading History. The
basic idea is to think of different online convex optimization
algorithms starting at different time points as experts, and apply a
version of Fixed Share to these experts.
::: algorithm
::: algorithmic
Let ${\mathcal A}$ be an OCO algorithm. Initialize $p_1^1 = 1$ Set
$\forall j \leq t \ , \ \ensuremath{\mathbf x}^{j}_t \leftarrow {\mathcal A}(f_j,...,f_{t-1})$
[]{#eqn:shalom12 label="eqn:shalom12"} Play
$\ensuremath{\mathbf x}_t = \sum_{j=1}^t p^{j}_t \ensuremath{\mathbf x}^{j}_t$.
After receiving $f_t$, update for $1 \leq i \leq t$\
$$\hat{p}^{i}_{t+1} = \frac{p^{i}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j=1}^t p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}}$$
Mixing step: set $p^{t+1}_{t+1} = \frac{1}{t+1}$ and
$$\forall i \neq t+1 \ , \ p^{i}_{t+1} = \left(1 - \frac{1}{t+1} \right)\hat{p}^{i}_{t+1} .$$
:::
:::
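The following is a minimal Python sketch of this template (not part of the
formal development). The interface of the base OCO algorithm, namely objects
exposing deterministic `predict()` and `update(loss_fn)` methods, and the
factory `make_expert` producing a fresh copy of ${\mathcal A}$ started at the
current round, are illustrative assumptions.

```python
import numpy as np

class FLH:
    """Minimal sketch of Follow the Leading History.

    `make_expert` (an assumed factory) returns a fresh copy of the base OCO
    algorithm A, exposing deterministic .predict() and .update(loss_fn)
    methods; `alpha` is the exp-concavity parameter of the losses."""

    def __init__(self, make_expert, alpha):
        self.make_expert = make_expert
        self.alpha = alpha
        self.experts = []          # expert j is a copy of A started at round j
        self.p = np.array([])      # weights over the active experts

    def predict(self):
        # Start the expert for the current round and apply the mixing step:
        # the newcomer gets weight 1/(t+1), the rest are scaled by 1 - 1/(t+1).
        t = len(self.experts)
        self.experts.append(self.make_expert())
        if t == 0:
            self.p = np.array([1.0])
        else:
            self.p = np.append((1.0 - 1.0 / (t + 1)) * self.p, 1.0 / (t + 1))
        preds = np.array([e.predict() for e in self.experts])
        return self.p @ preds      # x_t = sum_j p_t^j x_t^j

    def update(self, loss_fn):
        # Multiplicative-weights step on the revealed exp-concave loss f_t.
        losses = np.array([loss_fn(e.predict()) for e in self.experts])
        w = self.p * np.exp(-self.alpha * losses)
        self.p = w / w.sum()
        for e in self.experts:     # every active copy of A observes f_t
            e.update(loss_fn)
```

Calling `predict()` and then `update(f_t)` each round makes expert $j$
exactly ${\mathcal A}$ run on $f_j,\ldots,f_{t-1}$, as in the pseudocode
above.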
The main performance guarantee is given in the following theorem.
::: {#thm:main-flh1 .theorem}
**Theorem 10.5**. *Let ${\mathcal A}$ be an OCO algorithm for
$\alpha$-exp-concave loss function with
$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})$. Then,
$$\mbox{\ensuremath{\mathrm{{AdaptiveRegret}}}}_T(FLH) \leq \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) + O(\frac{1}{\alpha} \log T) .$$
In particular, taking ${\mathcal A}\equiv ONS$ guarantees
$$\mbox{\ensuremath{\mathrm{{AdaptiveRegret}}}}_T = O(\frac{1}{\alpha} \log
T) .$$*
:::
Notice that FLH invokes ${\mathcal A}$ at iteration $t$ at most $T$
times. Hence its running time is bounded by $T$ times that of
${\mathcal A}$. This can still be prohibitive as the number of
iterations grows large. In the next section, we show how the ideas from
this algorithm can give rise to an efficient adaptive algorithm with
only $O(\log T)$ computational overhead and slightly worse regret
bounds.
The analysis of FLH is very similar to that of Fixed Share, with the
main subtleties due to the fact that the time horizon $T$ is not assumed
to be known ahead of time, and thus the number of experts varies with
time.
Instead of giving the full analysis, which is deferred to the exercises,
we give a simplified version of FLH which does assume a priori knowledge
of $T$, and whose analysis can be directly reduced to that of Theorem
[10.3](#thm:fixed-share){reference-type="ref"
reference="thm:fixed-share"}.
::: algorithm
::: algorithmic
Let ${\mathcal A}$ be an OCO algorithm. Set $N=T, \delta= \frac{1}{2T}$.
For all $i \leq t$, set
$\ensuremath{\mathbf x}^{i}_t \leftarrow {\mathcal A}(f_i,...,f_{t-1})$.
Otherwise, set $\ensuremath{\mathbf x}_t^i = \mathbf{0}$. Apply the
Fixed Share algorithm with expert predictions
$\ensuremath{\mathbf x}_t^i$.
:::
:::
The simplified version of FLH is given in Algorithm
[\[alg:simple-flh\]](#alg:simple-flh){reference-type="ref"
reference="alg:simple-flh"}, and it guarantees the following adaptive
regret bound.
::: {#thm:simple-flh .theorem}
**Theorem 10.6**. *Algorithm
[\[alg:simple-flh\]](#alg:simple-flh){reference-type="ref"
reference="alg:simple-flh"} guarantees:
$$\mbox{\ensuremath{\mathrm{{AdaptiveRegret}}}}_T(\mbox{Simple-FLH}) \leq \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) + O(\frac{1}{\alpha} \log T) .$$*
:::
::: proof
*Proof.* Applying Theorem [10.3](#thm:fixed-share){reference-type="ref"
reference="thm:fixed-share"} to the experts defined in Simple FLH,
guarantees for every interval in time $I = [r,s]$, and by choice of $N$,
for every $i \leq s$, $$\begin{aligned}
\sum_{t \in I } f_t(\ensuremath{\mathbf x}_t) - \sum_{t \in I} f_t(\ensuremath{\mathbf x}^{i}_t ) \leq \frac{1}{\alpha} \log 2 N T + \frac{1}{\alpha} = O(\frac{1}{\alpha} \log T) .
\end{aligned}$$ In particular, consider the sequence of predictions
given by the $r$'th expert, for which we have $$\begin{aligned}
\sum_{t \in I } f_t(\ensuremath{\mathbf x}^{r}_t ) - \min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \sum_{t \in I} f_t(\ensuremath{\mathbf x}) \leq \ensuremath{\mathrm{{Regret}}}_{s-r+1}({\mathcal A}) \leq \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}).
\end{aligned}$$ The theorem now follows since this holds for every
interval $I \subseteq [T]$. ◻
:::
## \* Computationally Efficient Methods {#sec:pruning}
In the previous section we studied adaptive regret, introduced and
analyzed an algorithm that attains near optimal adaptive regret bounds.
However, FLH suffers from a significant computational and memory
overhead: it requires maintaining $O(T)$ copies of an online convex
optimization algorithm. This computational overhead, which is
proportional to the number of iterations, can be prohibitive in many
applications. In this section our goal is to implement the algorithmic
template of FLH efficiently and using little space.
To be more precise, henceforth denote the running time per iteration of
algorithm ${\mathcal A}$ as $V_t({\mathcal A})$. Recall that at time
$t$, FLH stores all predictions
$\{\ensuremath{\mathbf x}_t^i \ | \ i \in [t]\}$ and has to compute
weights for all of them. This requires running time of at least
$O(V_t({\mathcal A}) \cdot t)$.
The FLH2 algorithm, described in Algorithm
[\[alg:flh2\]](#alg:flh2){reference-type="ref" reference="alg:flh2"},
significantly cuts down this running time to being only logarithmic in
the current time iteration parameter $t$. To achieve this, FLH2 applies
a pruning method to cut down the number of active online algorithms from
$t$ to $O(\log t)$. However, its adaptive regret guarantee is slightly
worse, and suffers a multiplicative factor of $O(\log T)$ as compared to
FLH.
::: algorithm
::: algorithmic
Let ${\mathcal A}$ be an OCO algorithm. Initialize
$p_1^1 = 1, S_1 = \{1\}$
Set
$\forall j \in S_t \ , \ \ensuremath{\mathbf x}^{j}_t \leftarrow {\mathcal A}(f_j,...,f_{t-1})$
Play
$\ensuremath{\mathbf x}_t = \sum_{j \in S_t} p^{j}_t \ensuremath{\mathbf x}^{j}_t$.
[]{#algstep:mw label="algstep:mw"} After receiving $f_t$, perform update
for $i \in S_t$:\
$$\hat{p}^{i}_{t+1} = \frac{p^{i}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}}$$
[]{#algstep:mixing label="algstep:mixing"} Pruning: set
$S_{t+1} \leftarrow \mbox{Prune}(S_t) \cup \{t+1\}$. Set
$\hat{p}^{t+1}_{t+1}$ to $\frac{1}{t}$, and update:
$$\forall i \in S_{t+1} \ . \ p^{i}_{t+1} = \frac{ \hat{p}^{i}_{t+1}} {\sum_{j \in S_{t+1}} \hat{p}^j_{t+1} }$$
:::
:::
Before giving the exact details of this pruning method, we state the
performance guarantee for FLH2.
::: {#thm:flh2 .theorem}
**Theorem 10.7**. *Given an OCO algorithm ${\mathcal A}$ with regret
$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})$ and running time
$V_T(A)$, algorithm $FLH2$ guarantees:
$V_T(FLH2) \leq V_T({\mathcal A}) \log T$ and
$$\mbox{\ensuremath{\mathrm{{AdaptiveRegret}}}}_T(FLH2) \leq \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) \log T + O(\frac{1}{\alpha} \log^2 T) .$$*
:::
The main conclusion from this theorem is obtained by using FLH2 with
${\mathcal A}$ being the ONS algorithm from chapter
[4](#chap:second order-methods){reference-type="ref"
reference="chap:second order-methods"}. This gives adaptive regret of
$O(\frac{1}{\alpha} \log^2 T)$ and running time which is polynomial in
natural parameters of the problem and poly-logarithmic in the number of
iterations.
Before diving into the analysis, we explain the main new ingredient. At
the heart of this algorithm is a new method for incorporating history.
We will show that it suffices to store only $O(\log t)$ experts at time
$t$, rather than all $t$ experts as in FLH.
At time $t$, there is a working set $S_t$ of experts. In FLH, this set
can be thought of as containing $E^1,\cdots,E^t$, where each $E^i$ is the
algorithm ${\mathcal A}$ starting from iteration $i$. For the next
round, a new expert $E^{t+1}$ is added to get $S_{t+1}$. The complexity
and regret of FLH is directly related to the cardinality of these sets.
The key to decreasing the sizes of the sets $S_t$ is to also *remove*
(or *prune*) some experts. Once an expert is removed, it is never used
again. The algorithm will perform the multiplicative update and mixing
steps (steps [\[algstep:mw\]](#algstep:mw){reference-type="ref"
reference="algstep:mw"} and
[\[algstep:mixing\]](#algstep:mixing){reference-type="ref"
reference="algstep:mixing"} in algorithm
[\[alg:flh2\]](#alg:flh2){reference-type="ref" reference="alg:flh2"})
only on the working set of experts.
The problem of maintaining the set of active experts can be thought of
as the following abstract data streaming problem. Suppose the integers
$1,2,\cdots$ are being "processed\" in a streaming fashion. At time $t$,
we have "read\" the positive integers up to $t$ and maintain a very
small subset of them in $S_t$. At time $t$ we create $S_{t+1}$ from
$S_t$: we are allowed to add to $S_t$ only the integer $t+1$, and remove
some integers already in $S_t$. Our aim is to maintain a set $S_t$ which
satisfies:
1. For every positive $s \leq t$,
$[s,(s+t)/2] \cap S_t \neq \emptyset$.
2. For all $t$, $|S_t| = O(\log T)$.
3. For all $t$, $S_{t+1}\backslash S_t = \{t+1\}$.
The first property of the sets $S_t$ intuitively means that $S_t$ is
"well spread out\" in a logarithmic scale. This is depicted in
Figure [10.1](#fig-st){reference-type="ref" reference="fig-st"}. The
second property ensures computational efficiency.
::: center
![Illustration of the working set $S_t$](images/st.pdf){#fig-st
width="3in"}
:::
Indeed, the procedure "Prune\" maintains $S_t$ with these exact
properties, and is detailed after we prove Theorem
[10.7](#thm:flh2){reference-type="ref" reference="thm:flh2"}.
We proceed to prove the main theorem. We start with an analogue of Lemma
[10.4](#lem:round-reg){reference-type="ref" reference="lem:round-reg"}.
::: {#round-reg2 .proposition}
**Proposition 10.8**. *The following holds for all $i \in S_t$,*
1. *$f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{i}_t) \leq \alpha^{-1} (\log \hat{p}^{i}_{t+1} - \log \hat{p}^{i}_{t} + \log \frac{t}{t-1} )$*
2. *$f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{t}_t)\leq \alpha^{-1} (\log \hat{p}^{t}_{t+1} + \log t)$*
:::
::: proof
*Proof.* Using the $\alpha$-exp concavity of $f_t$ - $$\begin{aligned}
e^{-\alpha f_t(\ensuremath{\mathbf x}_t)} & = & e^{-\alpha f_t(\sum_{j \in S_t} p^{j}_t \ensuremath{\mathbf x}^{j}_t)} \geq \sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}
\end{aligned}$$ Taking the natural logarithm,
$$f_t(\ensuremath{\mathbf x}_t) \leq -\alpha^{-1} \log \sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)} \nonumber$$
Hence, $$\begin{aligned}
f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{i}_t) & \leq \alpha^{-1}(\log e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)} - \log \sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}) \\
& = \alpha^{-1} \log \frac{e^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}} \\
& = \alpha^{-1} \log \left( \frac{1}{p^{i}_t} \cdot
\frac{p^{i}_te^{-\alpha f_t(\ensuremath{\mathbf x}^{i}_t)}}{\sum_{j \in S_t} p^{j}_t e^{-\alpha f_t(\ensuremath{\mathbf x}^{j}_t)}}\right) \\
& = \alpha^{-1} \log \frac{\hat{p}^{i}_{t+1}}{p^{i}_t}
\end{aligned}$$ To complete the proof, we note the following two facts
that are analogous to the ones used in Lemma
[10.4](#lem:round-reg){reference-type="ref" reference="lem:round-reg"}:
1. For $1 \leq i < t$,
$\log p^{i}_t \geq \log \hat{p}^{i}_t + \log \frac{t-1}{t}$
2. $\log p^{t}_t \geq -\log t$
Proving these facts is left as an exercise. ◻
:::
Using this we can prove the following Lemma.
::: {#eff-int-reg .lemma}
**Lemma 10.9**. *Consider some time interval $I = [r,s]$. Suppose that
$E^r$ was in the working set $S_t$, for all $t \in I$. Then the regret
incurred in $I$ is at most
$\frac{1}{\alpha} \log (s) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal A})$.*
:::
::: proof
*Proof.* Consider the regret in $I$ with respect to expert $E^r$,
$$\begin{aligned}
& \sum_{t = r}^{s} (f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{r}_t)) \\
& = (f_r(\ensuremath{\mathbf x}_r) - f_r(\ensuremath{\mathbf x}^{r}_r)) + \sum_{t = r+1}^{s} (f_t(\ensuremath{\mathbf x}_t) - f_t(\ensuremath{\mathbf x}^{r}_t)) \nonumber \\
& \leq \alpha^{-1} \bigl(\log \hat{p}^{r}_{r+1} + \log r + \sum_{t = r+1}^{s} (\log \hat{p}^{r}_{t+1} - \log \hat{p}^{r}_{t} + \log \frac{t}{t-1} )\bigr) & \mbox{ Claim \ref{round-reg2}} \\
& = \alpha^{-1} (\log r + \log \hat{p}^{r}_{s+1} + \sum_{t = r+1}^{s} \log \frac{t}{t-1}) \nonumber \\
& = \alpha^{-1} (\log (s) + \log \hat{p}^{r}_{s+1} ) \nonumber
\end{aligned}$$
Since $\hat{p}^{r}_{s+1} \leq 1$, $\log \hat{p}^{r}_{s+1} \leq 0$. This
implies that the regret w.r.t. expert $E^r$ is bounded by
$\alpha^{-1} \log (s)$. Since $E^r$ has regret bounded by
$\ensuremath{\mathrm{{Regret}}}_I({\mathcal A}) \leq \ensuremath{\mathrm{{Regret}}}_T({\mathcal A})$
over $I$, the conclusion follows. ◻
:::
Given the properties of $S_t$, we can show that in any interval the
regret incurred is small.
::: {#aflh-reg .lemma}
**Lemma 10.10**. *For any interval $I$ the regret incurred by the FLH2
is at most\
$(\frac{1}{\alpha} \log(s) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal A})) (\log_2 |I|+1)$.*
:::
::: proof
*Proof.* Let $|I| \in [2^q, 2^{q+1})$, and denote for simplicity
$R_T = \frac{1}{\alpha} \log(s) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal A})$.
We prove the claim by induction on $q$.
**Base case:** For $q=0$ we have $|I|=1$, that is $I = \{r\}$. The expert
$E^r$ enters the working set at time $r$, so by
Lemma [10.9](#eff-int-reg){reference-type="ref" reference="eff-int-reg"}
the regret on $I$ is at most $R_T$.
**Induction step:** By the properties of the $S_t$'s, there is an expert
$E^i$ in the pool such that $i \in [r,(r+s)/2]$. This expert $E^i$
entered the pool at time $i$ and stayed throughout $[i,s]$. By
Lemma [10.9](#eff-int-reg){reference-type="ref"
reference="eff-int-reg"}, the algorithm incurs regret at most
$R_T = \frac{1}{\alpha} \log (s) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal A})$
in $[i,s]$.
The interval $[r,i-1]$ has size at most
$\frac{|I|}{2} \in [2^{q-1},2^q)$, and by induction the algorithm has
regret of at most $R_T \cdot q$ on this interval. This gives a total of
$R_T(q+1)$ regret on $I$. ◻
:::
We can now prove Theorem [10.7](#thm:flh2){reference-type="ref"
reference="thm:flh2"}:
::: proof
*Theorem [10.7](#thm:flh2){reference-type="ref" reference="thm:flh2"}.*
The running time of FLH2 is bounded by $|S_t| \cdot V_T({\mathcal A})$.
Since $|S_t| = O(\log t)$, we can bound the running time by
$O(V_T({\mathcal A}) \log T)$. This fact, together with
Lemma [10.10](#aflh-reg){reference-type="ref" reference="aflh-reg"},
completes the proof. ◻
:::
### The pruning method {#section:streamsoln}
We now explain the pruning procedure used to maintain the set
$S_t \subseteq \{1,2,...,t\}$.
We specify the *lifetime* of integer $i$ - if $i = r2^k$, where $r$ is
odd, then the lifetime of $i$ is $2^{k+2}+1$. Suppose the lifetime of
$i$ is $m$. Then for any time $t \in [i,i+m]$, integer $i$ is *alive* at
$t$. The set $S_t$ is simply the set of all integers that are alive at
time $t$. Obviously, at time $t$, the only integer added to $S_t$ is
$t$ - this immediately proves Property (3). We now prove the other
properties.
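A small Python sketch of this rule (for illustration only): it computes
$S_t$ directly from the lifetimes and numerically checks the first two
properties on a short horizon, with a deliberately generous constant in the
size check.

```python
def lifetime(i):
    """Lifetime of the integer i = r * 2^k with r odd: 2^(k+2) + 1."""
    k = 0
    while i % 2 == 0:
        i //= 2
        k += 1
    return 2 ** (k + 2) + 1

def working_set(t):
    """S_t: all integers 1 <= i <= t that are alive at time t,
    i.e. that satisfy t <= i + lifetime(i)."""
    return [i for i in range(1, t + 1) if t <= i + lifetime(i)]

# Numerical check of properties (1) and (2) on a short horizon.
for t in range(1, 400):
    S = working_set(t)
    assert len(S) <= 3 * (t.bit_length() + 1)        # |S_t| = O(log t)
    for s in range(1, t + 1):                         # property (1)
        assert any(s <= i <= (s + t) / 2 for i in S)
```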
::: proof
*Proof.* (Property (1)) We need to show that some integer in
$[s,(s+t)/2]$ is alive at time $t$. This is trivially true when
$t-s < 2$, since $t-1, t \in S_t$. Let $2^\ell$ be the largest power of
$2$ such that $2^\ell \leq (t-s)/2$. There is some integer
$x \in [s,(s+t)/2]$ such that $2^\ell | x$. The lifetime of $x$ is at
least $2^{\ell+2}+1$; by the maximality of $\ell$ we have
$2^{\ell+1} > (t-s)/2$, hence the lifetime exceeds $t-s$ and $x$ is alive at $t$. ◻
:::
::: proof
*Proof.* (Property (2)) For each $0 \leq k \leq \lfloor \log t \rfloor$,
let us count the number of integers of the form $r2^k$ ($r$ odd) alive
at $t$. The lifetimes of these integers are $2^{k+2}+1$. The only
integers alive lie in the interval $[t-2^{k+2}-1,t]$. Since integers
of this form are separated by gaps of size at least $2^k$,
there are at most a constant number of such integers alive at $t$. In
total, the size of $S_t$ is $O(\log t)$. ◻
:::
## Bibliographic Remarks {#bibliographic-remarks-7}
Dynamic regret bounds for online gradient descent were proposed by
@Zinkevich03, and further studied in [@besbes2015non]. It was shown in
[@zhang2018dynamic] that adaptive regret bounds imply dynamic regret
bounds.
The study of learning in changing environments can be traced to the
seminal work of @HW in the context of tracking for the problem of
prediction from expert advice. Their technique was later extended to
tracking of experts from a small pool [@BW].
The problem of tracking a large set of experts efficiently was studied
using the Fixed-Share technique in
[@singer-portfolios; @asinger-portfolios; @Gyorgy05trackingthe].
The deviation from Fixed-Share to the FLH technique and the notion of
adaptive regret were introduced in @hazan2007adaptive. These techniques
were subject of later study and extensions
[@adamskiy2016closer; @zhang2019adaptive]. @daniely2015strongly study
adaptive regret for weakly convex loss functions and introduced the term
"strongly adaptive\", which differentiates the weakly and strongly
convex settings. They note that FLH is a strongly adaptive algorithm.
The use of an exponential look-back for prediction has roots in
information theory [@WillemsK97; @ShamirM06]. Efficient methods for
streaming, which were used in this chapter to maintain a small set of
active experts, were studied in the streaming algorithms literature
[@gopalan2007estimating].
Adaptive regret algorithms were motivated by applications involving
changing environments, such as the portfolio selection problem. More
recently they were applied for time series prediction [@anava2013online]
and the control of dynamical systems [@gradu2020adaptive].
## Exercises
# Boosting and Regret {#chap:boosting}
In this chapter we consider a fundamental methodology of machine
learning: *boosting*. In the statistical learning setting, roughly
speaking, boosting refers to the process of taking a set of rough "rules
of thumb" and combining them into a more accurate predictor.
Consider for example the problem of Optical Character Recognition (OCR)
in its simplest form: given a set of bitmap images depicting
hand-written postal-code digits, distinguish those that contain the digit
"1" from those that contain the digit "0".
::: center
![Distinguishing zero versus one from a single
pixel](images/mnist.pdf){width="3.3in"}
:::
At first glance, discerning the two digits seems a difficult task, taking into
account the different styles of handwriting, inconsistent styles even
for the same person, label errors in the training data, etc. However, an
inaccurate rule of thumb is rather easy to produce: in the bottom-left
area of the picture we'd expect many more dark bits for "1"s than if the
image depicts a "0". This is, of course, a rather inaccurate classifier.
It does not consider the alignment of the digit, thickness of the
handwriting, and numerous other factors. Nevertheless, as a rule of
thumb - we'd expect better-than-random performance and some correlation
with the ground truth.
The inaccuracy of the crude single-bit predictor is compensated by its
simplicity. It is not hard to implement a classifier based upon this
rule of thumb, which is very efficient indeed. The natural and
fundamental question which now arises is: can several such rules of
thumb be combined into a single, accurate and efficient classifier?
In the rest of this chapter we shall formalize this question in the
statistical learning theory framework. We then proceed to use the
technology developed in this manuscript, namely regret minimization
algorithms for online convex optimization, to answer this question in
the affirmative. Our development will be somewhat non-standard: we'll
describe a black-box reduction from regret-minimization to boosting.
This allows any of the OCO methods previously discussed in this text to
be used as the main component of a boosting algorithm.
## The Problem of Boosting
Throughout this chapter we use the notation and definitions of chapter
[9](#chap:online2batch){reference-type="ref"
reference="chap:online2batch"} on learning theory, and focus on
statistical learnability rather than agnostic learnability. More
formally, we assume the so called "realizability assumption", which
states that for a learning problem over hypothesis class ${\mathcal H}$
there exists some $h^\star \in \mathcal{H}$ such that its generalization
error is zero, or formally $\mathop{\mbox{\rm error}}(h^\star)=0.$
Using the notations of the previous chapter, we can define the following
seemingly weaker notion than statistical learnability.
::: definition
**Definition 11.1** (Weak learnability). *The concept class
${\mathcal H}: {\mathcal X}\mapsto {\mathcal Y}$ is said to be
$\gamma$-weakly-learnable if the following holds. There exists an
algorithm ${\mathcal A}$ that accepts $S_m = \{(\mathbf{x},y)\}$ and
returns a hypothesis ${\mathcal A}(S_m) \in {\mathcal H}$ that
satisfies:\
for any $\delta > 0$ there exists $m = m(\delta)$ large enough such
that for any distribution ${\mathcal D}$ over pairs $(\mathbf{x},y)$,
for $y = h^\star(\ensuremath{\mathbf x})$, and $m$ samples from this
distribution, it holds that with probability $1 - \delta$,
$$\mathop{\mbox{\rm error}}( {\mathcal A}(S_m) ) \leq \frac{1}{2} - \gamma$$*
:::
This is an apparent weakening of the definition of statistical
learnability that we have described in chapter
[9](#chap:online2batch){reference-type="ref"
reference="chap:online2batch"}: the error is not required to approach
zero. The standard case of statistical learning in the context of
boosting is called "strong learnability". An algorithm that achieves
weak learning is referred to as a weak learner, and respectively we can
refer to a strong learner as an algorithm that attains statistical
learning, i.e., allows for generalization error arbitrarily close to
zero, for a certain concept class.
The central question of boosting can now be formalized: are weak
learning and strong learning equivalent? In other words, is there an
(efficient?) procedure that has access to a weak oracle for a concept
class, and returns a strong learner for the class?
Miraculously, the answer is affirmative, and gives rise to one of the
most effective paradigms in machine learning.
## Boosting by Online Convex Optimization
In this section we describe a *reduction* from OCO to boosting. The
template is similar to the one we have used in chapter
[9](#chap:online2batch){reference-type="ref"
reference="chap:online2batch"}: using one of the numerous algorithms for
online convex optimization we have explored in this manuscript, as well
as access to a weak learner, we create a procedure for strong learning.
### Simplification of the setting
Our derivation focuses on simplicity rather than generality. As such, we
make the following assumptions:
1. We restrict ourselves to the classical setting of binary
classification. Boosting to real-valued losses is also possible, but
outside our scope. Thus, we assume the loss function to be the
zero-one loss, that is: $$\ell(\hat{y}, y) = {
\left\{
\begin{array}{ll}
{0}, & {y = \hat{y}} \\\\
    {1}, & {o/w}
\end{array}
\right. }$$
2. We assume that the concept class is realizable, i.e., there exists
an $h^\star \in {\mathcal H}$ such that
$\mathop{\mbox{\rm error}}(h^\star) = 0$. There are results on
boosting in the agnostic learning setting, these are surveyed in the
bibliographic section.
3. We denote the distribution over examples
${\mathcal X}\times {\mathcal Y}= \{(x,y)\}$, where
$y = h^\star(\ensuremath{\mathbf x})$, as a point in
$\Delta_{{\mathcal X}}$. That is, a point
$\mathbf{p}\in \Delta_{{\mathcal X}}$ is a non-negative vector that
integrates to one over all examples. For simplicity, we think of
    ${\mathcal X},{\mathcal Y}$ as finite, and therefore
$\mathbf{p}\in \Delta_{m}$ belongs to the $m$ dimensional simplex,
i.e., is a discrete distribution over $m$ elements.
4. We henceforth denote the weak learning algorithm by ${\mathcal W}$,
and denote by ${\mathcal W}(\mathbf{p}, \delta )$ a call to the weak
learning algorithm over distribution $\mathbf{p}$ that satisfies
$$\Pr[ \mathop{\mbox{\rm error}}_{\mathbf{p}} ({\mathcal W}(\mathbf{p},\delta)) \geq \frac{1}{2}- \gamma ] \leq \delta .$$
With these assumptions and definitions we are ready to prove the main
result: a reduction from weak learning to strong learning using an
online convex optimization algorithm with a sublinear regret bound.
Essentially, our task would be to find a hypothesis which attains zero
error on a given sample.
### Algorithm and analysis
Pseudocode for the boosting algorithm is given in Algorithm
[\[alg:boost1\]](#alg:boost1){reference-type="ref"
reference="alg:boost1"}. This reduction accepts as input a $\gamma$-weak
learner and treats it as a black box, returning a function which we'll
prove is a strong learner.
The reduction also accepts as input an online convex optimization
algorithm denoted ${\mathcal A}^{OCO}$. The underlying decision set for
the OCO algorithm is the $m$-dimensional simplex, where $m$ is the
sample size. Thus, its decisions are distributions over examples. The
cost functions are linear, and assign a value of zero or one, depending
on whether the current hypothesis errs on a particular example. Hence,
the cost at a certain iteration is one minus the expected error of the current
hypothesis (chosen by the weak learner) over the distribution chosen by
the low-regret algorithm.
::: algorithm
::: algorithmic
**Input**: ${\mathcal H},\delta$, OCO algorithm ${\mathcal A}^{OCO}$,
$\gamma$-weak learning algorithm ${\mathcal W}$, sample
$S_m \sim {\mathcal D}$. Set $T$ such that
$\frac{1}{T} {\ensuremath{\mathrm{{Regret}}}_T(A^{OCO})} \leq \frac{\gamma}{2}$
Set distribution $\mathbf{p}_1 = \frac{1}{m} \mathbf{1}\in \Delta_m$ to
be the uniform distribution. Find hypothesis
$h_t \leftarrow {\mathcal W}(\mathbf{p}_t ,\frac{\delta}{2T} )$ Define
the loss function $f_t( \mathbf{p}) = \mathbf{r}_t^\top \mathbf{p}$,
where the vector $\mathbf{r}_t \in {\mathbb R}^m$ is defined as
$$\mathbf{r}_t( i) = {
\left\{
\begin{array}{ll}
{1}, & { h_t(\ensuremath{\mathbf x}_i)=y_i } \\\\
{0}, & {o/w}
\end{array}
\right. }$$ Update
$\mathbf{p}_{t+1} \leftarrow {\mathcal A}^{OCO} (f_1,...,f_t)$
$\bar{h}(\ensuremath{\mathbf x}) =\text{sign}(\sum_{t=1}^T h_t(\ensuremath{\mathbf x}))$
:::
:::
It is important to note that the final hypothesis $\bar{h}$ which the
algorithm outputs does not necessarily belong to ${\mathcal H}$ - the
initial hypothesis class we started off with.
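A minimal Python sketch of this reduction, with exponentiated gradient
(multiplicative weights) over the simplex playing the role of the OCO
algorithm. The weak-learner interface `weak_learner(X, y, p)`, the labels in
$\{-1,+1\}$, and the constants used to set $T$ and the step size are
illustrative assumptions, not part of the formal algorithm.

```python
import numpy as np

def boost_by_oco(weak_learner, X, y, gamma, T=None):
    """Sketch of the boosting reduction with an EG/multiplicative-weights
    OCO algorithm over distributions on the sample.  `weak_learner(X, y, p)`
    is assumed to return a hypothesis h: example -> {-1, +1} whose weighted
    error under p is at most 1/2 - gamma."""
    m = len(X)
    if T is None:
        # Roughly enough rounds for Regret_T / T <= gamma / 2 (constants are a guess).
        T = int(np.ceil(4.0 * np.log(max(m, 2)) / gamma ** 2))
    eta = np.sqrt(np.log(max(m, 2)) / T)       # EG step size
    p = np.full(m, 1.0 / m)                    # start from the uniform distribution
    hypotheses = []
    for _ in range(T):
        h = weak_learner(X, y, p)
        hypotheses.append(h)
        # r_t(i) = 1 if h classifies example i correctly, 0 otherwise.
        r = np.array([1.0 if h(x) == yi else 0.0 for x, yi in zip(X, y)])
        p = p * np.exp(-eta * r)               # EG update on f_t(p) = <r_t, p>
        p /= p.sum()                           # renormalize onto the simplex
    def h_bar(x):                              # majority vote of the T hypotheses
        s = sum(h(x) for h in hypotheses)
        return 1 if s >= 0 else -1
    return h_bar
```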
::: {#thm:boosting-basic .theorem}
**Theorem 11.2**. *Algorithm
[\[alg:boost1\]](#alg:boost1){reference-type="ref"
reference="alg:boost1"} returns a hypothesis $\bar{h}$ such that with
probability at least $1-\delta$,
$$\mathop{\mbox{\rm error}}_S(\bar{h}) =0 .$$*
:::
::: proof
*Proof.* Given $h\in \mathcal{H}$, we denote its empirical error on the
sample $S$, weighted by the distribution $\mathbf{p}\in \Delta_m$, by:
$$\begin{aligned}
\nonumber
\mathop{\mbox{\rm error}}_{S,\mathbf{p}}(h) = \sum_{i=1}^m \mathbf{p}(i) \cdot \mathbf{1}_{ h(\ensuremath{\mathbf x}_i) \neq y_i } .
\end{aligned}$$ Notice that by definition of $\mathbf{r}_t$ we have
$\mathbf{r}_t^\top \mathbf{p}_t = 1 - \mathop{\mbox{\rm error}}_{S , \mathbf{p}_t} (h_t)$.
Since $h_t$ is the output of a $\gamma$-weak-learner on the distribution
$\mathbf{p}_t$, we have for all $t \in [T]$, $$\begin{aligned}
\nonumber
\Pr[ \mathbf{r}_t^\top \mathbf{p}_t \leq \frac{1}{2} + \gamma ] & = \Pr[ 1 - \mathop{\mbox{\rm error}}_{S , \mathbf{p}_t} (h_t) \leq \frac{1}{2}+\gamma ] \\
& = \Pr[ \mathop{\mbox{\rm error}}_{S , \mathbf{p}_t} (h_t) \geq \frac{1}{2}- \gamma ] \\
& \leq \frac{\delta}{2T} .
\end{aligned}$$ This applies for each $t$ separately, and by the union
bound we have
$$\Pr[ \frac{1}{T} \sum_{t=1}^T \mathbf{r}_t^\top \mathbf{p}_t \geq \frac{1}{2} + \gamma ] \geq 1- \delta$$
Denote by $S_\phi \subseteq S$ the set of all examples misclassified
by $\bar{h}$, and assume for contradiction that $S_\phi$ is nonempty. Let $\mathbf{p}^*$ be the uniform distribution over $S_\phi$.
$$\begin{aligned}
\nonumber
\sum_{t=1}^T \mathbf{r}_t^\top \mathbf{p}^* & = \sum_{t=1}^T \frac{1}{|S_\phi|}\sum_{(\ensuremath{\mathbf x},y) \in S_\phi} \mathbf{1}_{h_t(\ensuremath{\mathbf x}) = y} \\
& = \frac{1}{|S_\phi|} \sum_{(\ensuremath{\mathbf x},y) \in S_\phi} \sum_{t=1}^T \mathbf{1}_{h_t(\ensuremath{\mathbf x}) = y} \\
& \leq \frac{1}{|S_\phi|} \sum_{(\ensuremath{\mathbf x},y) \in S_\phi} \frac{T}{2} & \mbox{ $\bar{h}(\ensuremath{\mathbf x}) \neq y $} \\
& =\frac{T}{2} .
\end{aligned}$$ Combining the previous two observations, we have with
probability at least $1-\delta$ that $$\begin{aligned}
\frac{1}{2} + \gamma & \leq \frac{1}{T} \sum_{t=1}^T \mathbf{r}_t^\top \mathbf{p}_t \\
& \leq \frac{1}{T} \sum_{t=1}^T \mathbf{r}_t^\top \mathbf{p}^* + \frac{1}{T} \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}^{OCO}) & \mbox { low regret of ${\mathcal A}^{OCO}$} \\
& \leq \frac{1}{2} + \frac{1}{T} \ensuremath{\mathrm{{Regret}}}_T({\mathcal A}^{OCO} ) \\
& \leq \frac{1}{2} + \frac{\gamma}{2} .
\end{aligned}$$ This is a contradiction. We conclude that $S_\phi$ must be
empty, and thus all examples in $S$ are classified
correctly. ◻
:::
### AdaBoost
A special case of the template reduction we have described is obtained
when the OCO algorithm is taken to be the Multiplicative Updates method
we have come to know in the manuscript.
Corollary [5.7](#cor:eg){reference-type="ref" reference="cor:eg"} gives
a bound of $O(\sqrt{T\log m})$ on the regret of the EG algorithm in our
context. This bounds $T$ in Algorithm
[\[alg:boost1\]](#alg:boost1){reference-type="ref"
reference="alg:boost1"} by $O(\frac{1}{\gamma^2} \log m)$.
Closely related is the AdaBoost algorithm, which is one of the most
useful and successful algorithms in Machine Learning at large (see
bibliography). Unlike the Boosting algorithm that we have analyzed,
AdaBoost doesn't have to know in advance the parameter $\gamma$ of the
weak learners. Pseudo code for the AdaBoost algorithm is given in
[\[alg:adaboost\]](#alg:adaboost){reference-type="ref"
reference="alg:adaboost"}.
::: algorithm
::: algorithmic
**Input**: ${\mathcal H},\delta$, $\gamma$-weak-learner ${\mathcal W}$,
sample $S_m \sim{\mathcal D}$. Set $\mathbf{p}_1 \in \Delta_m$ be the
uniform distribution over $S_m$. Find hypothesis
$h_t \leftarrow {\mathcal W}(\mathbf{p}_t,\frac{\delta}{T})$ Calculate
$\varepsilon_t = \mathop{\mbox{\rm error}}_{S,\mathbf{p}_t}(h_t)$,
$\alpha_t = \frac{1}{2}\log(\frac{1-\varepsilon_t}{\varepsilon_t})$
Update,
$$\mathbf{p}_{t+1}(i) = \frac{\mathbf{p}_t(i) e^{ -\alpha_t y_i h_t(\ensuremath{\mathbf x}_i)} } {\sum_{j=1}^m \mathbf{p}_t(j)e^{-\alpha_t y_j h_t(\ensuremath{\mathbf x}_j)}}$$
$\bar{h}(\ensuremath{\mathbf x}) =\text{sign}(\sum_{t=1}^T \alpha_t h_t(\ensuremath{\mathbf x}))$
:::
:::
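For concreteness, a short Python sketch of the AdaBoost pseudocode above,
under the same illustrative weak-learner interface as before and with labels
in $\{-1,+1\}$; the clipping of $\varepsilon_t$ is only a numerical
safeguard.

```python
import numpy as np

def adaboost(weak_learner, X, y, T):
    """Sketch of AdaBoost.  `weak_learner(X, y, p)` is assumed to return a
    hypothesis h: example -> {-1, +1}; labels y_i are in {-1, +1}."""
    y = np.asarray(y)
    m = len(X)
    p = np.full(m, 1.0 / m)
    hs, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, p)
        preds = np.array([h(x) for x in X])
        eps = float(p @ (preds != y))                 # weighted error of h_t
        eps = min(max(eps, 1e-12), 1.0 - 1e-12)       # numerical safeguard
        alpha = 0.5 * np.log((1.0 - eps) / eps)       # confidence weight alpha_t
        hs.append(h)
        alphas.append(alpha)
        p = p * np.exp(-alpha * y * preds)            # reweight the examples
        p /= p.sum()
    def h_bar(x):                                     # sign of the weighted vote
        s = sum(a * h(x) for a, h in zip(alphas, hs))
        return 1 if s >= 0 else -1
    return h_bar
```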
### Completing the picture
In our discussion so far we have focused only on the empirical error
over a sample. To show generalization and complete the Boosting theorem,
one must show that zero empirical error on a large enough sample implies
$\varepsilon$ generalization error on the underlying distribution.
Notice that the hypothesis returned by the Boosting algorithms does not
belong to the original concept class. This presents a challenge for
certain methods of proving generalization error bounds that are based on
measure concentration over a fixed hypothesis class.
Both issues are resolved using the implication that compression implies
generalization, as given in Theorem
[9.9](#thm:compression2generalization){reference-type="ref"
reference="thm:compression2generalization"}. We sketch the argument
below, and the precise derivation is left as an exercise.
Roughly speaking, boosting algorithm
[\[alg:boost1\]](#alg:boost1){reference-type="ref"
reference="alg:boost1"} runs on $m$ examples for
$T = O(\frac{\log m}{\gamma^2})$ rounds, returns a final hypothesis
$\bar{h}$ that is the majority vote of $T$ hypotheses, and classifies
correctly all $m$ examples of the training set.
Suppose that the weak learning algorithm has sample complexity of size
$k(\gamma,\delta)$: given $k = k(\gamma,\delta)$ examples, it returns a
hypothesis with generalization error at most $\frac{1}{2} - \gamma$ with
probability at least $1-\delta$. Further, suppose the original training
set of $m$ examples was sampled from distribution ${\mathcal D}$.
Since $\bar{h}$ classifies correctly the entire training set, it follows
that the distribution ${\mathcal D}$ has a compression scheme of size
$$Tk = O\left( \frac{k(\gamma,\frac{\delta}{T}) \log m}{\gamma^2} \right) .$$
Therefore, using Theorem
[9.9](#thm:compression2generalization){reference-type="ref"
reference="thm:compression2generalization"}, we have that,
$$\mathop{\mbox{\rm error}}_{{\mathcal D}}(\bar{h}) \leq O\left( \frac{k \log^2 \frac{m}{\delta}} {\gamma^2 m} \right) .$$
Now one can obtain an arbitrary small generalization error by choosing
$m$ as a function of $k,\delta,\gamma$. Notice that this argument makes
an assumption only about the sample complexity of the weak learning
algorithm, rather than the hypothesis class ${\mathcal H}$.
## Bibliographic Remarks {#bibliographic-remarks-8}
The theoretical question of Boosting was posed and addressed in the work
of @Schapire90 [@freund1995boosting]. The AdaBoost algorithm was
proposed in the seminal paper of @FreundSch1997. The latter paper also
contains the essential ingredients for the reduction from general
low-regret algorithms to boosting.
Boosting has had significant impact on theoretical and practical data
analysis as described by the statistician Leo Breiman [@Breiman01]. For
a much more comprehensive survey of Boosting theory and applications see
the recent book [@schapire2012boosting].
The theory for agnostic boosting is more recent, and several different
definitions and settings exist, see
[@kalai2008agnostic; @KalaiS05; @kanade2009potential; @feldman2009distribution; @bendavid2001agnostic],
the most general of which is perhaps by @kanade2009potential.
A unified framework for realizable and agnostic boosting, for both the
statistical and online settings, is given in [@brukhim2020online].
The theory of boosting has been extended to real valued learning via the
theory of gradient boosting [@friedman2002stochastic]. More recently it
was extended to online learning
[@leistner2009robustness; @chen2012online; @chen2014boosting; @beygelzimer2015optimal; @beygelzimer2015online; @agarwal2019boosting; @jung2017online; @jung2018online; @brukhim2020online2].
## Exercises
# Online Boosting {#chap:ocoboost}
This text considers online optimization and learning, so it is natural
to ask whether the technique of boosting has an
analogue in the online world. What is a "weak learner\" in online convex
optimization, and how can one strengthen it? This is the subject of this
chapter, and we shall see that boosting can be extremely powerful and
useful in the setting of online convex optimization.
## Motivation: Learning from a Huge Set of Experts
Recall the classical problem of prediction from expert advice from the
first chapter of this text. A learner iteratively makes decisions and
receives loss according to an arbitrarily chosen loss function. For its
decision making, the learner is assisted by a pool of experts. Classical
algorithms such as the Hedge algorithm
[\[alg:Hedge\]](#alg:Hedge){reference-type="ref" reference="alg:Hedge"},
guarantee a regret bound of $O(\sqrt{T \log N} )$, where $N$ is the
number of experts, and this is known to be tight.
However, in many problems of interest, the class of experts is too large
to efficiently manipulate. This is particularly evident in contextual
learning, as formally defined below, where the experts are *policies* --
functions mapping contexts to action. In such instances, even if a
regret bound of $O(\sqrt{T \log N})$ is meaningful, the algorithms
achieving this bound are computationally inefficient; their running time
is linear in $N$. This linear dependence is many times unacceptable: the
effective number of policies mapping contexts to actions is exponential
in the number of contexts.
The boosting approach to address this computational intractability is
motivated by the observation that it is often possible to design simple
*rules-of-thumb* that perform slightly better than random guesses.
Analogously to the weak learning oracles from chapter
[11](#chap:boosting){reference-type="ref" reference="chap:boosting"}, we
assume that the learner has access to an "online weak learner\" - a
computationally cheap mechanism capable of guaranteeing multiplicatively
approximate regret against a base hypothesis class.
In the rest of this chapter we describe efficient algorithms that when
provided weak learners, compete with the convex hull of the base
hypotheses class with near-optimal regret.
### Example: boosting online binary classification
As a more precise example to the motivation we just surveyed, we
formalize online boosting for binary prediction from expert advice. At
iteration $t$, each expert $h \in {\mathcal H}$ observes a
context $\mathbf{a}_t$ and predicts a binary outcome
$h(\mathbf{a}_t) \in \{-1 ,1 \}$. The loss of each expert is taken to be
the binary loss, $- h(\mathbf{a}_t) \cdot y_t$ for a true label
$y_t \in \{-1,1\}$. The Hedge algorithm from the first chapter applies
to this problem, and guarantees a regret of
$O(\sqrt{T \log |{\mathcal H}|})$ for a finite ${\mathcal H}$. However,
in the case in which ${\mathcal H}$ is extremely large, maintaining the
weights is computationally prohibitive.
A weak online learner ${\mathcal W}$ in this setting is an algorithm
which is guaranteed to attain at most a factor $\gamma$ loss from the
best expert in class, for some $\gamma \in [0,1]$, up to an additive
sublinear regret term. Formally, in terms of the losses defined above, for
any sequence of contexts and labels
$\{\mathbf{a}_t,y_t\}$,
$$\sum_{t=1}^T - y_t \cdot {\mathcal W}(\mathbf{a}_t) \le \gamma \cdot \underset{h \in {\mathcal H}}{\min} \sum_{t=1}^T - y_t \cdot h(\mathbf{a}_t) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal W}).$$
The online boosting question can now be phrased as follows: given access
to a weak online learning algorithm ${\mathcal W}$, can we design an
efficient online algorithm ${\mathcal A}$ that guarantees vanishing
regret over ${\mathcal H}$? More formally, let
$$\ensuremath{\mathrm{{Regret}}}_T({\mathcal A}) = \sum_{t=1}^T - y_t \cdot {\mathcal A}(\mathbf{a}_t) - \underset{h \in {\mathcal H}}{\min} \sum_{t=1}^T - y_t \cdot h(\mathbf{a}_t) .$$
Can we design an algorithm ${\mathcal A}$ that has
$\frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} \rightarrow 0$,
without explicit access to ${\mathcal H}$? As we will see, the answer to
this question is affirmative in a strong sense: boosting does have an
online analogue which is a powerful technique in online learning. In the
next section we describe a more powerful notion of boosting that applies
to the full generality of online convex optimization. This in turn
implies an affirmative answer to this question for online binary
classification.
### Example: personalized article placement
In the problem of matching articles to visitors of a web-page on the
Internet, a number of articles are available to be placed in a given
web-page for a particular visitor. The goal of the decision maker, in
this case the article placer, is to find the most relevant article that
will maximize the probability of a visitor click.
It is usually the case that context is available, in the form of a user
profile, preferences, surfing history, and so forth. This context is
invaluable in terms of placing the most relevant article. Thus, the
decision of the article-placer is to choose a *policy*: a mapping
from context to article.
The space of policies is significantly larger than the space of articles
and the space of contexts: its size is the number of articles raised to the
power of the number of contexts. This motivates the use of online learning
algorithms whose computational complexity is independent of the number
of experts.
The natural formulation of this problem is not binary prediction, but
rather multi-class prediction. Formulating this problem in the language
of online convex optimization is left as an exercise.
## The Contextual Learning Model
Boosting in the context of online convex optimization is most useful for
the contextual learning problem which we now describe.
Let us consider the familiar OCO setting over a general convex decision
set $\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$, and adversarially
chosen convex loss functions
$f_1,...,f_t : \ensuremath{\mathcal K}\mapsto {\mathbb R}$. Boosting is
particularly important in settings in which the number of possible
experts is so large that running any of the algorithms we have
considered thus far is infeasible. Concretely, suppose we have access to a
hypothesis class
${\mathcal H}\subseteq \{\mathbf{a}\} \mapsto \ensuremath{\mathcal K}$,
that given a sequence of contexts $\mathbf{a}_1, ...,\mathbf{a}_t$,
produces a new point
$h( \mathbf{a}_{t+1} ) \in \ensuremath{\mathcal K}$.
We have studied numerous methods capable of minimizing regret for this
setting in this text; all of them assume that we have access to the set
${\mathcal H}$, and depend on its size in some way.
To avoid this dependence, we consider an alternative access model to
${\mathcal H}$. A weak learner for the OCO setting is defined as
follows.
::: {#online_agnostic_wl .definition}
**Definition 12.1**. *An online learning algorithm ${\mathcal W}$ is a
$\gamma$-**weak OCO learner (WOCL)** for ${\mathcal H}$ and
$\gamma \in (0,1)$, if for any sequence of contexts $\{ \mathbf{a}_t \}$
and **linear** loss functions $f_1,...,f_T$, for which
$\max_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} f_t(\ensuremath{\mathbf x}) - \min_{\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}} f_t(\ensuremath{\mathbf y}) \leq 1$
, we have $$\label{eq:wl-stepwise}
\sum_{t=1}^T f_t({\mathcal W}(\mathbf{a}_t)) \le \gamma \cdot \underset{h \in {\mathcal H}}{\min} \sum_{t=1}^T f_t(h(\mathbf{a}_t)) + (1-\gamma) \sum_{t=1}^T f_t(\bar{\ensuremath{\mathbf x}}) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal W}),$$
where
$\bar{\ensuremath{\mathbf x}} = \int_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} \ensuremath{\mathbf x}$
is the center of mass of $\ensuremath{\mathcal K}$.*
:::
This definition differs in two aspects from the types of regret
minimization guarantees we have seen thus far. For one, the algorithm
competes with a $\gamma$-multiple of the best comparator in hindsight,
and is "weak\" in this precise manner.
Secondly, a multiplicative guarantee is not invariant under a constant
shift. This is the reason for the existence of an additional component,
$\sum_t f_t(\bar{\ensuremath{\mathbf x}})$, in the regret bound. This
can be thought of as the cost of a random, or naive predictor. A weak
learner must, at the very least, perform better than this naive and
non-anticipating predictor!
It is convenient to henceforth assume that the loss functions are
shifted such that $f_t(\bar{\ensuremath{\mathbf x}}) = 0$. Under this
assumption, we can rephrase $\gamma$-WOCL as $$\label{eqn:simpleWOLL}
\sum_{t=1}^T f_t({\mathcal W}(\mathbf{a}_t)) \le \gamma \cdot \underset{h \in {\mathcal H}}{\min} \sum_{t=1}^T f_t(h(\mathbf{a}_t)) + \ensuremath{\mathrm{{Regret}}}_{T}({\mathcal W}).$$
## The Extension Operator
The main difficulty is coping with the approximate guarantee that the
WOCL provides. Therefore the algorithm we describe henceforth scales the
predictions returned by the weak learner by a factor of
$\frac{1}{\gamma}$. This means that the scaled decisions do not belong
to the original decision set anymore, and need to be projected back.
Here lies the main challenge. First, we assume that the loss functions
$f \in {\mathcal F}$ are defined over all of ${\mathbb R}^d$ to enable
valid decisions outside of $\ensuremath{\mathcal K}$. Next, we need to
be able to project onto $\ensuremath{\mathcal K}$ without increasing the
cost. It can be seen that some natural families of functions, e.g.,
linear functions, do not admit any such projection. To remedy this
situation, we define the extension operator of a function over a convex
domain $\ensuremath{\mathcal K}$ as follows.
First, denote the Euclidean distance function to a set
$\ensuremath{\mathcal K}$ as (see also section
[13.2](#sec:approach-dist){reference-type="ref"
reference="sec:approach-dist"}),
$${\bf Dist}(\cdot, \ensuremath{\mathcal K}) \ , \ {\bf Dist}(\ensuremath{\mathbf x},\ensuremath{\mathcal K}) = \min_{\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}} \|\ensuremath{\mathbf y}-\ensuremath{\mathbf x}\| .$$
::: {#defn:extension .definition}
**Definition 12.2**
($(\ensuremath{\mathcal K},\kappa,\delta)$-extension). *The extension
operator over $\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$ is
defined as:
$$X_{\ensuremath{\mathcal K},\kappa,\delta}[f] : {\mathbb R}^d \mapsto {\mathbb R}\ \ , \ \ X[f ] = S_\delta[ f + \kappa \cdot {\bf Dist}(\cdot,\ensuremath{\mathcal K}) ] ,$$
where the smoothing operator $S_\delta$ was defined as per Lemma
[2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}.*
:::
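To make the definition concrete, the following Python sketch builds the
extension under the assumption that the smoothing operator $S_\delta$ of
Lemma [2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"} averages the function over a ball of radius
$\delta$ (approximated here by sampling); the projection oracle
`project_onto_K` and the gradient bound `G` are assumed primitives.

```python
import numpy as np

def extension(f, project_onto_K, G, delta, num_samples=100, seed=0):
    """Sketch of the (K, kappa, delta)-extension with kappa = G.
    The smoothing S_delta is approximated by Monte-Carlo averaging over
    perturbations drawn uniformly from the ball of radius delta."""
    rng = np.random.default_rng(seed)

    def dist_to_K(x):                       # Euclidean distance to K
        return np.linalg.norm(x - project_onto_K(x))

    def g(x):                               # f plus the distance penalty
        return f(x) + G * dist_to_K(x)

    def f_hat(x):
        d = len(x)
        total = 0.0
        for _ in range(num_samples):
            v = rng.normal(size=d)
            v /= np.linalg.norm(v)          # uniform direction on the sphere
            v *= rng.random() ** (1.0 / d)  # uniform radius within the unit ball
            total += g(x + delta * v)
        return total / num_samples

    return f_hat
```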
The important take-away from these operators is the following lemma,
whose importance is crucial in the OCO boosting algorithm
[\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"}, as it projects infeasible points that are
obtained from the weak learners to the feasible domain.
::: {#lem:extensionX .lemma}
**Lemma 12.3**. *The $(\ensuremath{\mathcal K},\kappa,\delta)$-extension
of a function ${\ensuremath{\hat{f}}}= X[f]$ satisfies the following:*
1. *For every point
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$, we have
$\| {\ensuremath{\hat{f}}}(\ensuremath{\mathbf x}) - f(\ensuremath{\mathbf x}) \|_2 \leq \delta G$.*
2. *For $\kappa = G$, where $G$ is a bound on the gradient norm of $f$,
    projecting a point onto $\ensuremath{\mathcal K}$ does not increase the
    value of the $(\ensuremath{\mathcal K},\kappa,\delta)$-extension function
    by more than a small term,
$${\ensuremath{\hat{f}}}\left( \prod_\ensuremath{\mathcal K}(\ensuremath{\mathbf x})\right) \leq {\ensuremath{\hat{f}}}(\ensuremath{\mathbf x}) + \delta G .$$*
:::
::: proof
*Proof.*
1. Since
${\bf Dist}(\ensuremath{\mathbf x},\ensuremath{\mathcal K}) = 0$ for
all $x \in \ensuremath{\mathcal K}$, this follows immediately from
Lemma [2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}.
2. Denote
$\ensuremath{\mathbf x}_\pi = \prod_\ensuremath{\mathcal K}(\ensuremath{\mathbf x})$
for brevity. Then $$\begin{aligned}
& {\ensuremath{\hat{f}}}(\ensuremath{\mathbf x}_\pi) - {\ensuremath{\hat{f}}}(\ensuremath{\mathbf x}) \\
& \leq f(\ensuremath{\mathbf x}_\pi) - f(\ensuremath{\mathbf x}) - \kappa {\bf Dist}(\ensuremath{\mathbf x},\ensuremath{\mathcal K}) + \delta G & \mbox{part 1} \\
& \leq f(\ensuremath{\mathbf x}_\pi) - f(\ensuremath{\mathbf x}) - \kappa \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| + \delta G \\
    & \leq \nabla f(\ensuremath{\mathbf x}_\pi) (\ensuremath{\mathbf x}_\pi - \ensuremath{\mathbf x}) - \kappa \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| + \delta G & \mbox{convexity of $f$} \\
    & \leq \| \nabla f(\ensuremath{\mathbf x}_\pi) \| \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| - \kappa \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| + \delta G & \mbox{Cauchy-Schwarz} \\
& \leq G \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| - \kappa \| \ensuremath{\mathbf x}- \ensuremath{\mathbf x}_\pi\| + \delta G \\
& \leq \delta G & \mbox{choice of $\kappa$} .
\end{aligned}$$
◻
:::
## The Online Boosting Method
The online boosting algorithm we describe in this section is closely
related to the online Frank-Wolfe algorithm from chapter
[7](#chap:FW){reference-type="ref" reference="chap:FW"}. Not only does
it boost a WOCL to strong learning, but it also gives an even
stronger guarantee: low regret over the convex hull of the hypothesis
class.
Algorithm [\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"} efficiently converts a weak online learning
algorithm into an OCO algorithm with vanishing regret in a black-box
manner. The idea is to apply the weak learning algorithm on linear
functions that are gradients of the loss. The algorithm then recursively
applies another weak learner on the gradients of the residual loss, and
so forth.
::: algorithm
::: algorithmic
Input: $N$ copies of the $\gamma$-WOCL
${\mathcal W}^1, {\mathcal W}^2, \ldots, {\mathcal W}^N$, parameters
$\eta_1,...,\eta_T$,$\delta,\kappa=G$. Receive context $\mathbf{a}_t$,
choose $\ensuremath{\mathbf x}_t^0 = \mathbf{0}$ arbitrarily. Define
$\ensuremath{\mathbf x}_t^i = (1 - \eta_i) \ensuremath{\mathbf x}_t^{i-1} + \eta_i \frac{1}{\gamma} {\mathcal W}^i(\mathbf{a}_t)$.
Predict
$\ensuremath{\mathbf x}_t = \prod_{\ensuremath{\mathcal K}}[ \ensuremath{\mathbf x}_t^N ]$,
suffer loss $f_t(\ensuremath{\mathbf x}_t)$. Obtain loss function $f_t$,
create
${\ensuremath{\hat{f}}}_t = X_{\ensuremath{\mathcal K},\kappa,\delta}[f_t]$.
Define and pass to ${\mathcal W}^i$ the linear loss function $f_t^i$,
$$f_t^i(\ensuremath{\mathbf x}) = \nabla {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1}) \cdot \ensuremath{\mathbf x}.$$
:::
:::
However, the Frank-Wolfe method is not applied directly to the loss
functions, but rather to a proxy loss which is defined using the extension
operation in [12.2](#defn:extension){reference-type="ref"
reference="defn:extension"}. Importantly, algorithm
[\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"} has a running time that is independent of
$|{\mathcal H}|$.
Notice that if $\gamma=1$, the algorithm still gives a significant
advantage as compared to the weak learner: the regret guarantee is with
respect to the convex hull of ${\mathcal H}$, rather than only the best single
hypothesis.
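Before turning to the analysis, a compact Python sketch of Algorithm
[\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"} may be helpful. The weak learners are assumed to
expose `predict(context)` and `update(gradient)` methods (an interface chosen
purely for illustration), and the projection onto $\ensuremath{\mathcal K}$
and the gradient of the extended loss ${\ensuremath{\hat{f}}}_t$ are supplied
from outside.

```python
import numpy as np

class OnlineBoostingOCO:
    """Sketch of the OCO boosting template.  `weak_learners` is a list of N
    copies of a gamma-WOCL, each assumed to expose .predict(context) and
    .update(linear_loss_gradient)."""

    def __init__(self, weak_learners, gamma, project_onto_K):
        self.W = weak_learners
        self.gamma = gamma
        self.project = project_onto_K
        self._xs = None                       # stores x_t^0, ..., x_t^{N-1}

    def predict(self, a_t):
        x = None
        self._xs = []
        for i, W in enumerate(self.W, start=1):
            w_pred = np.asarray(W.predict(a_t), dtype=float)
            if x is None:
                x = np.zeros_like(w_pred)     # x_t^0 = 0
            self._xs.append(x.copy())
            eta = min(2.0 / i, 1.0)
            # Frank-Wolfe style step toward the (1/gamma)-scaled weak prediction.
            x = (1.0 - eta) * x + eta * w_pred / self.gamma
        return self.project(x)                # x_t = projection of x_t^N onto K

    def update(self, grad_f_hat):
        # Pass to W^i the linear loss f_t^i(x) = <grad f-hat_t(x_t^{i-1}), x>,
        # represented by its gradient vector.
        for i, W in enumerate(self.W, start=1):
            W.update(grad_f_hat(self._xs[i - 1]))
```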
::: {#thm:oco-boost-convex .theorem}
**Theorem 12.4** (Main). *The predictions $\ensuremath{\mathbf x}_t$
generated by
Algorithm [\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"} with
$\delta = \sqrt{ \frac{ D^2}{\gamma N} }, \eta_i = \min \{\frac{2}{i}, 1\}$
satisfy $$\begin{aligned}
\sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t)\ - \min_{h^\star \in \mathbf{CH}({\mathcal H})} \sum_{t=1}^T f_t( h^\star(\mathbf{a}_t) ) \leq \frac{ 5 d G D T}{\gamma\sqrt{ N}} + \frac{2GD}{\gamma } \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) .
\end{aligned}$$*
:::
##### Remark 1:
It is possible to obtain tighter bounds by a factor of the dimension,
and other constant terms, using a more sophisticated smoothing operator.
References for these tighter results are given in the bibliographic
section at the end of this chapter.
##### Remark 2:
The regret bound of Theorem
[12.4](#thm:oco-boost-convex){reference-type="ref"
reference="thm:oco-boost-convex"} is nearly as good as we could hope
for. The first term approaches zero as the number of weak learners $N$
grows. The second term is sublinear as the regret of the weak learner.
It is scaled by a factor of $\frac{1}{\gamma}$, which we can expect due
to the approximate guarantee of the weak learner.
Before proving the theorem, let us define some notation. The
algorithm defines the extension of the loss functions as
$${\ensuremath{\hat{f}}}_t = X[f_t] = S_\delta [ f_t + G\cdot {\bf Dist}(\ensuremath{\mathbf x},\ensuremath{\mathcal K}) ] .$$
We apply the setting of $\kappa=G$, as required by Lemma
[12.3](#lem:extensionX){reference-type="ref"
reference="lem:extensionX"}, and by Lemma
[2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}, ${\ensuremath{\hat{f}}}_t$ is
$\frac{dG }{\delta}$-smooth. Also, denote by
$\mathbf{CH}({\mathcal H}) = \{ \sum_{h \in {\mathcal H}} \mathbf{p}_h h | \mathbf{p}\in \Delta_{\mathcal H}\}$
the convex hull of the set ${\mathcal H}$, and let
$$h^\star = \mathop{\mathrm{\arg\min}}_{h^\star \in \mathbf{CH}({\mathcal H})}\sum_{t=1}^T f_t(h^\star(\mathbf{a}_t))$$
be the best hypothesis in the convex hull of ${\mathcal H}$ in
hindsight, i.e., the best convex combination of hypothesis from
${\mathcal H}$. Notice that since the loss functions are generally
convex and non-linear, this convex combination is not necessarily a
singleton. We define
$\ensuremath{\mathbf x}_t^\star = h^\star(\mathbf{a}_t)$ as the
decisions of this hypothesis.
The main crux of the proof is given by the following lemma.
::: {#lem:main-analysis .lemma}
**Lemma 12.5**. *For smoothed loss functions
$\{ {\ensuremath{\hat{f}}}_t \}$ that are ${\beta}$-smooth and $\hat{G}$
Lipschitz, it holds that $$\begin{aligned}
\sum_{t=1}^T {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^N)\ - \sum_{t=1}^T {\ensuremath{\hat{f}}}_t( \ensuremath{\mathbf x}^\star_t ) \leq \frac{2 {\beta} D^2 T}{\gamma^2 N} + \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) .
\end{aligned}$$*
:::
::: proof
*Proof.* Define for all $i = 0, 1, 2, \ldots, N$,
$$\Delta_i = \sum_{t=1}^T \left({\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^i) - {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}^\star_t)\right) .$$
Recall that ${\ensuremath{\hat{f}}}_t$ is ${\beta}$ smooth by our
assumption. Therefore: $$\begin{aligned}
& \Delta_i = \sum_{t=1}^T \left[ {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1} + \eta_i ( \frac{1}{\gamma} {\mathcal W}^i(\mathbf{a}_t) - \ensuremath{\mathbf x}_t^{i-1})) - {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}^\star_t) \right]\\
\leq & \sum_{t=1}^T \Bigl[ {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1}) - {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}^\star_t) + \eta_i \nabla {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1}) \cdot ( \frac{1}{\gamma} {\mathcal W}^i(\mathbf{a}_t) - \ensuremath{\mathbf x}_t^{i-1}) \\
& + \frac{\eta_i^2{\beta}}{2} \| \frac{1}{\gamma} {\mathcal W}^i(\mathbf{a}_t) - \ensuremath{\mathbf x}_t^{i-1} \|^2 \Bigr] .
\end{aligned}$$ By using the definition and linearity of $f_t^i$, we
have $$\begin{aligned}
\Delta_i \leq& \sum_{t=1}^T \left[{\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1}) - {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}^\star_t) +\eta_i ( f_t^i( \frac{1}{\gamma} {\mathcal W}^i(\mathbf{a}_t)) - f_t^i(\ensuremath{\mathbf x}_t^{i-1})) + \frac{\eta_i^2{\beta} D^2}{2 \gamma^2} \right] \\
=& \Delta_{i-1} + \sum_{t=1}^T \eta_i ( \frac{1}{\gamma} f_t^i( {\mathcal W}^i(\mathbf{a}_t)) - f_t^i(\ensuremath{\mathbf x}_t^{i-1})) + \sum_{t=1}^T\frac{\eta_i^2{\beta} D^2}{2 \gamma^2} .
\end{aligned}$$ Now, note the following equivalent restatement of the
WOCL guarantee, which again utilizes linearity of $f_t^i$ to conclude:
linear loss on a convex combination of a set is equal to the same convex
combination of the linear loss applied to individual elements.
$$\begin{aligned}
\frac{1}{\gamma} \sum_{t=1}^T f_t^i ({\mathcal W}^i(\mathbf{a}_t)) \leq &\min_{h^\star \in {\mathcal H}} \sum_{t=1}^T f_t^i (h^\star (\mathbf{a}_t)) + \frac{\hat{G}D \ensuremath{\mathrm{{Regret}}}_T({\mathcal W})}{\gamma}\\
=& \min_{h^\star \in \mathbf{CH}({\mathcal H})} \sum_{t=1}^T f_t^i (h^\star(\mathbf{a}_t)) + \frac{\hat{G}D \ensuremath{\mathrm{{Regret}}}_T({\mathcal W})}{\gamma} .
\end{aligned}$$ Using the above and that
$h^\star\in \mathbf{CH}({\mathcal H})$, we have $$\begin{aligned}
& \Delta_i \\
& \leq \Delta_{i-1} + \sum_{t=1}^T [ \eta_i \nabla {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^{i-1}) \cdot (\ensuremath{\mathbf x}_t^\star - \ensuremath{\mathbf x}_t^{i-1}) + \frac{\eta_i^2{\beta} D^2}{2 \gamma^2 } ] + \eta_i \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) \\
& \leq \Delta_{i-1} (1 - \eta_i ) + \frac{\eta_i^2{\beta} D^2 T }{2 \gamma^2 } + \eta_i {R_T} .
\end{aligned}$$ where the last inequality uses the convexity of
$\hat{f}_t$ and we denote
$R_T = \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W})$.
We thus have the recurrence
$$\Delta_i \leq \Delta_{i-1} (1 - \eta_i) + \eta_i^2 \frac{{\beta} D^2 T }{2 \gamma^2 } + \eta_i {R_T} .$$
Denoting $\hat{\Delta}_i = \Delta_i - {R_T}$, we are left with
$$\hat{\Delta}_i \leq \hat{\Delta}_{i-1} (1 - \eta_i) + \eta_i^2 \frac{{\beta} D^2 T }{2 \gamma^2 } .$$
This is a recursive relation that can be simplified by applying Lemma
[7.2](#lemma:FW-recursion){reference-type="ref"
reference="lemma:FW-recursion"} from chapter
[7](#chap:FW){reference-type="ref" reference="chap:FW"}. We obtain that
$\hat{\Delta}_N \leq \frac{2 {\beta} D^2 T}{\gamma^2 N}$. ◻
:::
We are ready to prove the main guarantee of Algorithm
[\[alg:ocoboost\]](#alg:ocoboost){reference-type="ref"
reference="alg:ocoboost"}.
::: proof
*Proof of Theorem [12.4](#thm:oco-boost-convex){reference-type="ref"
reference="thm:oco-boost-convex"}.* Using both parts of Lemma
[12.3](#lem:extensionX){reference-type="ref" reference="lem:extensionX"}
in succession, we have $$\begin{aligned}
\sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t)\ - \sum_{t=1}^T f_t( \ensuremath{\mathbf x}^\star_t )
& \leq \sum_{t=1}^T {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t)\ - \sum_{t=1}^T {\ensuremath{\hat{f}}}_t( \ensuremath{\mathbf x}^\star_t ) + 2 \delta G T \\
& \leq \sum_{t=1}^T {\ensuremath{\hat{f}}}_t(\ensuremath{\mathbf x}_t^N)\ - \sum_{t=1}^T {\ensuremath{\hat{f}}}_t( \ensuremath{\mathbf x}^\star_t ) + 3 \delta G T. \label{eqn:shalom4}
\end{aligned}$$ Next, recall by Lemma
[2.8](#lem:SmoothingLemma){reference-type="ref"
reference="lem:SmoothingLemma"}, that ${\ensuremath{\hat{f}}}_t$ is
$\frac{d G}{\delta}$-smooth. By applying Lemma
[12.5](#lem:main-analysis){reference-type="ref"
reference="lem:main-analysis"}, and optimizing $\delta$, we have
$$\begin{aligned}
\sum_{t=1}^T f_t(\ensuremath{\mathbf x}_t)\ - \sum_{t=1}^T f_t( \ensuremath{\mathbf x}^\star_t )
& \leq 3 \delta G T + \frac{2 d G D^2 T}{\delta \gamma^2 N} + \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) \\
& = \frac{ 5 \sqrt{d} G D T}{\gamma \sqrt{N}} + \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) \\
& \leq \frac{ 5 d G D T}{\gamma \sqrt{N}} + \frac{\hat{G}D}{\gamma} \ensuremath{\mathrm{{Regret}}}_T({\mathcal W}) ,
\end{aligned}$$ where the last inequality is only to obtain a nicer
expression.
It remains to bound $\hat{G}$, and we claim that $\hat{G} \leq 2G$. To
see this, notice that the function
${\bf Dist}(\ensuremath{\mathbf x},\ensuremath{\mathcal K})$ is
$1$-Lipschitz, since $$\begin{aligned}
& {\bf Dist}(\ensuremath{\mathbf x}, \ensuremath{\mathcal K}) - {\bf Dist}(\ensuremath{\mathbf y}, \ensuremath{\mathcal K}) \\
& = \|\ensuremath{\mathbf x}-\Pi_\ensuremath{\mathcal K}(\ensuremath{\mathbf x})\| - \|\ensuremath{\mathbf y}-\Pi_\ensuremath{\mathcal K}(\ensuremath{\mathbf y})\| \\
& \leq \|\ensuremath{\mathbf x}-\Pi_\ensuremath{\mathcal K}(\ensuremath{\mathbf y})\| - \|\ensuremath{\mathbf y}-\Pi_\ensuremath{\mathcal K}(\ensuremath{\mathbf y})\| & \mbox{ $\Pi_\ensuremath{\mathcal K}(\ensuremath{\mathbf y})\in\ensuremath{\mathcal K}$}\\
& \leq \|\ensuremath{\mathbf x}-\ensuremath{\mathbf y}\| . & \mbox{ $\Delta$-inequality}
\end{aligned}$$ Thus, by the definition of the extension operator and
the functions $f_t^i$, we have that
$\|\nabla f_t^i(\ensuremath{\mathbf x}_t^i)\|=\|\nabla \hat{f}_t(\ensuremath{\mathbf x}_t^i)\| \leq 2G$. ◻
:::
## Bibliographic Remarks {#bibliographic-remarks-9}
The theory of boosting, which we have surveyed in chapter
[11](#chap:boosting){reference-type="ref" reference="chap:boosting"},
originally applied to binary classification problems. Boosting for
real-valued regression was studied in the theory of gradient boosting by
@friedman2002stochastic.
Online boosting, for both the classification and regression settings,
was
studied much later
[@leistner2009robustness; @chen2012online; @chen2014boosting; @beygelzimer2015optimal; @beygelzimer2015online; @agarwal2019boosting; @jung2017online; @jung2018online; @brukhim2020online2].
The relationship to the Frank-Wolfe method was explicit in these works,
and also studied in [@10.1214/16-AOS1505; @wang2015functional]. A
framework which encapsulates both agnostic and realizable boosting, for
both offline and online settings, is given in [@brukhim2020online].
Boosting for the full online convex optimization setting, with a
multiplicative approximation and general convex decision set, was
obtained in [@hazan2021boosting]. The latter work also gives bounds that
are tighter by a factor of the dimension than those presented in this
text, using a more sophisticated smoothing technique known as
Moreau-Yosida regularization [@beck2017first].
The contextual experts and bandits problems have been proposed by
@langford2008epoch as a decision-making framework with a large number of
policies. In the online setting, several works study the problem with
emphasis on efficient algorithms given access to an optimization oracle
[@rakhlin2016bistro; @syrgkanis2016improved; @syrgkanis2016efficient].
For surveys on contextual bandit algorithms and applications of this
model see [@zhou2015survey; @bouneffouf2019survey].
## Exercises
# Blackwell Approachability and Online Convex Optimization {#chap:approach}
The history of adversarial prediction started with the seminal works of
mathematicians David Blackwell and James Hannan. In most of the text
thus far, we have presented the viewpoint of sequential prediction and
loss minimization, taken by Hannan. This was especially true in chapter
[5](#chap:regularization){reference-type="ref"
reference="chap:regularization"}, as the FPL algorithm dates back to his
work. In this chapter we turn to a dual view of regret minimization,
called "Blackwell approachability\". Approachability theory originated
in the work of Blackwell, and was developed concurrently with that of
Hannan. A short historical account is surveyed in the bibliographic
materials at the end of this chapter.
For decades the relationship between regret minimization in general
convex games and Blackwell approachability was not fully understood. The
common thought was, in fact, that Blackwell approachability is a
stronger notion. In this chapter we show that approachability and online
convex optimization are equivalent in a strong sense: an algorithm for
one task implies an algorithm for the other with no loss of
computational efficiency.
As a side benefit to this equivalence, we deduce a proof of Blackwell's
approachability theorem using the existence of online convex
optimization algorithms. This proof applies to a more general version of
approachability, over general vector games, and comes with rates of
convergence that are borrowed from the OCO algorithms we have already
studied.
While previous chapters had a practical motivation and introduced
methods for online learning, this chapter is purely theoretical, and
devoted to giving an alternative viewpoint of online convex optimization
from a game-theoretic perspective.
## Vector-Valued Games and Approachability
Von Neumann's minimax theorem, that we have studied in chapter
[8](#chap:games){reference-type="ref" reference="chap:games"},
establishes a central result in the theory of two-player zero-sum games
by providing a prescription to both players. This prescription is in the
form of a pair of optimal mixed strategies: each strategy attains the
optimal worst-case value of the game without knowledge of the opponent's
strategy. However, the theorem fundamentally requires that both players
have a utility function that can be expressed as a *scalar*.
In 1956, in response to von Neumann's result, David Blackwell posed an
intriguing question: what guarantee can we hope to achieve when playing
a two-player game with a *vector-valued payoff*?
A vector-valued game is defined similarly to zero-sum games as we have
defined in Definition [8.2](#defn:zsg){reference-type="ref"
reference="defn:zsg"}, with reward/loss vectors replacing the scalar
rewards/losses.
::: {#defn:vectorgame .definition}
**Definition 13.1**. *A two-player vector game is given by a set of
$n \times m$ vectors
$\{ \ensuremath{\mathbf u}(i,j) \in {\mathbb R}^d \}$. The reward vector
for the row player playing strategy $i \in [n]$, and column player
playing strategy $j \in [m]$, is given by the vector
$\ensuremath{\mathbf u}(i,j) \in {\mathbb R}^d$.*
:::
Similar to scalar games, we can define mixed strategies as distributions
over pure strategies, and denote the expected reward vector for playing
mixed strategies by
$$\forall \ensuremath{\mathbf x}\in \Delta_n , \ensuremath{\mathbf y}\in \Delta_m \ . \ \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) = \mathop{\mbox{\bf E}}_{i \sim \ensuremath{\mathbf x}, j \sim \ensuremath{\mathbf y}} \left[ \ensuremath{\mathbf u}(i,j) \right] .$$
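As a small illustrative sketch (with a made-up payoff tensor), the
expected reward vector of a pair of mixed strategies is simply the
bilinear average of the pure-strategy reward vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# u[i, j] is the reward vector (in R^d) when the row player plays i and the
# column player plays j; here n = 3, m = 2, d = 2 with made-up values.
u = rng.standard_normal((3, 2, 2))

x = np.array([0.5, 0.3, 0.2])   # mixed strategy of the row player (over [n])
y = np.array([0.6, 0.4])        # mixed strategy of the column player (over [m])

# Expected reward vector: u(x, y) = sum_{i, j} x_i * y_j * u(i, j).
u_xy = np.einsum('i,j,ijd->d', x, y, u)
print(u_xy)
```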
We henceforth consider more general vector games than originally
considered in the literature. The additional generality allows for
uncountably many strategies for both players, and allows the strategies
to originate from bounded convex and closed sets in Euclidean space.
::: {#defn:generalizedvectorgame .definition}
**Definition 13.2**. *A generalized two-player vector game is given by a
set of vectors $\{ \ensuremath{\mathbf u}\in {\mathbb R}^d \}$, and two
bounded convex and closed decision sets
$\ensuremath{\mathcal K}_1,\ensuremath{\mathcal K}_2$. The reward vector
for the row player playing strategy
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_1$, and column player
playing strategy $\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2$,
is given by the vector
$\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) \in {\mathbb R}^d$.*
:::
The goal of a zero-sum game is clear: to guarantee a certain
loss/reward. What should be the vector game generalization? Blackwell
proposed to ask "can we guarantee that our vector payoff lies in some
closed convex set $S$?"
It is left as an exercise at the end of this chapter to show that an
immediate analogue of Von Neumann's theorem does not exist: there is no
single mixed strategy that ensures the vector payoff lies in a given
set. However, this does not rule out an asymptotic notion, if we allow
the game to repeat indefinitely, and ask whether there exists a strategy
to ensure that the *average* reward vector lies in a certain set, or at
least approaches it in terms of Euclidean distance. This is exactly the
solution concept that Blackwell proposed as defined formally below.
Using the notation we have used throughout this text, we denote the
(Euclidean) distance to a bounded, closed and convex set $S$ as
$${\bf Dist}(\mathbf{w},S) = \min_{\ensuremath{\mathbf x}\in S} \|\mathbf{w}-\ensuremath{\mathbf x}\| .$$
::: {#def:approach .definition}
**Definition 13.3**. *Given a generalized vector game
$\ensuremath{\mathcal K}_1,\ensuremath{\mathcal K}_2, \{\ensuremath{\mathbf u}(\cdot,\cdot)\}$,
we say that a set $S \subseteq {\mathbb R}^d$ is **approachable** if
there exists some algorithm ${\mathcal A}$, called an **approachability
algorithm**, which iteratively selects points
$\ensuremath{\mathcal K}_1 \ni \ensuremath{\mathbf x}_{t} \leftarrow {\mathcal A}(\ensuremath{\mathbf y}_{1}, \ensuremath{\mathbf y}_{2}, \ldots, \ensuremath{\mathbf y}_{t-1})$,
such that, for any sequence\
$\ensuremath{\mathbf y}_1, \ensuremath{\mathbf y}_2, \ldots , \ensuremath{\mathbf y}_T \in \ensuremath{\mathcal K}_2$,
we have
$${\bf Dist}\textstyle{\left(\frac 1 T \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t, \ensuremath{\mathbf y}_t), S \right) }\to 0
\quad \text{ as } \quad
T \to \infty .$$*
:::
Under this notion, we can now allow the player to implement an adaptive
strategy for a repeated version of the game, and we require that the
average reward vector comes arbitrarily close to $S$. Blackwell's
theorem characterizes which sets in Euclidean space are approachable. We
give it below in generalized form.
::: {#thm:blackwell .theorem}
**Theorem 13.4** (Blackwell's Approachability Theorem). *For any vector
game
$\ensuremath{\mathcal K}_1,\ensuremath{\mathcal K}_2,\{\ensuremath{\mathbf u}(\cdot,\cdot)\}$,
the closed, bounded and convex set $S \subseteq {\mathbb R}^d$ is
approachable if and only if the following condition holds:
$$\forall \ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2 \ , \ \exists \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_1 \ , \mbox{s.t. } \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) \in S .$$*
:::
The approachability condition spelled out in the equation above is both
necessary and sufficient. The necessity of this condition is left as an
exercise, and the more interesting implication is that any set that
satisfies this condition is, in fact, approachable. Our reductions
henceforth give an explicit proof of Blackwell's theorem, and we leave
it as an exercise to draw the explicit conclusion of this theorem from
the first efficient reduction.
The relationship between Blackwell approachability in vector games and
OCO may not be evident at this point. However, we proceed to show that
the two notions are in fact algorithmically equivalent.
In the next section we show that any algorithm for OCO can be
efficiently converted to an approachability algorithm for vector games.
Following this, we show the other direction as well: an approachability
algorithm for vector games gives an OCO algorithm with no loss of
efficiency!
## From Online Convex Optimization to Approachability {#sec:approach-dist}
In this section we give an efficient reduction from OCO to
approachability. Namely, assume that we have an OCO algorithm denoted
${\mathcal A}$ that attains sublinear regret. Our goal is to design a
Blackwell approachability algorithm for a given vector game and closed,
bounded convex set $S$. Thus, the reduction in this section shows that
OCO is a stronger notion than approachability. This direction is perhaps
the more surprising one, and was discovered more recently; see the
bibliographic section for a historical account of this development.
Since we are looking to approach a given set, it is natural to consider
minimizing the distance of our reward vector to the set. Recall we
denote the (Euclidean) distance to a set as
${\bf Dist}(\mathbf{w},S) = \min_{\ensuremath{\mathbf x}\in S} \|\mathbf{w}-\ensuremath{\mathbf x}\|$.
The support function of a closed convex set $S$ is given by
$$h_S(\mathbf{w}) = \max_{\ensuremath{\mathbf x}\in S} \{ \mathbf{w}^\top \ensuremath{\mathbf x}\}.$$
Notice that this function is convex, since it is a maximum over linear
functions.
::: {#lem:dist-equivalence .lemma}
**Lemma 13.5**. *The distance to a set can be written as
$${\bf Dist}(\ensuremath{\mathbf u},S) = \max_{\|\mathbf{w}\|\leq 1} \left\{ \mathbf{w}^\top \ensuremath{\mathbf u}- h_S(\mathbf{w}) \right\} .$$*
:::
::: proof
*Proof.* Using the definition of the support function, $$\begin{aligned}
& \max_{\|\mathbf{w}\|\leq 1} \left\{ \mathbf{w}^\top \ensuremath{\mathbf u}- h_S(\mathbf{w}) \right\} \\
& =
\max_{\|\mathbf{w}\|\leq 1} \left\{ \mathbf{w}^\top \ensuremath{\mathbf u}- \max_{\ensuremath{\mathbf x}\in S} \mathbf{w}^\top \ensuremath{\mathbf x}\right\} \\
& = \max_{\|\mathbf{w}\|\leq 1} \min_{\ensuremath{\mathbf x}\in S} \left\{ \mathbf{w}^\top \ensuremath{\mathbf u}- \mathbf{w}^\top \ensuremath{\mathbf x}\right\} & \mbox{negation}\\
& = \min_{\ensuremath{\mathbf x}\in S} \max_{\|\mathbf{w}\|\leq 1} \left\{ \mathbf{w}^\top \ensuremath{\mathbf u}- \mathbf{w}^\top \ensuremath{\mathbf x}\right\} & \mbox{minimax theorem} \\
& = \min_{\ensuremath{\mathbf x}\in S} \| \ensuremath{\mathbf x}- \ensuremath{\mathbf u}\| \\
& = {\bf Dist}(\ensuremath{\mathbf u},S) .
\end{aligned}$$ ◻
:::
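As a sanity check, the identity of this lemma can be verified
numerically for a simple set. The sketch below is purely illustrative:
it takes $S$ to be a Euclidean ball of radius $r$ centered at
$\ensuremath{\mathbf c}$, for which both the distance and the support
function $h_S(\mathbf{w}) = \mathbf{w}^\top \ensuremath{\mathbf c}+ r \|\mathbf{w}\|$
have closed forms, and approximates the maximization over
$\|\mathbf{w}\| \leq 1$ by random sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 5, 1.0
c = rng.standard_normal(d)          # S is the ball {x : ||x - c|| <= r}
u = 3.0 * rng.standard_normal(d)    # query point

def h_S(w):
    # Support function of the ball: max_{x in S} w^T x = w^T c + r * ||w||.
    return w @ c + r * np.linalg.norm(w)

# Left-hand side: Euclidean distance from u to the ball (0 if u lies inside).
dist = max(0.0, np.linalg.norm(u - c) - r)

# Right-hand side: max over ||w|| <= 1 of w^T u - h_S(w), approximated by
# sampling unit directions (w = 0 contributes the value 0).
best = 0.0
for _ in range(20000):
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    best = max(best, w @ u - h_S(w))

print(f"distance: {dist:.4f}   support-function form (sampled): {best:.4f}")
```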
Blackwell's theorem characterizes approachable sets: it is necessary and
sufficient to be able to find a best response
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_1$, to any
$\ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2$, such that
$\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) \in S$.
To proceed with the reduction, we need an equivalent condition stated
formally as follows.
::: {#lem:blackwell-1 .lemma}
**Lemma 13.6**. *For a generalized vector game
$\ensuremath{\mathcal K}_1,\ensuremath{\mathcal K}_2,\{\ensuremath{\mathbf u}\}$,
the following two conditions are equivalent:*
1. *There exists a feasible best response,
$$\forall \ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2 \ , \ \exists \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_1 \ , \mbox{s.t. } \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) \in S .$$*
2. *For all $\mathbf{w}\in {\mathbb R}^d, \|\mathbf{w}\|\leq 1$, there
exists $\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}_1$ such
that
$$\forall \ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2 \ \ , \ \ \mathbf{w}^\top \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) - h_S(\mathbf{w}) \leq 0 .$$*
:::
::: proof
*Proof.* Consider the scalar zero-sum game
$$\min_{\ensuremath{\mathbf x}} \max_{\ensuremath{\mathbf y}} {\bf Dist}(\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) , S) = \lambda .$$
Blackwell's theorem asserts that $\lambda = 0$ if and only if $S$ is
approachable. Using Sion's generalization of the von Neumann minimax
theorem from chapter [8](#chap:games){reference-type="ref"
reference="chap:games"}, $$\begin{aligned}
& \lambda = \min_{\ensuremath{\mathbf x}} \max_\ensuremath{\mathbf y}{\bf Dist}(\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) , S) \\
& = \min_\ensuremath{\mathbf x}\max_\ensuremath{\mathbf y}\max_{\| {\mathbf{w}} \|\leq 1} \left\{ {\mathbf{w}}^\top \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) - h_S ({\mathbf{w}}) \right\} & \mbox{Lemma \ref{lem:dist-equivalence}} \\
& = \max_{\|{\mathbf{w}}\|\leq 1} \min_\ensuremath{\mathbf x}\max_\ensuremath{\mathbf y}\left\{ {\mathbf{w}}^\top \ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) - h_S ({\mathbf{w}}) \right\} & \mbox{minimax}
%& \geq \min_\x \max_\y \left\{ \w^\top \uv(\x,\y) - h_S (\w) \right\}. & \mbox{particular $\w$ }
\end{aligned}$$ Thus, the second statement of the lemma is satisfied if
and only if $\lambda = 0$. ◻
:::
As mentioned previously, the necessity of Blackwell's condition is left
as an exercise. To prove sufficiency, we assume that form (2) of
Blackwell's condition, as stated in Lemma
[13.6](#lem:blackwell-1){reference-type="ref"
reference="lem:blackwell-1"}, is satisfied. Formally, we henceforth
assume that the vector game and set $S$ are equipped with a best
response oracle ${\mathcal O}$, such that $$\label{eqn:blackwell-oracle}
\forall \ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2 \ , \ \ \mathbf{w}^\top \ensuremath{\mathbf u}( {\mathcal O}(\mathbf{w}),\ensuremath{\mathbf y}) - h_S(\mathbf{w}) \leq 0 .$$
We proceed with the formal proof of sufficiency, constructively
specified in Algorithm
[\[alg:oco2approach\]](#alg:oco2approach){reference-type="ref"
reference="alg:oco2approach"}. Notice that in this reduction, the
functions $f_t$ are concave, and the OCO algorithm is used for
maximization.
::: algorithm
::: algorithmic
Input: generalized vector game
$\ensuremath{\mathcal K}_1,\ensuremath{\mathcal K}_2,\{ \ensuremath{\mathbf u}(\cdot,\cdot)\}$,
set $S$, best response oracle ${\mathcal O}$, OCO algorithm ${\mathcal A}$.
Set $\ensuremath{\mathcal K}= \mathbb{B}\subseteq {\mathbb R}^d$, the unit
Euclidean ball, as the decision set for ${\mathcal A}$.
For $t = 1, \ldots, T$:
Set $f_t(\mathbf{w}) = \mathbf{w}^\top \ensuremath{\mathbf u}_{t-1} - h_S(\mathbf{w})$.
Query ${\mathcal A}$: $\mathbf{w}_t \leftarrow {\mathcal A}(f_1, \ldots, f_{t-1})$.
Query ${\mathcal O}$: $\ensuremath{\mathbf x}_t \leftarrow {{\mathcal O}}(\mathbf{w}_t)$.
Observe $\ensuremath{\mathbf y}_t$ and let $\ensuremath{\mathbf u}_t = \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_t)$.
Return $\bar{\ensuremath{\mathbf u}}_T = \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_t)$.
:::
:::
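To make the reduction concrete, here is a minimal sketch of the
algorithm above on a toy instance. All specific choices are illustrative
assumptions and not part of the algorithm: both decision sets are the
probability simplex, the reward vector is
$\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) = \ensuremath{\mathbf x}- \ensuremath{\mathbf y}$,
the target set is $S = \{0\}$ (so $h_S \equiv 0$, and a valid
best-response oracle returns the simplex vertex minimizing
$\mathbf{w}^\top \ensuremath{\mathbf x}$), and the OCO algorithm is
online gradient ascent over the unit ball. The distance of the average
reward vector to $S$ should then shrink roughly like the average regret,
on the order of $1/\sqrt{T}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 4, 5000          # number of pure strategies; number of rounds

def oracle(w):
    # Best-response oracle for this toy instance: with S = {0} (so h_S = 0)
    # and u(x, y) = x - y, the simplex vertex minimizing w^T x guarantees
    # w^T u(x, y) <= 0 for every y in the simplex.
    x = np.zeros(n)
    x[np.argmin(w)] = 1.0
    return x

def project_ball(w):
    nrm = np.linalg.norm(w)
    return w if nrm <= 1.0 else w / nrm

w = np.zeros(n)          # OCO iterate (online gradient ascent on the unit ball)
u_sum = np.zeros(n)
for t in range(1, T + 1):
    x = oracle(w)                      # x_t <- O(w_t)
    y = rng.dirichlet(np.ones(n))      # adversary's (arbitrary) play y_t
    u_t = x - y                        # reward vector u(x_t, y_t)
    u_sum += u_t
    # The next OCO loss is f_{t+1}(w) = w^T u_t - h_S(w) = w^T u_t (concave);
    # take one projected gradient-ascent step on it.
    w = project_ball(w + u_t / np.sqrt(t))

u_bar = u_sum / T
print("Dist(u_bar, S) =", np.linalg.norm(u_bar))   # roughly O(1/sqrt(T))
```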
::: theorem
**Theorem 13.7**. *Algorithm
[\[alg:oco2approach\]](#alg:oco2approach){reference-type="ref"
reference="alg:oco2approach"}, with input OCO algorithm ${\mathcal A}$,
returns the vector
$\bar{\ensuremath{\mathbf u}}_T = \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_t)$
that approaches the set $S$ at a rate of
$${\bf Dist}( \bar{\ensuremath{\mathbf u}}_T , S ) \leq \frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T}$$*
:::
::: proof
*Proof.* Notice that equation
[\[eqn:blackwell-oracle\]](#eqn:blackwell-oracle){reference-type="eqref"
reference="eqn:blackwell-oracle"} implies that for $\mathbf{w}_t$ as
defined in the algorithm, we have
$$\forall \ensuremath{\mathbf y}\in \ensuremath{\mathcal K}_2 \ \ , \ \ \mathbf{w}_t^\top \ensuremath{\mathbf u}({\mathcal O}(\mathbf{w}_t),\ensuremath{\mathbf y}) - h_S(\mathbf{w}_t) \leq 0 .$$
This implies that for any $t$, $$\label{eqn:blackwell-e1}
f_t(\mathbf{w}_t) = \mathbf{w}_t^\top \ensuremath{\mathbf u}({\mathcal O}(\mathbf{w}_t),\ensuremath{\mathbf y}_t) - h_S(\mathbf{w}_t) \leq 0 .$$
Therefore we have, using Lemma
[13.5](#lem:dist-equivalence){reference-type="ref"
reference="lem:dist-equivalence"}, $$\begin{aligned}
{\bf Dist}(\bar{\ensuremath{\mathbf u}}_T, S) & = \max_{\|\mathbf{w}\|\leq 1} \left\{ \mathbf{w}^\top \bar{\ensuremath{\mathbf u}}_T - h_S(\mathbf{w}) \right\} \\
& = \max_{\mathbf{w}^\star \in \ensuremath{\mathcal K}} \frac{1}{T} \sum_t f_t(\mathbf{w}^\star) & \mbox{definition of $f_t$} \\
& \leq \frac{1}{T} \sum_t f_t(\mathbf{w}_t) + \frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} & \mbox{OCO guarantee of ${\mathcal A}$} \\
& \leq \frac{\ensuremath{\mathrm{{Regret}}}_T({\mathcal A})}{T} & \mbox{equation \eqref{eqn:blackwell-e1}}
\end{aligned}$$ ◻
:::
This theorem explicitly relates OCO with approachability, and since we
have already proved the existence of efficient OCO algorithms in this
text, it can be used to formally prove Blackwell's theorem. Completing
the details is left as an exercise.
## From Approachability to Online Convex Optimization
In this subsection we show the converse reduction: given an
approachability algorithm, we design an OCO algorithm with no loss of
computational efficiency. This direction was essentially shown by
Blackwell for discrete decision problems, as described in more detail in
the bibliographic section. We prove it here in the full generality of
OCO.
Formally, given an approachability algorithm ${\mathcal A}$, denote by
${\bf Dist}_T({\mathcal A})$ an upper bound on its rate of convergence
to the set $S$ as a function of the number of iterations $T$. That is,
for a given vector game, denote by
$\bar{\ensuremath{\mathbf u}}_T = \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t,\ensuremath{\mathbf y}_t)$
the average reward vector. Then ${\mathcal A}$ guarantees
$${\bf Dist}(\bar{\ensuremath{\mathbf u}}_T,S) \leq {\bf Dist}_T({\mathcal A}) \ , \ \lim_{T \to \infty} {\bf Dist}_T({\mathcal A}) = 0 .$$
Given an approachability algorithm ${\mathcal A}$, we henceforth create
an OCO algorithm with vanishing regret.
### Cones and polar cones
Approachability is, in a certain geometric sense, dual to OCO. To see
this, we need several geometric notions that are used explicitly in the
reduction from approachability to OCO.
For a given convex set $\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$,
we define its cone as the set of all vectors in
$\ensuremath{\mathcal K}$ multiplied by a non-negative scalar,
$$\mbox{cone}(\ensuremath{\mathcal K}) = \{ c \cdot \ensuremath{\mathbf x}\ | \ \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}, 0 \leq c \in {\mathbb R}\} .$$
The notion of a convex cone is not strictly required for the proofs
below, but it is commonly used in the context of approachability. The
polar set to a given set
$\ensuremath{\mathcal K}\subseteq {\mathbb R}^d$ is defined to be
$$\ensuremath{\mathcal K}^0 \stackrel{\text{\tiny def}}{=}\{ \ensuremath{\mathbf y}\in {\mathbb R}^{d} \ \mbox{ s.t. } \ \forall \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}\ , \ \ensuremath{\mathbf x}^\top \ensuremath{\mathbf y}\leq 0 \} .$$
It is left as an exercise to prove that $\ensuremath{\mathcal K}^0$ is a
convex set, and that for cones, the polar to the polar is the original
set.
Henceforth we need the extension of a convex set defined as follows.
Denote by $1 \oplus \ensuremath{\mathcal K}$ the direct sum of the
scalar one and the set $\ensuremath{\mathcal K}$, i.e., all vectors of
the form
$\tilde{\ensuremath{\mathbf x}} = 1 \oplus \ensuremath{\mathbf x}$ for
$\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$. Denote the bounded
polar extension of a set $\ensuremath{\mathcal K}$ by
$$Q(\ensuremath{\mathcal K}) = (1 \oplus \ensuremath{\mathcal K})^0 .$$
That is, we take all points in the polar set to the direct sum
$1 \oplus \ensuremath{\mathcal K}$.
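As a small worked example (for intuition only), take
$\ensuremath{\mathcal K}= [-1,1] \subset {\mathbb R}$. Then
$1 \oplus \ensuremath{\mathcal K}= \{ (1,x) \ : \ x \in [-1,1] \}$, and a
vector $(y_0,y_1)$ lies in $Q(\ensuremath{\mathcal K})$ exactly when
$y_0 + y_1 x \leq 0$ for every $x \in [-1,1]$, i.e.,
$$Q(\ensuremath{\mathcal K}) = \{ (y_0,y_1) \in {\mathbb R}^2 \ : \ y_0 + |y_1| \leq 0 \} ,$$
a closed convex cone contained in the half-plane $y_0 \leq 0$.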
This definition of the polar set gives rise to the following
quantitative characterization.
::: {#lem:dual-dist .lemma}
**Lemma 13.8**. *Let $\ensuremath{\mathbf y}\in {\mathbb R}^{d+1}$ be
such that
${\bf Dist}(\ensuremath{\mathbf y},Q(\ensuremath{\mathcal K})) \leq \varepsilon$.
Then, denoting by $D$ the diameter of $\ensuremath{\mathcal K}$,
$$\forall \tilde{\ensuremath{\mathbf x}} \in 1 \oplus \ensuremath{\mathcal K}\ , \ \ensuremath{\mathbf y}^\top \tilde{\ensuremath{\mathbf x}} \leq \varepsilon(D+1) .$$*
:::
::: proof
*Proof.* By definition of distance to a set, we have that
${\bf Dist}(\ensuremath{\mathbf y},Q(\ensuremath{\mathcal K})) \leq \varepsilon$,
implies the existence of a point
$\ensuremath{\mathbf z}\in Q(\ensuremath{\mathcal K})$ such that
$\| \ensuremath{\mathbf y}- \ensuremath{\mathbf z}\| \leq \varepsilon$.
Thus, for all
$\tilde{\ensuremath{\mathbf x}} \in 1 \oplus \ensuremath{\mathcal K}$,
we have $$\begin{aligned}
\ensuremath{\mathbf y}^\top \tilde{\ensuremath{\mathbf x}} & = (\ensuremath{\mathbf y}- \ensuremath{\mathbf z}+ \ensuremath{\mathbf z})^\top \tilde{\ensuremath{\mathbf x}} \\
& \leq \| \ensuremath{\mathbf y}- \ensuremath{\mathbf z}\| \|\tilde{\ensuremath{\mathbf x}} \| + \ensuremath{\mathbf z}^\top \tilde{\ensuremath{\mathbf x}} & \mbox{Cauchy-Schwarz} \\
& \leq \varepsilon\|\tilde{\ensuremath{\mathbf x}} \| + \ensuremath{\mathbf z}^\top \tilde{\ensuremath{\mathbf x}} & \|\ensuremath{\mathbf y}-\ensuremath{\mathbf z}\|\leq \varepsilon\\
& \leq \varepsilon\|\tilde{\ensuremath{\mathbf x}}\| + 0 & \tilde{\ensuremath{\mathbf x}} \in 1 \oplus \ensuremath{\mathcal K}, \ensuremath{\mathbf z}\in (1\oplus \ensuremath{\mathcal K})^0 \\
& \leq \varepsilon(1+D) .
\end{aligned}$$ ◻
:::
### The reduction
Algorithm [\[alg:bwa_to_lra\]](#alg:bwa_to_lra){reference-type="ref"
reference="alg:bwa_to_lra"} takes as an input a Blackwell
approachability algorithm that guarantees, under the necessary and
sufficient condition, convergence to a given set. It also takes as an
input a set $\ensuremath{\mathcal K}$ for OCO.
The reduction considers a vector game with decision sets
$\ensuremath{\mathcal K},{\mathcal F}$ and approachability set
$S = Q(\ensuremath{\mathcal K})$, and generates a sequence of decisions
that guarantee low regret as we prove next.
Since this reduction creates the approachability set $S$ as a function
of $\ensuremath{\mathcal K}$, we need to prove that indeed the set $S$
is approachable. We show this in the next subsection.
::: algorithm
::: algorithmic
Input: closed, bounded and convex decision set
$\ensuremath{\mathcal K}\subset {\mathbb R}^d$, approachability oracle ${\mathcal A}$.
Let: vector game with $\ensuremath{\mathcal K}_1 = \ensuremath{\mathcal K}$,
$\ensuremath{\mathcal K}_2 = {\mathcal F}$, and set $S := Q(\ensuremath{\mathcal K})$.
For $t = 1, \ldots, T$:
Query ${\mathcal A}$: $\ensuremath{\mathbf x}_t \leftarrow {\mathcal A}(f_1, \ldots, f_{t-1})$.
Let: ${\mathcal L}(f_1, \ldots, f_{t-1}) := \ensuremath{\mathbf x}_t$.
Receive: cost function $f_t$.
Construct reward vector $\ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t ,f_t) := \nabla_t^\top \ensuremath{\mathbf x}_t \oplus (-\nabla_t)$.
:::
:::
::: {#thm:blackwell_to_olo .theorem}
**Theorem 13.9**. *The reduction defined in
Algorithm [\[alg:bwa_to_lra\]](#alg:bwa_to_lra){reference-type="ref"
reference="alg:bwa_to_lra"}, for any input algorithm ${\mathcal A}$,
produces an OLO algorithm ${\mathcal L}$ such that
$$\ensuremath{\mathrm{{Regret}}}_T({\mathcal L}) \leq T (D +1) \cdot {{\bf Dist}_T({\mathcal A})} .$$*
:::
::: proof
*Proof.* The approachability algorithm guarantees
${\bf Dist}( \bar{\ensuremath{\mathbf u}_T} , S ) \leq {\bf Dist}_T({\mathcal A})$.
Using the definition of $S$ and Lemma
[13.8](#lem:dual-dist){reference-type="ref" reference="lem:dual-dist"}
we have
$$\begin{aligned}
& \forall \ensuremath{\mathbf {\tilde{x}}_{}} \in 1 \oplus \ensuremath{\mathcal K}\ . \ (D+1) \cdot {\bf Dist}_T({\mathcal A}) \\
& \geq ( \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t, f_t)) ^\top \ensuremath{\mathbf {\tilde{x}}_{}} \\
& \geq ( \frac{1}{T} \sum_{t=1}^T \ensuremath{\mathbf u}(\ensuremath{\mathbf x}_t, f_t)) ^\top (1 \oplus \ensuremath{\mathbf x}^\star ) \\
& = \frac{1}{T} \sum_{t=1}^T \nabla_t^\top \ensuremath{\mathbf x}_t - \frac{1}{T} \sum_{t=1}^T \nabla_t^\top \ensuremath{\mathbf x}^\star \\
& \geq \frac{1}{T} \ensuremath{\mathrm{{Regret}}}_T({\mathcal L}),
\end{aligned}$$ where the second inequality holds since the first
inequality holds for every $\ensuremath{\mathbf {\tilde{x}}_{}}$, in
particular for the vector $1 \oplus \ensuremath{\mathbf x}^\star$. ◻
:::
### Existence of a best response oracle
Notice that the reduction of this section from approachability to OCO
does not require the best response oracle. However, Blackwell's
approachability theorem requires the existence of such an oracle as a
necessary and sufficient condition. Thus, for the set $S$ we constructed
to be approachable at all, such an oracle needs to exist. This is what
we show next.
Consider the vectors $\ensuremath{\mathbf u}_t$ constructed in the
reduction. A best response oracle finds, for every vector
$\ensuremath{\mathbf y}$, a vector $\ensuremath{\mathbf x}$ that
guarantees
$\ensuremath{\mathbf u}(\ensuremath{\mathbf x},\ensuremath{\mathbf y}) \in S$.
In our case, this translates to the condition
$$\forall f \in {\mathcal F}\ , \ \exists \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}\ , \ \nabla f(\ensuremath{\mathbf x})^\top \ensuremath{\mathbf x}\oplus (- \nabla f(\ensuremath{\mathbf x})) \in (1 \oplus \ensuremath{\mathcal K})^0 .$$
By definition of the polar set, this implies that for all
$\tilde{\ensuremath{\mathbf x}} \in \ensuremath{\mathcal K}$, we have
$$\nabla f(\ensuremath{\mathbf x})^\top \ensuremath{\mathbf x}- \nabla f(\ensuremath{\mathbf x})^\top \tilde{\ensuremath{\mathbf x}} \leq 0 .$$
In other words, the best response oracle corresponds to a procedure that
given $f$, finds a vector $\ensuremath{\mathbf x}^\star$ such that
$$\forall \ensuremath{\mathbf x}\in \ensuremath{\mathcal K}\ . \ f(\ensuremath{\mathbf x}^\star) - f(\ensuremath{\mathbf x}) \leq \nabla f(\ensuremath{\mathbf x}^\star)^\top (\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}) \leq 0 .$$
This is an optimization oracle for the set $\ensuremath{\mathcal K}$!
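As a concrete illustration (not part of the reduction itself), note that
for any convex, differentiable $f$ and any convex
$\ensuremath{\mathcal K}$, the minimizer
$\ensuremath{\mathbf x}^\star = \arg\min_{\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}} f(\ensuremath{\mathbf x})$
satisfies exactly the required first-order condition
$\nabla f(\ensuremath{\mathbf x}^\star)^\top (\ensuremath{\mathbf x}^\star - \ensuremath{\mathbf x}) \leq 0$
for all $\ensuremath{\mathbf x}\in \ensuremath{\mathcal K}$. For instance,
if $\ensuremath{\mathcal K}$ is the unit Euclidean ball and
$f(\ensuremath{\mathbf x}) = \ensuremath{\mathbf c}^\top \ensuremath{\mathbf x}$
is linear with $\ensuremath{\mathbf c}\neq 0$, such an oracle simply
returns $\ensuremath{\mathbf x}^\star = -\ensuremath{\mathbf c}/\|\ensuremath{\mathbf c}\|$.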
## Bibliographic Remarks {#bibliographic-remarks-10}
David Blackwell's celebrated Approachability Theorem was published in
[@blackwell_analog_1956]. The first no-regret algorithm for a discrete
action setting was given in a seminal paper by James Hannan in
[@Hannan57] the next year. That same year, Blackwell pointed out
[@blackwell1954controlled] that his approachability result leads, as a
special case, to an algorithm with essentially the same low-regret
guarantee proven by Hannan. For Hannan's account of events see
[@gilliland2010conversation].
Over the years several other problems have been reduced to Blackwell
approachability, including asymptotic calibration
[@foster_asymptotic_1998], online learning with global cost functions
[@even-dar_online_2009] and more [@mannor2008regret]. Indeed, it has
been presumed that approachability, while establishing the existence of
a no-regret algorithm, is strictly more powerful than
regret-minimization; hence its utility in such a wide range of problems.
However, this was recently shown not to be the case.
@abernethy2011blackwell showed that approachability is in fact
equivalent to OCO. This result is the basis of the material presented in
this chapter. One side of their reduction was simplified and generalized
in [@shimkin2016online].
## Exercises
# Introduction to Trustworthy Machine Learning
## Scale is all we need?
::: definition
Generalization An ML model generalizes well if the rules found on the
training set can be applied to new test situations we are interested in.
:::
The story of Machine Learning (ML) seems to be that a bigger model with
more data implies better test loss, as shown in
Figure [1.1](#fig:scaling){reference-type="ref"
reference="fig:scaling"}. Such models generalize well. Of course, more
computing resources are needed, but the more prominent tech companies
possess them.
!["Language modeling performance improves smoothly as we increase the
model size, dataset \[\...\] size, and amount of compute \[with
sufficiently small batch size\] used for training. For optimal
performance all three factors must be scaled up in tandem. Empirical
performance has a power-law \[i.e., $y = a \cdot x^b$\] relationship
with each individual factor when not bottlenecked by the
two." [@DBLP:journals/corr/abs-2001-08361].
$1 \text{ PF-day} = 10^{15} \cdot 24 \cdot 3600 \text{ floating point operations}$.
Figure taken
from [@DBLP:journals/corr/abs-2001-08361].](gfx/01_scaling.pdf){#fig:scaling
width="0.9\\linewidth"}
Between 2013 and 2020, there was a steady increase in
ImageNet [@5206848] top-1 accuracy
(Figure [1.2](#fig:leaderboard){reference-type="ref"
reference="fig:leaderboard"}). This increase slowed over time, and
between 2020 and 2023, we see a plateau in the top-1 accuracy --
seemingly, we "solved ImageNet."
![ImageNet top 1 accuracy leaderboard on
05.03.2023 [@imagenetleaderboard]. The performance of state-of-the-art
methods plateaued over time.](gfx/01_imagenet.png){#fig:leaderboard
width="\\linewidth"}
### Are we done with ML?
So, are we done with ML? If the reader's answer is 'yes', then the
following questions naturally follow:
- Why do we not see ML used in every business?
- Why is ML not changing our lives yet?
- Why have we not gone through a quantum leap in productivity
(results, profits, products) owing to ML?
If the reader's answer is 'no', then we ask:
- What are the remaining challenges in ML?
- How can we capture and measure those challenges?
This book aims to answer these questions while showcasing current
state-of-the-art approaches in the field of TML.
## Key Limitations of ML
Our answer is 'no': Not all businesses use ML, and we have not yet gone
through a quantum leap in productivity because of ML. Let us review the
*fundamental limitations of ML*.
### ML often does not work.
ML models *do* generalize, but not in the way one would expect. They
tend to generalize well, given
1. sufficient amount of data,
2. appropriate inductive biases, and
3. if we stay in the *same distribution* as the training set
(in-distribution (ID) generalization).
Our models, however, need to cope with *new situations* in practice.
Whenever there are changes in the deployment conditions, our model will
usually work *much worse*.
### ML has high operating costs.
So, we usually need to constantly adjust our model to the new settings.
This requires
1. an ML engineer ($\mathcal{O}$(100k USD/year)),
2. collecting fresh data (on dedicated pipelines) or buying specialized
proprietary data, and
3. computing resources or credits for an ML cloud to adjust the models
on the new data.
From a business perspective, these points boil down to a money issue. ML
has high operating costs if our model constantly has to be adapted to
new scenarios. If we had a model that generalized well, we would have
less or even none of these costs.
### ML is currently not trustworthy.
Even if we address the previous concerns, broad use of ML is not just a
matter of whether our model works well or not -- *it is difficult to
trust ML models*. Extreme cases are when our *life*, *health*, or
*money* is at stake.
**Example 1**: Ten AI doctors say we have stomach cancer and recommend
chemo- and radiotherapy. Could we trust this diagnosis and start these
treatments? The majority of people would want to have the cancer pointed
out in the MRI images. This is an example of *explainability*.
**Example 2**: We are in a self-driving car driving along a curvy
cliffside road. Should we take our hands off the wheel? Probably not. We
likely *could* not even do that because these cars would insist on human
intervention (e.g., by giving warning signs). The automatic detection of
an unusual environment is an example of *uncertainty quantification*.
**Example 3**: It is also hard to trust images generated by
[DALL-E](https://openai.com/product/dall-e-2) to be sensible: We often
see absurd artifacts in otherwise great ML-generated art. This is a
problem of *OOD generalization*, as our model only gives high-quality
images for a restricted set of prompts.
## Topics of the Book
The topics this book covers are as follows:
1. **Out-Of-Distribution (OOD) Generalization.** Can we train a model
to work well beyond the training distribution?
2. **Explainability.** Can we make a model explain itself?
3. **Uncertainty.** Can we make a model know when it does not know?
4. **Evaluation.** How to quantify trustworthiness? How to measure
progress?
The topics we do not cover but are also core parts of Trustworthy
Machine Learning:
1. **Fairness.** Demographic disparity, the difference in the
proportion of accepted and rejected individuals across population
subgroups, is a core concern of fairness. The use of sensitive
attributes (often implicitly) is also a significant problem
regarding trustworthiness.
2. **Privacy and Security.** Data are often proprietary and private.
How to keep the data safe? Often we can reverse-engineer the
original samples of the training set, e.g., in language models. This
way, one can obtain sensitive, private information as well, e.g.,
medical records of patients.
3. **Abuse of AI tools.** One can use ML to create deepfakes, e.g., to
swap faces of people. Disseminating falsehood, e.g., via Large
Language Models (LLMs), is also an alarming problem.
4. **Environmental concern.** Accelerated computing consumes much
energy.
5. **Governance.** It is important to regulate the use of AI and
formalize boundaries of AI usage.
## Trustworthiness: Transition from "What" to "How"
To give an introduction to trustworthiness in ML, let us first define
the "What" and "How" parts of an ML problem.
::: definition
The "What" Part of a Problem The "What" part of a problem is learning
the task we want to solve, i.e., the relationship between $X$ and $Y$.
For example, the "What" part might be categorizing images into classes.
The "What" point of view is that predicting $Y$ given $X$ is sufficient.
:::
::: definition
The "How" Part of a Problem The "How" part of a problem specifies how a
system comes to its prediction, what cues it is basing its
decision-making on, and how it reasons about the prediction.
For robust AI systems, whether we solve a problem is not enough. How we
solve it matters more.
:::
We currently have a "What" to "How" *paradigm shift* in ML. Solving the
"What" part is often *not enough*, as detailed in the following section.
### Why Solving "What" is Not Enough
A model can use multiple *recognition cues* $Z$ to make its prediction.
These cues determine what the model bases its prediction on and what it
exploits. There are *two categories* of cues:
1. **Causal, robust cue.** Such cues are robust to environmental
changes because the label is *caused* by this cue, so the relationship
persists across domains. We need to rely on causal, robust cues
because otherwise, we will not generalize well to new domains. As an
example, consider a car classification task. Then $Z$ could be the
car body region of the image, which is a robust cue.
2. **Non-causal, spurious cue.** Such cues are hurtful for
generalization. The label is not causally related to this cue, but
they are *highly correlated* in the dataset. In the car
classification example, a highway in the background would be a
spurious cue.
When using vanilla training, nothing stops the model from relying only
on non-causal, spurious cues for its predictions. The model can achieve
high training accuracy (and even high in-distribution test accuracy) if
the spurious cues are highly correlated with the label. Whenever the
model faces an OOD dataset, however, it can perform arbitrarily poorly,
depending on how predictive the learned spurious cue is in the new
setting.
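The following minimal sketch illustrates this failure mode on synthetic
data (an illustration only, not any particular benchmark): a linear
classifier is trained on data in which a strong spurious feature is
perfectly correlated with the label, and is then evaluated on an OOD
test set in which that correlation is reversed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, corr):
    """Two cues: a weakly informative causal cue, and a strong cue that agrees
    with the label with probability `corr` (spurious: it is only correlated
    with the label in the training environment)."""
    y = rng.integers(0, 2, size=n)
    causal = 0.5 * (2 * y - 1) + rng.standard_normal(n)
    agree = rng.random(n) < corr
    spurious = 2.0 * np.where(agree, 2 * y - 1, -(2 * y - 1))
    return np.stack([causal, spurious], axis=1), y

X_train, y_train = make_data(2000, corr=1.0)   # spurious cue perfectly correlated
X_id, y_id = make_data(2000, corr=1.0)         # in-distribution test set
X_ood, y_ood = make_data(2000, corr=0.0)       # OOD test set: correlation reversed

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:   ", clf.score(X_train, y_train))
print("ID test accuracy: ", clf.score(X_id, y_id))
print("OOD test accuracy:", clf.score(X_ood, y_ood))   # typically far below chance
```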
#### Shifted Focus in ML: The "How" parts of problems in Computer Vision
In Computer Vision (CV), we might often be interested in whether an ML
system is robust to perturbations. Examples include Gaussian noise,
motion blur, zoom blur, brightness, and contrast changes. However, there
are even more creative perturbations. For example, we might measure
whether the ML system can still classify objects accurately in quite
improbable positions.
Spurious cues that are highly correlated with the task cue but are
otherwise semantically irrelevant can greatly harm a model's performance
when not acted against. We often want to test whether our classifier
exploits spurious cues. This can lead to it breaking down on OOD
samples. For example, we can observe the behavior of the classifier in
cases where the background is changed, the foreground object is
deleted/changed, or the backgrounds and foregrounds are mixed across
categories. If our model uses the image background as a spurious cue to
make its predictions, it will showcase poor performance in these tests.
#### Shifted Focus in ML: The "How" parts of problems in Natural Language Processing
We would like to briefly mention Chain-of-Thought (CoT) Prompting. An
example is given in Figure [1.3](#fig:cot){reference-type="ref"
reference="fig:cot"}. If we want to teach our Natural Language
Processing (NLP) model a new task, we can provide it with some examples
of the task and the correct answer and then ask a follow-up question. We
supply no explanation of the answer in this case. What happens often is
that the LLMs give incorrect answers to the next question. However, when
prompting the model with exemplary detailed explanations of each correct
answer, called CoT Prompting, the model also explains its prediction and
even gets the answer right. It learns to rely on the right cues to
provide the answer (and also provides an explanation).
![CoT Prompting can lead to better model answers. Figure taken
from [@wei2023chainofthought].](gfx/01_cot.pdf){#fig:cot
width="\\linewidth"}
### Machine Learning 2.0
We distinguish two ML paradigms regarding what question they seek
answers for: ML 1.0 and 2.0.
::: definition
Machine Learning 1.0 In ML 1.0, we learn the distribution $P(X, Y)$ (or
derivative distributions, such as $P(Y \mid X)$), either implicitly or
explicitly, from $(X, Y)$ ("What") data. ML 1.0 only considers the
"What" task: It does not include the used cues, explanations, or
reasoning, i.e., the "How" aspect $Z$.
:::
::: definition
Machine Learning 2.0 In ML 2.0, we learn the distribution $P(X, Z, Y)$
(or derivative distributions), either implicitly or explicitly, from
$(X, Y)$ ("What") data:
$$\text{Input } X \xrightarrow{\makebox[1.4cm]{}} \stackanchor{\text{Selection of cue, exact mechanism, reasoning.}}{\text{The ``How'' aspect \(Z\)}} \xrightarrow{\makebox[1.4cm]{}} \text{Output } Y$$
:::
The motivation of ML 2.0 is clear: we want to use the same kind of data
to get more knowledge. However, the $Z$-problem is *not guaranteed to be
solvable* from $(X, Y)$ data. Learning $P(X, Z, Y)$ contains all kinds
of derivative tasks (a new set of tasks compared to what we had in ML
1.0): Now, we are trying to learn some distribution of $X$, $Z$, and
$Y$. For example, we may wish to be able to predict the Ground Truth
(GT) $Z$ from input $X$ correctly (learn $P(Z \mid X)$), i.e., to make
sure that given an input, the model is choosing the right cue for input
$X$.
In the following chapters, we aim to introduce the reader to various
scalable trustworthy ML solutions with a focus on both theory and
applications.
# OOD Generalization
## Introduction to OOD Generalization
OOD generalization stands as a pivotal challenge in modern ML research.
It seeks to construct robust models that perform accurately even on data
not represented in the training set. This branch of research not only
elevates the trustworthiness and reliability of ML systems but also
broadens their applicability in real-world scenarios.
Before we get our hands dirty, we have to discuss some terms that are
often used in OOD generalization. Let us start with the most basic one:
the *task* we want to solve.
![Illustrations of various computer vision tasks, taken from [@article].
The field of computer vision is
vast.](gfx/02_cvexamples.png){#fig:cvexamples width="0.8\\linewidth"}
::: definition
Task Task refers to the ground truth (GT), possibly non-deterministic
(see aleatoric uncertainty in
Section [4.2](#ssec:types){reference-type="ref" reference="ssec:types"})
function that maps from the input space $\mathcal{X}$ to the output space $\mathcal{Y}$
that a model is learning, or is a description thereof. Equivalently, the
task is the GT distribution $P(Y \mid X = x)$ we wish to model.
**Alternative definition**: Task is the factor of variation (cue) that
matters for us, i.e., the factor we want to recognize at deployment.
Tasks are not inherent to the data; they are always defined by humans.
This slightly differs from the previous definition, but both explain the
same concept.
:::
### Examples of Tasks
::: definition
ImageNet ImageNet [@deng2009imagenet] is a large-scale, diverse dataset
initially created for object recognition research. Nowadays, it is
popular to use ImageNet for classification, omitting the prediction of a
bounding box. It contains millions of annotated images collected from
the web and spans thousands of object categories that are organized
according to the WordNet hierarchy for nouns. The dataset contains
hundreds to thousands of samples per node in the hierarchy.
**Ambiguity with "the" ImageNet Dataset**: The term "ImageNet dataset"
has been used to refer to two main variants of the dataset, which has
caused a great deal of confusion:
- **Full ImageNet Dataset/ImageNet-21K/ImageNet-22K**: The full
ImageNet dataset contains 14,197,122 images associated with 21,841
WordNet categories [@imagenetwiki]. However, not all of these images
are used in typical computer vision benchmarks. ImageNet-21K is
equivalent to ImageNet-22K; the only difference is that some researchers
round the number of classes up to 22,000 in the name.
- **ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
Dataset/ImageNet-1K/ILSVRC2017**: This is the most widely used
subset of the ImageNet-21K dataset, involving 1,000 object
categories. It contains 1,281,167 training data points, 50,000
validation samples, and 100,000 test images. [@imagenetwiki].
However, the labels for the test set are not released. Therefore,
one can only use the validation performance for evaluation when
writing a paper, making the evaluation process less trustworthy. The
annual ILSVRC competition, especially the 2012 challenge, which was
won by the deep learning model AlexNet [@10.5555/2999134.2999257],
played a pivotal role in the rise of deep learning.
:::
Even within "classification", there exist various tasks: different sets
of classes correspond to different tasks.
- The Pascal VOC datasets [@Everingham10] consider 20 classes. These
are datasets for object detection, instance segmentation, semantic
segmentation, action classification, and image classification.
- The COCO datasets [@cocodataset] contain 80 object categories and 91
stuff categories. Object categories strictly contain the Pascal VOC
classes. These are datasets for object detection, instance
segmentation, panoptic segmentation, semantic segmentation, and
captioning. Crowd labels are added when there are too many (more
than ten) instances of a class in an image. These aggregate multiple
objects.[^1]
#### Examples of Tasks in Computer Vision (CV)
An overview of CV tasks is given in
Figure [2.1](#fig:cvexamples){reference-type="ref"
reference="fig:cvexamples"}.
- *Semantic segmentation* aims to predict a semantic label for each
pixel in an image.
- *Classification* is the problem of categorizing a single object in
the image.
- *Classification + localization* aims to classify *and* localize a
single object in the image.
- *Object detection* classifies and localizes *all* objects in the
image. Now we have no restrictions on the number of objects the
image might contain.
- *Instance segmentation* assigns a semantic label and an instance
label to every pixel in the image. The instance label differentiates
between unique instances with the same semantic label.
#### Examples of Tasks in NLP
::: definition
Semantic Analysis Semantic analysis in natural language processing (NLP)
analyzes the conceptual meaning of morphemes, words, phrases, sentences,
grammar, and vocabulary.
:::
::: definition
Pragmatic Analysis Pragmatic analysis in NLP analyzes semantic meaning
but also analyzes context. Instead of examining what an expression
means, it studies what the speaker means in a specific context.
:::
*Analysis tasks* aim to uncover syntactic, semantic, and pragmatic
relationships between words/phrases/sentences in a document.
- Tokenization is an essential syntactic analysis technique.
- The semantic analysis of a document might involve sentence
classification (like sentiment analysis) or named-entity
recognition.
- Word sense disambiguation is a particular example of pragmatic
analysis. It aims to unfold which sense of a word is meant in a
certain context.
- Part-of-speech tagging can be deemed both a semantic and a
pragmatic analysis technique. It marks up words in a document with
the corresponding part of speech (e.g., noun or verb).
*Generation tasks* involve generating text.
- Machine translation is an example of conditional text generation
where a translation in language $B$ is generated given the original
document in language $A$.
- Question answering is also a conditional text generation problem
where the model generates a coherent answer given a natural language
question.
- Language modeling is the task of predicting the next word/character
in a document or, equivalently, the task of assigning a probability
to any text. Here, we condition on the partial sequence we have
generated so far.
### Generalization Types
Now, we are ready to consider a general overview of generalization
types. First, let us introduce some terms that will play a crucial role
in our discussion of OOD generalization.
::: definition
Environment (Domain) The environment is the distribution from which our
data are sampled.
:::
::: definition
Cue (Feature, Attribute) Cues, features, and attributes all refer to the
factors of variation in the data sample. Examples include color, shape,
and size.
**Note**: A cue is not necessarily a feature in a vector representation.
Cues are also entirely independent of the model. They are
characteristics of the dataset.
:::
::: definition
In-Distribution (ID) and Out-of-Distribution (OOD) Samples
In-distribution (ID) samples come from a test dataset which is used to
gauge the model's performance on familiar data (in-distribution
generalization). Out-of-distribution (OOD) samples, on the other hand,
are drawn from a different test dataset to assess the model's
performance on unfamiliar or unexpected data (out-of-distribution
generalization).
:::
::: definition
Generalization Types
::: center
[]{#tab:gentypes label="tab:gentypes"}

| **Generalization type** | **How is training $\boldsymbol{\approx}$ test?** | **How is training $\boldsymbol{\ne}$ test?** |
|---|---|---|
| ID | Training and test sets come from the same distribution. | We have different samples. |
| OOD: Cross-Domain | Training and test sets are for the same task. | They are from different domains. |
| OOD: Cross-Bias | Training and test sets are for the same task. | They have different cue-correlations. |
| OOD: Adversarial | Training and test sets are for the same task. | Test samples are worst-case scenarios. |

:::
**Note**: This is not a comprehensive list of OOD generalization
variants.
:::
Let us give examples for each scenario and consider some remarks.
#### Example of ID Generalization
We consider the task of recognizing a set of people from an office. They
might be in different poses or situations, but always the same people,
both in dev and deployment. The office theme will be common in the
subsequent examples for different generalization types to highlight and
emphasize the main differences between these.
#### Example of Cross-Domain OOD Generalization
Here, we might consider the task of recognizing person $A$ from the
office, but for the first time in a party costume during deployment. We
have the same (or even different) people from dev in new, unseen
clothes. One of the features is changing from training to test, meaning
the training and test sets are from different domains. This
generalization scenario mixes many factors; we will focus on cross-bias
generalization more.
#### Example of Cross-Bias OOD Generalization
Persons $A$ and $B$ work in the office of the previous examples. We want
to recognize person $A$ for the first time in person $B$'s jacket. We
have the same people but in exchanged clothes. The biased cue for person
$A$ has changed from their jacket to person $B$'s jacket. More formally,
the cue that was highly correlated with person $B$ in the training set
now co-occurs with person $A$ in the test dataset. The ML system will
likely predict person $B$ if we do not counteract the bias. This is
because of the well-known shortcut bias of ML systems, which we will
discuss later.
In practice, we are usually interested in changing, from training to
test, the cue that the model is likely to rely on when making a
prediction (because of shortcut bias), e.g., clothing. Such benchmarks test whether the
model is focusing on a cue that is irrelevant to the task (e.g., a
person's clothing is irrelevant to their identity).
#### Example of Adversarial OOD Generalization
Consider the problem of recognizing person $A$ even when they hide their
identity with a face mask (with someone else's face on it or using other
tricks). Now person $A$ is the adversary against our face recognition
system, but this does not necessarily mean that person $A$ has malicious
intentions. Person $A$ might wish to hide their true identity by making
the model fail to recognize their face. There are also adversarial
patterns to avoid facial recognition systems, e.g., to avoid
surveillance. Adversarial generalization is a tough task, and it is even
more challenging to obtain guarantees for this generalization type.
## Why do we even care about OOD generalization?
In the YouTube video "[Self Driving Collision
(Analysis)](https://www.youtube.com/watch?v=Zl9rM8D3k34&list=LL)" [@collisionanalysis],
we see perfect weather and visibility, with low traffic. Nevertheless,
as the Tesla turns onto the road, it does not detect a row of plastic
bollards and hits them. This accident is not a one-off occurrence, as
later in the video, it tries to hit other bollards too. Why does this
happen? Because this is a new street arrangement that the model has not
seen before, and it fails to generalize to this situation. To be sure
that the model is robust in many situations, we need some kind of OOD
generalization.
Many things constantly change in the world. New, unseen events happen
all the time, like the Covid pandemic. If we trained a model before the
pandemic to predict loungewear sales for a particular date, it might
have extrapolated well until national lockdowns were announced. These
lockdowns caused a substantial domain shift, in which loungewear sales
increased considerably. The model we trained before the lockdowns failed
to reflect reality after this environmental change.
The typical solution to domain shifts is model retraining. Things
inevitably change over time, and a drop in model accuracy over time is
unavoidable if the model is kept fixed. People thus often recollect
data, annotate new samples, and retrain the model on new data. We can
use this procedure to keep the model's accuracy above a certain
threshold, illustrated in Figure [2.2](#fig:decay){reference-type="ref"
reference="fig:decay"}.
![Illustration of the use of regular model updates to preserve
deployment accuracy, taken from [@modelupdate]. In many cases, model
accuracy would plummet over time if we did not update it
regularly.](gfx/02_model_decay.png){#fig:decay width="0.8\\linewidth"}
::: definition
Model Selection Model selection is the process of selecting the best
model after the individual models are evaluated based on the required
criteria. One usually has a pool of models specialized for various
domains. The expert chooses the best model for the current deployment
scenario. For example, Amazon often performs model selection in its
cloud services.
:::
::: definition
MLOps MLOps is an engineering discipline that aims to unify ML systems
development and deployment to standardize and streamline the *continuous
delivery* of high-performing models in production [@mlops]. An overview
is given in Figure [2.3](#fig:mlops){reference-type="ref"
reference="fig:mlops"}.
:::
![MLOps is a complex discipline with multiple participants. **Note**:
Data Acquisition is not just a DB query. It also includes the collection
of data. The data curation procedure can take a long time. One must keep
track of shifting data (data versions), keep annotators in the loop, and
update models accordingly. This procedure can be very costly. Figure
adapted from [@mlops].](gfx/02_mlops.pdf){#fig:mlops
width="\\linewidth"}
However, the constant retraining of models and the model selection
expertise (MLOps) are costly.
- **Manpower**: 100k EUR/person/year at least.
- **GPUs, electricity**: 25k EUR/year + 8000 kg $\text{CO}_2$/year
([considering a single NVIDIA Tesla A100 unit and Google
Cloud](https://cloud.google.com/products/calculator#id=457292aa-54c3-471e-91bb-d418e7dd7032)).
- **Data management** (schema, maintenance) is also expensive.
::: information
NVIDIA Tesla A100 The NVIDIA Tesla A100 is a tensor core GPU often used
for training ML models. It can be partitioned into 7 GPU instances so
multiple networks can efficiently operate simultaneously (training or
inference) on a single A100. In early 2023, it has one of the world's
fastest memory bandwidths, with over $2$ TB/s. Training BERT is possible
*in under a minute* using a cluster of 2048 A100 GPUs [@nvidiaa100].
:::
::: definition
DevOps A set of practices intended to reduce the time between committing
a change to a system and the change being placed into production while
ensuring high quality. [@devopsdef]
:::
ML problems arise from business goals. If there is no distribution shift
and no need for model selection, there is no need for MLOps, and we only
need DevOps. We need MLOps (continuous updates of models) because the
data, user, and environment shift continuously. Ideally, we only have to
perform continuous updates semi-automatically: We only need a few people
to maintain the system. Eventually, however, we wish to get over MLOps
as well. We need models that are very robust to domain shifts to achieve
this.
### Greater Levels of Automation
First, we define *diagonal datasets* that will help us understand the
levels of automation in ML and the ill-defined behavior in OOD
generalization (Section [2.7.1](#ssec:spurious){reference-type="ref"
reference="ssec:spurious"}).
::: definition
Diagonal Dataset A dataset in which all (or, in general, multiple) cues
vary together (i.e., they are perfectly correlated), and each of these
cues can be used to achieve 100% training accuracy. Thus, it is
impossible to infer what the deployment task is from the label
variation: a model using any of the perfectly correlated cues could
achieve 100% training accuracy.
:::
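As a minimal illustrative sketch (with hypothetical cue encodings and a toy decision-tree classifier; none of this is from a real benchmark), the following constructs a diagonal dataset in which color and shape are perfectly correlated, so a classifier restricted to either single cue reaches 100% training accuracy:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy diagonal training set: the shape label and the color label always coincide.
# Hypothetical cue encodings: shape in {0: circle, 1: triangle, 2: square},
# color in {0: red, 1: green, 2: blue}.
n = 300
shape = rng.integers(0, 3, size=n)
color = shape.copy()   # perfect correlation: color index == shape index
y = shape              # the (unknown to the model) deployment task is "shape"

X = np.stack([shape, color], axis=1)

# A classifier restricted to either single cue reaches 100% training accuracy,
# so the data alone cannot tell us which cue defines the deployment task.
for cue, name in [(0, "shape-only"), (1, "color-only")]:
    clf = DecisionTreeClassifier().fit(X[:, [cue]], y)
    print(name, "train accuracy:", clf.score(X[:, [cue]], y))  # both print 1.0
```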
Next, we need to describe the Amazon Mechanical Turk service to reason
about annotation costs and crowdsourcing.
::: definition
Amazon Mechanical Turk The [Amazon Mechanical
Turk](https://www.mturk.com/) (AMT) is an online labor market for
dataset annotation, where one can crowdsource their annotation task.
:::
We consider five levels of automation (1: lowest, 5: highest) in
problem-solving.
#### Level 1: No ML
In this case, we have no ML model to use for our particular problem. The
human effort is gigantic: A center with hundreds of personnel is
constantly required (which was a common case 40-50 years ago). They take
care of input streams on the fly, i.e., they are processing a continuous
data stream with *human intelligence*. This procedure is *very costly*
and *inefficient*.
#### Level 2: MLOps with Periodic Annotation
In this setup, we have an ML model available to help with our problem.
However, this model can only generalize to the same distribution based
on the annotated samples. The human effort is reduced but still
considerable: A group of people annotates thousands (possibly millions
across projects) of samples every month, as the world is changing
quickly. Options for annotations include in-house annotators,
outsourcing to annotation companies, or crowdsourcing through AMT.
Annotation costs 10-30 USD per hour per person on AMT. (Slightly above
minimum wage for US workers.) Harder tasks, e.g., instance segmentation,
cost more. For the browser-based annotation of 1 million images, we
estimate up to 1 million USD for AMT crowdsourcing. An ML engineer's
market price is 100-300k USD per year per person. These costs are
prohibitively expensive for small businesses.
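To see where the 1 million USD figure can come from, here is a rough back-of-the-envelope sketch; the per-image annotation time and the overhead factor are assumptions and depend heavily on the task:

```python
# Back-of-the-envelope AMT annotation cost (all numbers are illustrative assumptions).
n_images = 1_000_000
seconds_per_image = 120      # assume ~2 minutes per image for a non-trivial task
hourly_wage_usd = 25         # within the quoted 10-30 USD/hour range
platform_overhead = 1.2      # assume ~20% platform fees

hours = n_images * seconds_per_image / 3600
cost = hours * hourly_wage_usd * platform_overhead
print(f"{hours:,.0f} annotation hours, ~{cost:,.0f} USD")  # ~33,333 hours, ~1,000,000 USD
```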
#### Level 3: MLOps with Reduced Annotation
Now, we have an ML model that is minimally resilient to distribution
shifts. The human effort is reduced even more: Annotation is required
only every year. This resilience reduces the cost of MLOps quite a bit.
#### Level 4: MLOps with No Annotation
In this hypothetical scenario, our ML model -- once trained -- is so
robust against distribution shifts that it only requires minimal human
engineering (e.g., hyperparameter adaptation and model selection).
Regarding the human effort, annotation is not needed anymore. Only ML
engineers are needed to select the right model suitable for the task at
that particular time (based on the needs of business executives). They
are also constantly looking for the best models.
#### Level 5: ML without MLOps {#sssec:level5}
Here, even the ML engineer functionality is (partly) automated. The
model can alter its hyperparameters to adapt to changing distributions.
Adapting hyperparameters usually requires fine-tuning; however, the way
we choose hyperparameters can be made very efficient, e.g., requiring
only very few observations of training sessions and data samples. (In
ML, we always need observations.) It can even be automated with, e.g.,
Bayesian optimization (see the sketch after this paragraph). Importantly,
this does not refer to a meta-model
that can automatically choose between the set of candidate models. We
cannot even be sure that is possible, as certain factors cannot be
inferred from the data. As an example, let us consider a diagonal
dataset in which the shape and color cues co-occur perfectly. At one
point, users might want a shape-based classifier. Later they might
change their mind and want a color-based classifier. This requirement is
not reflected in the data stream for a diagonal dataset: it is part of
the human specification. This is precisely why model selection always
involves human feedback.
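As mentioned above, the choice of hyperparameters can be automated with, e.g., Bayesian optimization. The following is a minimal sketch using scikit-optimize's `gp_minimize` on a synthetic stand-in for a validation metric; the library choice, search space, and objective function are assumptions for illustration only:

```python
import numpy as np
from skopt import gp_minimize

# Stand-in for "train a model with these hyperparameters and return validation error".
def validation_error(params):
    log_lr, weight_decay = params
    # Hypothetical smooth response surface with an optimum around lr=1e-3, wd=1e-4.
    return (log_lr + 3.0) ** 2 + (np.log10(weight_decay) + 4.0) ** 2

result = gp_minimize(
    validation_error,
    dimensions=[(-5.0, -1.0),                    # log10 learning rate
                (1e-6, 1e-2, "log-uniform")],    # weight decay
    n_calls=20,
    random_state=0,
)
print("best hyperparameters:", result.x, "validation error:", result.fun)
```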
Why is an expert still needed for model selection? One might wonder why
an expert decision-maker cannot be replaced in this very idealistic
hypothetical scenario. This is because some metrics are unreliable (they
look good on paper, but a model that performs well on them might not be
what we want), and there are requirements on a model that are often hard
to quantify. An ML engineer might also be needed to keep the
pool of models up to date, including the latest innovations in ML. There
are also always new model architectures and general technologies that
appear. This pool needs to be constantly curated and updated to the
general needs of the users. These new models might also not be better
than previous ones on *all* criteria, just some of them (e.g., better
accuracy at the cost of less interpretability).
There might also be many criteria to adhere to. For example, we might be
interested in the performance, computational resources, fairness,
calibration, or explainability. Accuracy is not the only criterion, and
there is no *single* criterion. The single *best* model does not exist
in general, no matter how robust our pool of models is; and even if our
pool of models is robust, some models might perform (slightly) better in
exact deployment scenarios on certain metrics -- we want to squeeze out
performance. Model selection is not just an argmax. With multiple
criteria, it is often too difficult to put some weights on these metrics
and use thresholding. Automating model selection is, therefore, a
challenging problem with fundamental limitations.
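To make the "no single best model" point concrete, the following small sketch (with made-up scores and criteria) computes the Pareto-optimal subset of a candidate pool; even then, picking one model from the Pareto set remains a human decision:

```python
# Hypothetical candidate models scored on several criteria (higher is better).
models = {
    "A": {"accuracy": 0.92, "fairness": 0.70, "speed": 0.40},
    "B": {"accuracy": 0.89, "fairness": 0.85, "speed": 0.60},
    "C": {"accuracy": 0.85, "fairness": 0.90, "speed": 0.95},
    "D": {"accuracy": 0.84, "fairness": 0.80, "speed": 0.50},  # dominated by C
}

def dominates(a, b):
    """a dominates b if it is >= on every criterion and > on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

pareto = [name for name, s in models.items()
          if not any(dominates(o, s) for other, o in models.items() if other != name)]
print("Pareto-optimal models:", pareto)  # ['A', 'B', 'C'] -- no single winner
```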
Finally, an expert is always needed to *give the final word*. They must
make an executive decision and choose the best model based on the
business needs. When there are problems with a new model (e.g.,
fairness), a human must intervene and roll it back to a previous state.
**Note**: The expert discussed here does not have to be an ML expert.
The main decisions usually come from business executives.
### Once we "solve" OOD generalization\...
What happens if we "solve" OOD generalization (i.e., our models become
robust to distribution shifts)?
- Our model will work well even under new situations.
- MLOps will not be needed at the current scale. (However, model
selection and ML expertise will probably be needed for a long time.)
- Small businesses will be able to adopt ML more easily.
- ML can be extended to more risky applications because we can be sure
that it will work in novel situations, too.
- ML will drive the risky applications, e.g., the industry of
healthcare, finance, or transportation. Robust models gain trust.
However, we will see later that *explainability* is just as
important.
To summarize our introduction to OOD generalization and drive the key
points across:
- Distribution shifts are everywhere: the world changes over time, and
  models that only generalize ID require constant, costly retraining and
  maintenance (MLOps).
- Models that are robust to distribution shifts (OOD generalization)
  would reduce this cost, let smaller players adopt ML, and allow ML to
  be used in riskier applications.
## Formal Setup of OOD Generalization {#ssec:formal}
### Stages of ML Systems
To discuss a more formal setup of OOD generalization, let us first
consider two stages of ML systems: *development* and *deployment*.
::: definition
Development (dev) Development is the stage where we train our model and
make design choices (for hyperparameters) within some resource
constraints.
:::
::: definition
Deployment Deployment is the stage where our final model is facing the
real-world environment. This environment is called a *deployment
environment* and can change over time.
:::
::: definition
Training Training is the particular action of fitting the model's
parameters within the dev stage to the training set, with a fixed
hyperparameter setting.
We do not separate the training phase from the rest of the dev phase,
but we *do* separate dev from deployment.
:::
::: definition
Testing Testing is a lab setup designed to mimic the deployment scenario
closely -- scientists evaluate their final inventions on test benchmarks
and report their results.
**Practice point of view**:
- This is different from deployment and still a part of development.
- If we want to be precise: As soon as we have labeled samples
from deployment (and we make any design choices based on these
or just test our model), we are using information from the
deployment setup in dev. We cannot talk about true (domain or
task) generalization anymore. The deployment scenario should
stay fictitious and unobserved in such settings.
**Academia point of view**:
- The test set ([2.3.2](#ssec:splits){reference-type="ref"
  reference="ssec:splits"}) and the action of testing are treated as a
part of the deployment.
:::
The specification of these stages can be bundled into one *setting*.
::: definition
Setting/Setup A setting specifies the available resources (during
development) and an ML system's surrounding (deployment) environment.
**Essential components of a setting**:
- **Development resources**: What types of datasets, samples, labels,
supervisions, guidance, explanation, tools, knowledge, or inductive
bias are available?
- ML engineers are also resources. They have their own knowledge
to optimize an ML model the right way. If we have better
engineers with better intuition of what to do in a scenario, we
can train the model quicker and better.
- **Deployment environment**: What kind of distribution will our ML
model be deployed on?
- **Time**: Resource availability changes over time. The deployment
environment changes over time. We can only deploy after development,
but sometimes we keep developing after deployment.
:::
#### Example of a Setting
Consider an ID supervised learning setup. This is the idealized scenario
in which ML research started its exploration. Various strong results about
consistency, convergence rates, and error bounds can be given in this
setup [@jiang2019non; @NEURIPS2021_0e1ebad6] that break in OOD settings.
Our *development resources* are labeled $(X, Y)$ samples from
distribution $P$. Our *deployment environment* contains unlabeled
samples $X$ from distribution $P$ presented one by one. **Note**: This
is an incomplete description of the development resources and the
deployment environment that aims to drive the main points across. In
scientific papers, a much more thorough description is required.
We usually specify settings when we have an actual task we want to work
on, i.e., we have a *real-world scenario* at hand.
::: definition
Real-World Scenario A real-world scenario is a projection of a setting
onto a hypothetical or actual convincing real-world example. This is a
particular situation that fits the setting.
:::
#### Example of a Real-World Scenario
Consider an ID supervised learning setup again (the simplest case). Our
*task* is to build a system for detecting defects (e.g., dents) in
wafers (semiconductors, pieces of silicon used to create integrated
circuits) through image analysis. Our *development resources* contain a
dataset of wafer images with corresponding labels -- defective or
normal. In our *deployment environment*, the images are of the same
distribution, as the wafer products and camera sensors are identical
between the dev dataset and the data stream from deployment.
::: information
How to compare methods with different resources? We always want to
compare methods fairly. If one method uses more (or different)
development resources than another, a direct comparison of their
deployment performance is not fair; the resource budgets must be matched
or at least made explicit.
:::
### Dataset Splits in ML {#ssec:splits}
Next, we discuss different general dataset splits used in ML.
::: definition
Training Set The training set is a (usually large) collection of samples
whose purpose is to train the model.
**What is optimized?** Model parameters.
**What is the objective?** The training loss, possibly with
regularization.
**What is the optimization algorithm?** A gradient descent variant using
Tensor Processing Units (TPUs), or GPUs.
**How frequent are updates of the model?** $\cO$(milliseconds-seconds).
:::
::: definition
Validation/Dev Set
The purpose of the validation set is to roughly simulate the deployment
scenario by using samples the model has not seen yet and measure ID
generalization.
**What is optimized?** Model hyperparameters and design choices.
**What is the objective?** Generalization metrics.
- If we consider true OOD generalization without having access to the
target domain (i.e., not domain adaptation
([2.4.4](#sssec:da){reference-type="ref" reference="sssec:da"}) or
test-time training ([2.4.6](#sssec:ttt){reference-type="ref"
reference="sssec:ttt"})), we cannot measure OOD generalizability on
the validation set. Therefore, the validation set usually comes from
the same domain(s) as the training set. Otherwise, we would already
tune our hyperparameters on the domain we wish to generalize to;
thus, whether we measure OOD generalization on the test set later is
questionable. Such scenarios exhibit 'leakage', which we will cover
in Section [2.5.2](#sssec:leakage){reference-type="ref"
reference="sssec:leakage"}.
**What is the optimization algorithm?** For example, Bayesian
optimization, "Grad student descent", random search.
**How frequent are updates of the model?** $\cO$(minutes-days).
:::
::: definition
Test Set The test set is used to simulate the deployment scenario more
accurately than during validation by using samples from the distribution
we believe the model will face during deployment. The test set can,
therefore, also measure OOD generalization.
**What is optimized?** The methodology and overall approach through the
shift of the field.
- For example, the shift from CNNs towards ViTs.
- The line is unclear between the change of methodology and design
choices; this is more like a spectrum.
**What is the objective?** Generalization metrics.
- The test set can be any type of OOD dataset.
**What is the optimization algorithm?** Paradigm shifts, updating the
evaluation or the evaluation standard.
- As the field changes, the test set also changes. For example, for
ImageNet, many test sets are available (e.g., for generalization to
different OOD scenarios), and more have been added over time.
- We are setting a new goal for the field that many researchers will
follow.
- Standard refers to the benchmark, metric, or protocol according to
which we evaluate our models. (It has a close connection to the test
sets we use.)
**How frequent are updates of the model?** $\cO$(months-years).
- In the scale of months and years, methods are *meant to be
optimized* to the test set. The problems this optimization entails
are crucial to understand and are discussed in detail in
Section [2.3.3](#ssec:idealism){reference-type="ref"
reference="ssec:idealism"}.
- The test set must be updated to the user and societal needs over
time. Naturally, the training set and validation set also change
over time.
:::
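To make the division of labor between the splits concrete, here is a minimal sketch using scikit-learn on synthetic data (all names and numbers are placeholders): parameters are fit on the training set, hyperparameters are chosen on the validation set, and the test set is touched only once at the end.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)

# Split off a test set first and do not look at it during development.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

# Hyperparameters (here: regularization strength C) are chosen on the validation set.
best_C, best_val = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)  # fit parameters
    val_acc = model.score(X_val, y_val)
    if val_acc > best_val:
        best_C, best_val = C, val_acc

# The test set is used exactly once, with the already-chosen hyperparameters.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print("test accuracy:", final.score(X_test, y_test))
```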
### Why Idealists Cannot Evaluate on the Test Set {#ssec:idealism}
We measure accuracy on the test set because we wish to *compare* our
method to previous methods. This is an implicit way of choosing a model
over other methods, which is part of the methodology. Therefore, the
test set is still a part of development in practice in the most precise
sense.[^2]
Whenever we make any decisions based on test results (be it ours or
others'), we cannot measure generalizability on the test set anymore.
This is almost always violated in practice. However, there is no clear
workaround, as benchmarks are essential to progress in any field of ML
research. We can only "spoil the test set less," but we can never *not*
spoil it if we want to advance the field.
## Common Settings for OOD Generalization
There is no such thing as *the* OOD generalization setting. There are
many different scenarios for it. Let us first explain why
differentiating between these learning settings is important.
### Why are the learning settings important?
![Collage of different domain labels and corresponding images, taken
from [@https://doi.org/10.48550/arxiv.2003.06054]. Images of the same
kind of objects can be surprisingly different when considering different
domains.](gfx/02_domainlabels.pdf){#fig:domainlabels
width="0.6\\linewidth"}
Let us first define the notion of *domain labels*.
::: definition
Domain Label The domain label is an indicator of the source distribution
of each data point in the form of a categorical variable (e.g., dataset
name).
:::
**Example**: "MNIST" can be a domain label for an image from the MNIST
dataset [@lecun2010mnist]. Samples in different datasets are (almost
always) coming from different distributions. Other valid domain labels
include "Art Painting", "Cartoon", "Sketch", or "Real World", as
illustrated in Figure [2.4](#fig:domainlabels){reference-type="ref"
reference="fig:domainlabels"}.
Distinguishing various learning settings is of crucial importance for
the following reasons.
1. To figure out which techniques can be used for the given learning
scenario. We want to understand the given ingredients precisely and
know the relevant search keywords for googling the papers.
2. To compare against previous methods in the same learning setting. It
is key to enumerate the exact (and sometimes hidden) ingredients
used by a method and compare it only with methods that use the same
ingredients. Some authors may give misleading information about the
setting their method operates in. For example, if one claims to have
not used domain labels but has used some equivalent form of them, we
must be able to notice that and voice our concerns. Comparing
methods based on their ingredients is much more justified than
comparing based on the name of the settings the authors *claim* to
adhere to.
### ID generalization
For the sake of comparison, let us start with ID generalization. An
illustration of this setting is depicted in
Figure [2.5](#fig:id_ood){reference-type="ref" reference="fig:id_ood"}.
We have the same domain all the way through development and deployment.
During deployment, we get unlabeled samples to which we wish our model
to generalize.
![Illustration of the ID generalization setting (top) and the general
OOD generalization setting (bottom). OOD generalization showcases a
change of domain.](gfx/02_id_ood.pdf){#fig:id_ood width="\\linewidth"}
### Domain-Dependent OOD Generalization
A general view of this setting is shown in
Figure [2.5](#fig:id_ood){reference-type="ref" reference="fig:id_ood"}.
There are different domains for development and deployment. One needs to
generalize to the deployment domain. This is the most general setting
for OOD generalization. There are many names for these settings, and
their exact definitions fluctuate. Therefore, understanding the exact
ingredients of each setting matters much more than which name to attach
to it. For the purposes of this book, we will still go over the settings
and try to draw definite boundaries. We discuss
different categorizations of OOD generalization settings below.
#### Categorizing according to the nature of the difference between dev and deployment
- In *cross-domain* generalization, the deployment environment
contains completely unseen cues in dev.
- In *cross-bias* generalization, deployment contains unseen
compositions of seen cues in dev.
- *Adversarial* generalization considers a (real/hypothetical)
adversary in deployment who tries to choose the worst-case domain.
#### Categorizing according to the extra information provided to address the ill-posedness
- In *domain generalization*, domain labels are provided.
- In *domain adaptation*, some (un-)labeled target domain samples are
available in dev.
- *Test-time training*'s dev continues even after deployment. We get
access to deployment (target domain) samples. We may or may not
label them.
- *Domain-incremental continual learning* considers a single domain
during dev. Domains are added over time during deployment.
These settings all come with different ingredients, and one should not
compare methods across different settings. The two axes of variation
above are independent.
### Domain Adaptation {#sssec:da}
![Domain adaptation setting. The development stage also comprises
samples from domain 2.](gfx/02_da.pdf){#fig:da width="\\linewidth"}
Domain adaptation is illustrated in
Figure [2.6](#fig:da){reference-type="ref" reference="fig:da"}. During
the dev stage, we have access to some labeled or unlabeled samples
(depending on the exact situation) from the deployment environment. We
can, e.g., align our features with the target domain statistics using
moment matching.
#### Moment Matching in Domain Adaptation
![Feature embedding distributions can be notably different between the
domains available in the development stage of domain adaptation. The
figure shows Gaussians fit to domain-wise empirical feature
distributions of samples from different domains for feature alignment in
domain adaptation using moment matching: we aim to match these Gaussians
among domains. $f$: feature, $d$:
domain.](gfx/02_alignment.pdf){#fig:dist}
An example of domain-wise feature distributions is illustrated in
Figure [2.7](#fig:dist){reference-type="ref" reference="fig:dist"}.
These can, e.g., correspond to the penultimate layer's sample-wise
activations in a ResNet-50. We represent the empirical distribution of
the domain-wise feature values by their expectation and variance
(approximated by the sample mean and variance). For domain 2, we have a
few unlabeled samples that can be used for this computation. We place,
e.g., an $L_2$ penalty on the differences between domain 1 and 2
statistics (sample mean and variance in our example) of intermediate
features, or we can also consider the Wasserstein distance between the
Gaussians as a penalty. During training, for samples from domain 1, we
compute the task loss (e.g., cross-entropy) and the penalty term. For
samples from domain 2, we only compute the penalty, as there are no
labels for these samples. We only backpropagate gradients of the penalty
through the labeled domain 1 samples and use the unlabeled samples only
to calculate the penalty. This way, we only directly train on domain 1
but adapt our model based on domain 2 samples to generalize to domain 2.
This tends to give us a small amount of improvement in robustness.
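A minimal PyTorch sketch of the moment-matching idea described above, with random tensors standing in for real data, a toy feature extractor, and an assumed penalty weight: the task loss uses only labeled domain-1 samples, while the penalty aligns domain-1 feature statistics with the (detached) statistics of unlabeled domain-2 samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

feature_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 3)
opt = torch.optim.Adam(list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-3)

x1, y1 = torch.randn(128, 32), torch.randint(0, 3, (128,))  # labeled domain-1 batch
x2 = torch.randn(128, 32)                                   # unlabeled domain-2 batch
lam = 0.1                                                    # penalty weight (assumption)

f1, f2 = feature_net(x1), feature_net(x2)
task_loss = nn.functional.cross_entropy(classifier(f1), y1)

# Align the first two moments of the domain-wise feature distributions.
# Domain-2 statistics are detached: gradients flow only through domain-1 features.
mu1, var1 = f1.mean(dim=0), f1.var(dim=0)
mu2, var2 = f2.mean(dim=0).detach(), f2.var(dim=0).detach()
penalty = ((mu1 - mu2) ** 2).sum() + ((var1 - var2) ** 2).sum()

loss = task_loss + lam * penalty
opt.zero_grad()
loss.backward()
opt.step()
```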
### Domain Generalization {#ssec:domain}
![Domain generalization setting. We have access to multiple domains
during the development stage, but we have to generalize to a novel,
unseen domain in the deployment stage.](gfx/02_dg.pdf){#fig:dg
width="\\linewidth"}
An overview of domain generalization is given in
Figure [2.8](#fig:dg){reference-type="ref" reference="fig:dg"}. During
the dev stage, we have access to labeled samples from multiple domains.
We also know the domain label for every sample. Knowing domain labels is
usually a hidden assumption; not many papers talk about this. If we do
not know the domain labels, there are techniques for detecting the
domains without them, but these are never perfect and come with
additional assumptions.
#### Moment Matching in Domain Generalization
We can "unlearn" domain-related characteristics in our representation by
performing *moment matching* similarly to domain adaptation, but now
between all domains available in the development stage. Similarly to the
domain adaptation case, we compute the sample mean and variance
separately for each domain as we have domain labels.[^3] We fit
Gaussians to the features of samples from different domains. We align
the Gaussians for different domains by, e.g., placing an $L_2$ penalty
on the pairwise differences between their corresponding means and
covariance matrices. We backpropagate gradients through all samples,
using both the task labels and domain labels. If we succeed, we ignore
differences among domains in the training set based on moments. We hope
that the model becomes independent of domain information (of any kind),
so it will probably work well on the next (unknown) domain.
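The domain generalization variant reuses the same idea. A minimal sketch of the pairwise penalty over a list of per-domain feature batches is given below; for brevity it matches diagonal variances rather than full covariance matrices, and gradients flow through all domains, matching the description above.

```python
import torch

def pairwise_moment_penalty(domain_features):
    """L2 penalty between the means and (diagonal) variances of every pair of domains.
    domain_features: list of (batch, feature_dim) tensors, one per domain."""
    stats = [(f.mean(dim=0), f.var(dim=0)) for f in domain_features]
    penalty = 0.0
    for i in range(len(stats)):
        for j in range(i + 1, len(stats)):
            mu_i, var_i = stats[i]
            mu_j, var_j = stats[j]
            penalty = penalty + ((mu_i - mu_j) ** 2).sum() + ((var_i - var_j) ** 2).sum()
    return penalty

# Example with three development domains (random stand-in features).
feats = [torch.randn(64, 16) for _ in range(3)]
print(pairwise_moment_penalty(feats))
```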
### Test-Time Training {#sssec:ttt}
![Test-time training setting. The development stage continues in the
deployment stage.](gfx/02_ttt.pdf){#fig:ttt width="\\linewidth"}
Test-time training is shown in
Figure [2.9](#fig:ttt){reference-type="ref" reference="fig:ttt"}. After
training our model, we keep updating it according to the labeled or
unlabeled samples (depending on the exact setting) from the deployment
environment. Thus, dev continues even into the deployment because our
model keeps being updated. We might not do labeling in domain 2, but it
helps to have access to incoming domain 2 samples and correct the
feature model on the fly (e.g., by performing moment matching).
This paradigm is becoming more popular these days. A key figure of the
approach is Alexei Efros.
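As a minimal sketch of one simple flavor of this idea (not the specific works mentioned above): re-estimating batch-normalization statistics on unlabeled deployment batches, which amounts to a lightweight on-the-fly moment correction of the feature model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 3))
model.eval()  # everything frozen by default at deployment

# Put only the BatchNorm layers back into training mode so their running
# statistics adapt to the incoming (unlabeled) deployment distribution.
for m in model.modules():
    if isinstance(m, nn.BatchNorm1d):
        m.train()

with torch.no_grad():                      # no labels, no parameter updates
    for _ in range(10):                    # stream of deployment batches (stand-in data)
        x_deploy = torch.randn(128, 32)
        model(x_deploy)                    # forward pass updates BN running stats

model.eval()                               # predict with the adapted statistics
```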
### Domain-Incremental Continual Learning
![Domain-incremental continual learning. New domains are added over time
in the deployment stage.](gfx/02_cldi.pdf){#fig:cldi
width="\\linewidth"}
An overview of the domain-incremental continual learning setting is
given in Figure [2.10](#fig:cldi){reference-type="ref"
reference="fig:cldi"}. We train on a single domain before deployment.
Domains are added over time during deployment. We label a few samples
over time and update our model on the way. Only the labeled samples are
used for improving our model. We hope that the model also does not
forget previous domains. Performance should remain as high as possible
for previous domains as well. Data keeps coming from all deployment
domains, but we must adapt quickly to the new domain.
### Task-Dependent OOD Generalization
In general, the task being different is a lot harder than the domain
being different. Usually, a different task also means a different
domain.[^4]
![Comparison of ID generalization and zero-shot learning. Zero-shot
learning aims to tackle a novel, unseen task in the deployment
stage.](gfx/02_zeroshot.pdf){#fig:zeroshot width="\\linewidth"}
So far, the task stayed the same across development and deployment.
However, the task can also change over time. The best-known scenario of
this is *zero-shot learning*, which is compared to ID generalization in
Figure [2.11](#fig:zeroshot){reference-type="ref"
reference="fig:zeroshot"}. In ID generalization, the task stays the
same. In zero-shot learning, we have a different task for deployment
about which we have no information in dev.
#### Large Language Models and Zero-Shot Learning
Large Language Models (LLMs) are capable of performing zero-shot
learning, e.g., via zero-shot chain-of-thought (zero-shot-CoT)
prompting [@https://doi.org/10.48550/arxiv.2205.11916]. They can encode
the task description in
natural language, so there is sufficient information for the model to
solve the problem in principle. It is, however, still fascinating how
LLMs can figure out how to solve new kinds of tasks that were never
presented to them before and that are products of human creativity.
Nevertheless, we almost never have any guarantees about benchmarks truly
being zero-shot for LLMs -- their datasets are *huge*, and we can never
be certain that the model did not have the same task in its training
dataset. [CLIP](https://openai.com/research/clip) also has zero-shot
learning capabilities.
#### Categorizing according to which tasks are available at the development and deployment stages
- In *ID generalization*, the task stays the same in dev and
deployment.
- In *zero-shot learning*, during deployment, we are faced with a new
task not present in dev.
- *$K$-shot learning* gives a "softened" setting where we have $K$
labeled samples of the deployment task in dev.
- *Meta-learning* has different tasks available during dev. This can
also be combined with $K$-shot learning.
### $K$-Shot Learning
![$K$-shot (few-shot) learning setting. $K$ samples are available from
task 2 during the development stage to aid the model towards robust
generalization.](gfx/02_kshot.pdf){#fig:kshot width="\\linewidth"}
$K$-shot learning is illustrated in
Figure [2.12](#fig:kshot){reference-type="ref" reference="fig:kshot"}.
People try to make zero-shot learning easier by introducing some labeled
samples for the target task during development. The $K$ samples per
class are for the target task. We learn to fit our model to the
deployment task using a large number of task 1 samples and a few
($K \times \#\text{class}$) task 2 samples.
**Example 1**: ImageNet pretraining followed by fine-tuning on a
downstream task.
**Example 2**: Linear probing in Self-Supervised Learning (SSL). Here,
we do not even need labeled samples for task 1. We train a strong
feature representation in a self-supervised fashion, then we train a
linear classifier (the probe) on top of the frozen features using
labeled task 2 data.
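A minimal sketch of the linear probing idea in Example 2, with a stand-in frozen feature extractor (in practice this would be a self-supervised pretrained network) and random placeholders for the $K$ labeled task-2 samples:

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Stand-in for a frozen, (self-supervised) pretrained feature extractor.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# A few labeled task-2 samples (K samples per class; random stand-ins here).
x2, y2 = torch.randn(50, 32), torch.randint(0, 5, (50,))

with torch.no_grad():
    feats = backbone(x2).numpy()

# Linear probe: only a linear classifier is trained on top of the frozen features.
probe = LogisticRegression(max_iter=1000).fit(feats, y2.numpy())
print("train accuracy of the probe:", probe.score(feats, y2.numpy()))
```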
### Meta-Learning + $K$-Shot Learning
![Meta-learning + $K$-shot learning setting. Multiple (proxy) tasks are
available in the development stage. We further have access to $K$
samples per class from the deployment stage
task.](gfx/02_metakshot.pdf){#fig:metakshot width="\\linewidth"}
Meta-learning can be combined with $K$-shot learning, as shown in
Figure [2.13](#fig:metakshot){reference-type="ref"
reference="fig:metakshot"}. In this case, we have multiple tasks during
development, and we wish to learn features that generalize across tasks,
but we still need samples from the target task. In essence, we "learn to
learn a new task" with tasks 1-3 (that give rise to a compound task). We
then adapt our model to the deployment task 4, using the $K$ samples per
class for task 4.[^5]
### Task-Incremental Continual Learning
![Task-incremental continual learning setting. The development stage
continues in deployment, adding new tasks over
time.](gfx/02_clti.pdf){#fig:clti width="\\linewidth"}
Here, we consider a task-incremental version of continual learning,
which is illustrated in Figure [2.14](#fig:clti){reference-type="ref"
reference="fig:clti"}. Tasks are added over time. We label only a few
samples over time. (We can only utilize these.)[^6] We update our model
on the way. Ideally, the model should not forget the previous task.
## ML Dev as a Closed System of Information
To better illustrate the flow of information in the ML development
stage, we draw a parallel between it and a closed thermodynamic system.
This is illustrated in Figure [2.15](#fig:thermo){reference-type="ref"
reference="fig:thermo"}.
![Comparison between a closed thermodynamic system [@wikithermo] (left)
and an ML development system (right).](gfx/02_thermo.pdf){#fig:thermo
width="\\linewidth"}
Here, dev is represented as a closed system that consists of four main
parts: *dataset*, *annotation*, *inductive bias* and *knowledge*.
Inductive bias can appear in the form of the model architecture or the
way we pre-process the data. A good example of knowledge is the
expertise of people with much experience in training neural networks
(NNs).
In a closed lab environment where no dataset, annotation, or anything
else is given to the system, there should be no additional information
that suddenly appears. We should not expect new information to be born
out of this system. Equivalently: There is no change in the maximal
generalization performance we can get out of this system. There are lots
of papers that *violate* this principle (see
[2.5.2](#sssec:leakage){reference-type="ref"
reference="sssec:leakage"}) [@DBLP:journals/corr/abs-2007-01434; @DBLP:journals/corr/abs-2007-02454].
Note that it is possible to *kill* information by, e.g., averaging
things or replacing measurements with summary statistics.
### Information Leakage from Deployment
::: definition
Information leakage *Information leakage* refers to the situation where
information intended exclusively for the deployment stage becomes
accessible during the development stage. It is an influx of information
into a closed system.
:::
Let us consider information leakage in the domain generalization
setting, illustrated in
Figure [2.16](#fig:closedsystem){reference-type="ref"
reference="fig:closedsystem"}. When one defines domain generalization as
in Section [2.4.5](#ssec:domain){reference-type="ref"
reference="ssec:domain"}, information about domain 4 must not be
available during development. That is, we cannot inject new information
into this closed ML system that comprises the resources *at dev stage*.
If we *do* inject new information, we have to treat it as a new setting:
When information about domain 4 is available, we cannot call it a domain
generalization setup anymore. This also means we cannot compare against
previous domain generalization methods. We need to set up a new setting,
build a new benchmark, and compare against methods with the same
setting.[^7]
![Closed ML system in the development stage for the domain
generalization setting. Information about domain 4 must not be available
in this closed system.](gfx/02_closed_system.pdf){#fig:closedsystem
width="\\linewidth"}
#### Examples of Information Leakage in the Domain Generalization Setting
Consider the domain generalization setting
from [2.4.5](#ssec:domain){reference-type="ref"
reference="ssec:domain"}. There are several ways in which information leakage
can surface and spoil our results.
**Scenario 1.** Some hyperparameters are chosen based on labeled samples
from domain 4. In a sense, our dev set is partly taken from domain 4. We
cannot talk about true generalization.
**Scenario 2.** Some hyperparameters are chosen by visually inspecting
domain 4. This is still information leakage, just in a less automated
way.
**Scenario 3.** The model is trained on labeled samples from domains 1-3
and unlabeled samples from domain 4. In the particular case of only two
domains -- domain 1 in the development stage and domain 2 in the
deployment stage -- and labeled or unlabeled samples being used from
domain 2, we are performing domain adaptation, not domain
generalization.
**Scenario 4.** Some hyperparameters are chosen to maximize publicly
available scores after evaluation on some benchmarks with domain 4.
Strictly speaking, these scores contain information about domain 4. One
way to overcome this information leakage is to provide only a ranking of
methods but not the scores.
#### Information Leakage from Domain Generalization Evaluation
Let us consider the particular problem of evaluating domain
generalization methods in a bit more detail. It makes perfect sense to
have a few labeled samples from deployment because we have to evaluate
our system on a new domain anyway. Still, strictly speaking, as soon as
we evaluate our model on the new domain, we *use* our target domain
(domain 4), so we cannot talk about generalization. We may need to shift
the definition of domain generalization into something that allows some
validation in the target domain (like domain adaptation). **Evaluating
or benchmarking domain generalization is, therefore, contradictory.**
Researchers still evaluate their methods on domain generalization
benchmarks (observing the test set corresponding to the deployment
domain multiple times through their lifetime), as we need to monitor
progress somehow.
#### Information Leakage from Pretraining {#sssec:test}
Another interesting question arises if we consider pretrained models. As
soon as there is a pretrained system in dev resources, we introduce not
just a single model but the entire pretraining dataset (which is
gigantic in the case of large language models (LLMs) or
CLIP [@https://doi.org/10.48550/arxiv.2103.00020]). Much expertise is
put into the dev scenario, which is also a dev resource. The consequence
is that it is hard to say that anything is zero-shot learning in such
settings. For example, if we give a new task to an LLM, we can never be
sure that the model has never seen that task during training. The use of
ImageNet-1K pretraining for zero-shot learning is also criticized: the
1k classes contain much information, and for certain classes we evaluate
our model on, we do not have true zero-shot learning at all.
#### When is information leakage not a problem?
The other side of the argument about the severity of information leakage
is that it might not always matter whether something is zero-shot
learning. If an LLM contains all information about the world, then there
is nothing new in the deployment stage, so we cannot have true zero-shot
learning. Nevertheless, the model works very well, so we can make good
use of it in many real-world scenarios. Take the example of face
recognition. Suppose there is a person our system has never seen before.
We want it to recognize (or verify) that this is the same person on
subsequent days. This setup is zero-shot verification. However, as soon
as our training set contains billions of identities, this does not
matter anymore. In the most extreme case of having seen all people in
the world, we do not have to generalize to unseen people. Nevertheless,
we are still happy with the system if it works well for everyone.
Zero-shot learning thus becomes less meaningful at a large scale.
### A Case Study on Information Leakage {#sssec:leakage}
We consider an example of a paper that is leaking information from
deployment, titled "[Self-Challenging Improves Cross-Domain
Generalization](https://arxiv.org/abs/2007.02454)" [@DBLP:journals/corr/abs-2007-02454].
It is straightforward to find such papers, even from highly regarded
research groups.
::: definition
Ablation Study We are changing one factor at a time in our method, as we
want to see the contribution of each factor towards the final
performance. Everything else is kept fixed. Then we can understand the
effect of the factor better, and we can also optimize that factor
(hyperparameter) separately.
:::
::: {#tab:reftab}
**Feature Drop Strategies** **backbone** **artpaint** **cartoon** **sketch** **photo** **Avg $\boldsymbol{\uparrow}$**
----------------------------- -------------- -------------- ------------- ------------ ----------- ---------------------------------
Baseline ResNet18 78.96 73.93 70.59 **96.28** 79.94
Random ResNet18 79.32 75.27 74.06 95.54 81.05
Top-Activation ResNet18 80.31 76.05 76.13 **95.72** 82.03
Top-Gradient ResNet18 **81.23** **77.23** **77.56** 95.61 **82.91**
: Benchmark results of various feature drop strategies. Explanation of
columns: e.g., for the art painting column, we train the model on
{cartoon, sketch, photo} and test it on art painting. Table taken
from [@DBLP:journals/corr/abs-2007-02454].
:::
In Tables 1-5 of [@DBLP:journals/corr/abs-2007-02454], an ablation study
is conducted. We show Table 1 of the paper in
Table [2.1](#tab:reftab){reference-type="ref" reference="tab:reftab"}
for convenience. We see various hyperparameters chosen based on the
performance on the domain they want to generalize to. (For example, the
"Feature Drop Strategy" hyperparameter considers different ways to drop
features to make the model better regularized.) They are looking at the
generalization performance to each of the domains using leave-one-out
domain generalization. They finally choose the hyperparameters based on
the average accuracy on the left-out domains. If we also validate on the
test set, we cannot talk about domain generalization anymore, as we have
information leakage. (Even if we consider the academic point of view of
the test set belonging to deployment.)
This hyperparameter configuration will be pretty good for the PACS
dataset [@DBLP:journals/corr/abs-1710-03077] (see below). However, this
does not guarantee that this is the best ingredient for non-PACS cases.
We might overfit to PACS severely by making such choices. (Of course,
this overfitting can also happen even if we do not use the test set as a
part of the validation set, but rather as a criterion for method
selection across different papers. However, that is a much less severe
case of overfitting. Here, the authors make use of the test set many
times in a *single* paper.)
**Takeaway**: Ablation studies are generally great for ID generalization
tasks, but one should be very careful with ablation studies for OOD
generalization.
### Solutions to Information Leakage
There are many (partial) solutions to combat information leakage,
discussed below.
**Select hyperparameters within dev resources.** Section 3 of "[In
Search of Lost Domain
Generalization](https://arxiv.org/abs/2007.01434)" [@DBLP:journals/corr/abs-2007-01434]
discusses information leakage and provides possible solutions for it.
Selecting hyperparameters, design choices, checkpoints, and other parts
of the system must be a part of the learning problem (i.e., part of the
ML dev system). When we propose a new domain generalization algorithm,
we must specify a method for selecting the hyperparameters rather than
relying on an unclear methodology that invites potential information
leakage.
**Use the test set once per project.** By using a specific test set
multiple times, we can always overfit to it. If our goal is to go
towards a distribution outside dev, then evaluating on the test set
multiple times can be harmful. However, as discussed previously, we *do*
have to use it multiple times to compare methods and evaluate our
approach. Solution: At least do not use the test set for hyperparameter
tuning; tune them on the validation set. For example, if we want to
generalize well to the art painting dataset of PACS, tune the
hyperparameters on {cartoon, sketch, photo}. Then we measure performance
on the art painting test set. We will do the same thing for a new,
genuinely unknown domain in deployment: find the hyperparameters on the
known domains. Thus, one should use the test set sparingly. A good rule
of thumb might be to use it once per paper. This way, we are less likely
to overfit to it. (The State-of-the-Art (SotA) architectures are also
likely to overfit to standard benchmarks, e.g., to
ImageNet-1K. [@https://doi.org/10.48550/arxiv.1902.10811])
**Update benchmarks.** Even if the test set labels are unknown, we can
overfit to the test set just based on the reported performance, e.g., on
leaderboards. Thus, for many reasons, the test set has to be changed
every once in a while. Another helpful idea is to use a non-fixed
benchmark, where the data stream changes over time. For comparability,
this is an issue: we have a continuously changing target over time.
However, it is usually not problematic: In human studies, researchers
have been dealing with a changing evaluation set. By using statistical
tests, they could always argue about statistical significance. One
particular example is the case of *clinical trials*. It is physically
impossible to test two related drugs on exactly the same set of people:
The test set changes from experiment to experiment. However, statistical
tests give a principled way to determine if the observed changes are
significant or if they could have happened by chance.
**Modify evaluation methods.** A different approach is to use a
differential-privacy-based evaluation method. Such a method adds Laplace
noise to the accuracy before reporting it to the practitioner. This is
better than overfitting to a single benchmark.
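A minimal sketch of the noisy-reporting idea; the noise scale below follows the standard Laplace mechanism (sensitivity over a privacy budget $\varepsilon$), but the concrete numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def report_accuracy(true_accuracy, n_test, epsilon=1.0):
    # Accuracy changes by at most 1/n_test when one test point changes,
    # so the Laplace scale is sensitivity / epsilon.
    sensitivity = 1.0 / n_test
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_accuracy + noise

print(report_accuracy(0.873, n_test=10_000))  # the practitioner only sees the noisy value
```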
**Summary**: We are trying to address an impossible problem: to truly
generalize to new domains, which requires them to be previously unseen.
We can never keep them completely unseen, as then we cannot *measure*
generalizability. However, as soon as we measure generalizability, we
cannot talk about generalization anymore. This is an unavoidable dilemma
for many ML fields, even more so for Trustworthy Machine Learning (TML),
as they deal with more challenging cases of generalization where
evaluation is very tricky.
## Domain Generalization Benchmarks
![Training (yellow) and test (blue) datasets in the domain
generalization setting. Shape is the task, color is the
domain.](gfx/02_dg_example.pdf){#fig:dg2 width="0.5\\linewidth"}
::: definition
Subpopulation Shift Benchmarks In subpopulation shift benchmarks, we
consider test distributions that are subpopulations of the training
distribution and seek to perform well even on the *worst-case*
subpopulation.
:::
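A minimal sketch of how the worst-case subpopulation performance can be computed, given (hypothetical) per-sample group labels and stand-in predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in predictions, labels, and subpopulation (group) labels.
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)  # ~90% accurate overall
groups = rng.integers(0, 4, size=1000)                          # 4 subpopulations

per_group = {g: (y_pred[groups == g] == y_true[groups == g]).mean()
             for g in np.unique(groups)}
print("per-group accuracy:", per_group)
print("worst-group accuracy:", min(per_group.values()))  # the subpopulation-shift metric
```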
In the previous section, we highlighted the importance of paying special
attention to benchmarks in the domain generalization setting. Now, let
us take a closer look and discuss some prominent examples in more
detail. Some of these benchmarks will be related to the problem of
subpopulation shift, which is partially connected to domain
generalization.
### Examples of Domain Generalization Benchmarks
A toy domain generalization problem is shown in
Figure [2.17](#fig:dg2){reference-type="ref" reference="fig:dg2"}. Our
*goal* is to generalize well to blue images: This is a particular
instance of cross-domain generalization. The *inputs* are images with
mono-colored objects of some shape. The *labels* are $\{0, 1, 2\}$ -- we
have a three-way classification problem. The set of classes is shared
across the domains and is assigned according to the object's *shape*
(circle, triangle, or square). The model's *task* is to predict the
label of a given input. Here, we consider three domains, each defined by
a different object color (e.g., the yellow training domain and the blue
test domain in Figure [2.17](#fig:dg2){reference-type="ref"
reference="fig:dg2"}). This is not a real-world problem, but we use it
to illustrate the scheme shared among the subsequent benchmarks.
![The PACS dataset can be used for domain generalization. Figure taken
from [@https://doi.org/10.48550/arxiv.1710.03077].](gfx/02_pacs.pdf){#fig:pacs
width="0.6\\linewidth"}
#### The PACS Dataset
The PACS dataset considers four domains: **P**hotos, **A**rt Paintings,
**C**artoons, and **S**ketches. Samples from each domain are shown in
Figure [2.18](#fig:pacs){reference-type="ref" reference="fig:pacs"}. The
set of classes is shared across the domains. To benchmark domain
generalization, leave-one-out evaluation is used. For example, one might
train on **PAC** and test on **S**.
#### DomainBed
DomainBed is a combination of some popular domain generalization
benchmarks into a single suite. It subsumes, e.g., PACS, Colored MNIST,
Rotated MNIST, and Office-Home. Office-Home contains [four
domains](https://paperswithcode.com/dataset/office-home). These are (1)
*art* that contains artistic images in the form of sketches, paintings,
ornamentation, and other styles; (2) *clipart* that is a collection of
clipart images; (3) *product* that contains images of objects without a
background; and (4) *real-world* that collects images of objects
captured with a regular camera. For each domain, the dataset contains
images of 65 object categories found typically in Office and Home
settings. Samples from each subsumed benchmark are shown in
Figure [2.19](#fig:domainbed){reference-type="ref"
reference="fig:domainbed"}.
![The DomainBed suite. Illustration taken
from [@https://doi.org/10.48550/arxiv.2007.01434].](gfx/02_domainbed.png){#fig:domainbed
width="0.6\\linewidth"}
#### The Wilds Benchmark
The Wilds benchmark [@pmlr-v139-koh21a] comprises several tasks and
domains for each task. It contains domain generalization benchmarks and
also subpopulation shift benchmarks. A detailed illustration of the
dataset is shown in Figure [2.20](#fig:wilds){reference-type="ref"
reference="fig:wilds"}.
![The Wilds suite. Figure taken
from [@stanfordai].](gfx/02_wilds.png){#fig:wilds width="\\linewidth"}
#### ImageNet-C
ImageNet-C [@hendrycks2019robustness] is an extension to the ImageNet
dataset [@deng2009imagenet] with a focus on robustness. For the same
image, the dataset contains various corruptions. Corruptions include
Gaussian Noise, Defocus Blur, Frosted Glass Blur, Motion Blur, Zoom
Blur, JPEG Encoding-Decoding, Brightness Change, and Contrast Change.
Examples of these corruption types are shown in
Figure [2.21](#fig:imgnetc){reference-type="ref"
reference="fig:imgnetc"}. The ImageNet-C dataset consists of 75
corruptions, all applied to the ImageNet test set images. It simulates
possible corruptions under the deployment scenario, thereby measuring
the robustness of the model to the perturbation of the data generating
process.
![Illustration of the various corruptions ImageNet-C employs, taken
from [@hendrycksgithub].](gfx/02_imgnetc.png){#fig:imgnetc
width="0.6\\linewidth"}
#### ImageNet-A
ImageNet-A [@hendrycks2019nae] collects common failure cases of the
PyTorch ResNet-50 [@https://doi.org/10.48550/arxiv.1512.03385] on
ImageNet.[^8] It contains images that classifiers should be able to
predict correctly but cannot. Examples from ImageNet-A are shown in
Figure [2.22](#fig:imgnetao){reference-type="ref"
reference="fig:imgnetao"}.
![Sample from the ImageNet-A and ImageNet-O datasets, taken
from [@hendrycks2019nae].](gfx/02_imgnetao.pdf){#fig:imgnetao
width="0.4\\linewidth"}
#### ImageNet-O
ImageNet-O is another extension to ImageNet that contains anomalies of
unforeseen classes which should result in low-confidence predictions, as
the true class labels are not ImageNet-1K labels. ImageNet-O examples
are shown in Figure [2.22](#fig:imgnetao){reference-type="ref"
reference="fig:imgnetao"}.
## Domain Generalization Difficulties
We have discussed how easy it is to confuse a setting with domain
generalization just by not being careful enough with how one uses
information about the target distribution. For those who are ready to
accept this difficulty, we would like to point out that there are even
more complications with domain generalization. However, we hope that
these difficulties will not be an obstacle but rather an invitation to
challenge, which is why we gathered the most important ones in this
section.
### Ill-Defined Behavior and Spurious Correlations {#ssec:spurious}
![Diagonal training dataset and unbiased test set in cross-domain
generalization. During training, the model is only exposed to samples
where the shape and color labels
coincide.](gfx/02_diagdomain.pdf){#fig:diagdomain
width="0.5\\linewidth"}
Consider Figure [2.8](#fig:dg){reference-type="ref" reference="fig:dg"}.
For this setting, we outline two main difficulties: the ill-defined
behavior on novel domains and the spurious correlations between task
labels and domain labels.
#### Ill-Defined Behavior on Novel Domains
The model does not know what to do in regions without any training data.
One could ask how domain generalization is even possible. It works in
practice, but there are no rigorous theories as to why. We take it at
face value, without any guarantees of the model's behavior on novel
domains. This problem can be addressed through calibrated epistemic
uncertainty estimation ([4.2.3](#ssec:epi){reference-type="ref"
reference="ssec:epi"}) to make the model "know when it does not know".
#### Spurious Correlations between Task Labels and Domain Labels
::: definition
Spurious correlation A spurious correlation is the co-occurrence of some
cues, features, or labels, which happens in the development stage but
not in the deployment stage.
:::
For example, our prediction of shape may depend a lot on the color. If
we have a diagonal dataset, this can become a huge problem, as depicted
in Figure [2.23](#fig:diagdomain){reference-type="ref"
reference="fig:diagdomain"}. Here, we have a perfect correlation between
the two cues in the training dataset. In other words, there are spurious
correlations between *task labels* and *domain labels*. This results in
an ill-defined behavior on novel domains.
The problem of spurious correlations is also present in cross-bias
generalization. We will consider this setting as it is easier than
domain generalization, and ill-definedness is out of the picture.
## Cross-Bias Generalization
We will now discuss cross-bias generalization from Table
[\[tab:gentypes\]](#tab:gentypes){reference-type="ref"
reference="tab:gentypes"} that has a particular focus on the problem of
spurious correlations. As seen before, we can amplify the spurious
correlation between domain (bias) and target label (task) for OOD
generalization to arrive at a scenario like
Figure [2.23](#fig:diagdomain){reference-type="ref"
reference="fig:diagdomain"}. We also remove the issue with unseen
attributes: a model is guaranteed to encounter each attribute (e.g.,
possible shapes, colors) at least once, but in a heavily correlated
fashion.
![Cross-bias generalization setting with an unbiased deployment domain.
In the deployment stage, the model has to do well on samples where the
correlation between color and shape is broken.](gfx/02_cbg.pdf){#fig:cbg
width="\\linewidth"}
This leads us to textbook cross-bias generalization, a cleaner setup for
addressing the spurious correlations, for which an overview is given in
Figure [2.24](#fig:cbg){reference-type="ref" reference="fig:cbg"}. In
the test set, we have to recognize a diverse set of combinations of cues
that we have not seen during training.
In general, the situation could be better described as follows: the
training data may still be dominated by the diagonal, but every single
subgroup (i.e., every (color, shape) combination) is required to reach a
similar level of performance. This requirement is roughly equivalent to
having an equal number of samples in each grid cell of the test set. It
is also possible that the deployment scenario is *still biased*, just
with a different bias. We
impose no restrictions on the deployment distribution.
::: information
Compositionality and Cross-Bias Generalization Cross-bias generalization
has close ties to
compositionality [@andreas2019measuring; @lake2014towards] that aims to
disentangle semantically different parts of the input in the
representation of neural networks. If a network leverages
compositionality, i.e., treats semantically independent parts of the
input independently when making a prediction, spurious correlations
cannot arise by definition. This leads to robust cross-bias
generalization. Of course, achieving this in practice is much more
complicated.
:::
### Why is cross-bias generalization still challenging?
ID generalization is already an ill-posed problem. The No Free Lunch
Theorem states that without extra inductive bias in the dev scenario, we
cannot train a model that generalizes to the same distribution. We need
inductive biases to find well-generalizing models ID. Without inductive
biases, any model is equally likely to generalize well
ID [@wolpert1997no; @mitchell1980need].
OOD generalization (in particular, cross-bias generalization) poses
another layer of difficulty: the *ambiguity of cues*, discussed next. We
need further information in the ML dev system to solve it.
### The Feature Selection Problem
We mentioned that the ambiguity of cues brings an additional challenge
to cross-bias generalization. We would like to formally define this
ambiguity.
::: definition
Underspecification An ML setting is underspecified when multiple
features (e.g., color, shape, scale) let us achieve 100% accuracy on the
training set. The training set does not specify what kind of cue the
model should be looking at and how to generalize to new samples that do
not have a perfect correlation. If the model chooses the incorrect cue,
we say a *misspecification* happens.
**Note**: We assume a network with very high capacity that can get 100%
accuracy for every cue in the training set. For complex cues, the
decision boundary tends to be wiggly, but under our assumption, even
this decision boundary can be learned.
:::
Underspecification in the cross-bias generalization setting necessitates
the selection of the suitable feature(s) for good generalization to the
deployment scenario.
A model under the vanilla OOD (e.g., cross-bias) generalization setting
with a diagonal dataset lacks the information to generalize to an
arbitrary deployment task well. When predicting in the deployment
scenario (considering an uncorrelated dataset), the model cannot
simultaneously use all perfectly aligned cues on the training set, as
they contradict each other. Any cue the model adopts from training could
be correct; the answer depends on the deployment task (chosen by a
human, e.g., they can choose the most challenging cue for the model),
which is arbitrary out of the perfectly correlated cues.
- If we have an adversarial deployment task selector, it can always
fool the system into performing badly by choosing the most difficult
cue for the model as the task.
**Without any knowledge about the deployment task, cross-bias
generalization is not solvable with a diagonal training set.** Yet, ML
conference papers frequently claim to solve exactly this problem. They
usually rely on a hidden ingredient that is implicitly assumed. This is a
prime example of *information leakage*.
To select the right feature for the task, more information is needed.
This also holds for more general OOD settings: an example is shown in
Figure [2.25](#fig:underspec){reference-type="ref"
reference="fig:underspec"}.
![Underspecification in a more general toy OOD setting than cross-bias
generalization. We are faced with the same problem: We now know that
color is not the task, but shape and size can still be tasks. Figure
inspired
by [@https://doi.org/10.48550/arxiv.2110.03095].](gfx/02_general.pdf){#fig:underspec
width="0.8\\linewidth"}
#### The Feature Selection Problem in Fairness
The feature selection problem is also closely connected to the problem
of fairness. What is fairness? From the viewpoint of equality of
opportunity as a notion of individual fairness, people who are similar
with respect to a task should be *treated* similarly. There can be attributes for
individuals that are relevant to the task and attributes that are
supposed to be irrelevant, e.g., demographic details, such as race or
gender. We want the model to only look at relevant features (task cue),
not sensitive/prohibited attributes (bias cue). This notion of fairness
is comparative: We are determining whether there are differences in how
similar people (according to the task cues) are treated.
Decision-makers should automatically avoid differential treatment
according to people's race, gender, or other possibly discriminatory
factors if we accept in advance that none of these characteristics can
be relevant to the task at hand.
### Extra Information to Make Cross-Bias Generalization Possible
As we have seen, without extra information, cross-bias generalization is
not solvable. We suggest considering a simplified generalization setting
where such information is available in the development stage. This is
much less exciting than true generalization, but we need this
simplification to make the problem feasible. We will consider two ways
to add extra information to the setting that makes the problem
well-posed.
![New setting that makes cross-bias generalization possible, referred to
as the "First way" in the text. We have a few unbiased samples in the
development resources and bias labels are also
available.](gfx/02_new.pdf){#fig:first width="\\linewidth"}
#### First way to make cross-bias generalization feasible: adding unbiased samples
This approach is illustrated in
Figure [2.26](#fig:first){reference-type="ref" reference="fig:first"}. A
small number of non-correlated samples are added to the dev resources
(these samples are not necessarily deployment samples). We have
attribute ($Z$s -- here bias, but it could also be domain) labels for
each sample that specify which bias category a sample corresponds to.
For example, $Z$s can correspond to different jackets. It is useful to
explicitly tell the model what *not to* use as cues (see DANN in
Section [2.12.2](#sssec:dann){reference-type="ref"
reference="sssec:dann"}) in the form of bias labels.
People control the percentage of unbiased samples using
$\rho \in [0, 1]$ in papers. We have to know what $\rho$ they are using;
it is a part of the setting. The lower the percentage of unbiased
samples, the harder the task becomes. The task can be made arbitrarily
hard, up to the point that it is impossible again ($\rho = 0$). The test
set is unbiased in this example. However, in the deployment domain, we
might just as well have biased samples that are biased in a different
way than the dev samples.
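As a concrete, hypothetical illustration of how such a setting can be constructed, the following minimal NumPy sketch assigns a bias attribute to each sample so that roughly a fraction $\rho$ of the samples break the label-bias correlation. The function and variable names are ours, and this is only one of many possible construction protocols.

```python
import numpy as np

def assign_bias(labels, rho, num_values, rng=None):
    """Assign a bias attribute (e.g., a color index) to each sample.

    With probability (1 - rho) the bias value copies the task label
    (an on-diagonal, biased sample); with probability rho it is drawn
    uniformly at random, which breaks the correlation for roughly
    rho * (num_values - 1) / num_values of the samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels)
    break_corr = rng.random(labels.shape[0]) < rho
    random_bias = rng.integers(0, num_values, labels.shape[0])
    return np.where(break_corr, random_bias, labels)

# Toy usage: 10 classes, roughly 1% of the samples decorrelated.
y = np.random.randint(0, 10, size=60000)
z = assign_bias(y, rho=0.01, num_values=10)
print("fraction of off-diagonal samples:", np.mean(z != y))
```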
#### A word about domain generalization
As we discussed in [2.4.5](#ssec:domain){reference-type="ref"
reference="ssec:domain"}, *domain generalization* is supplying
additional information by providing domain labels. However, if we simply
treat the bias labels (color) as our domain labels for domain
generalization, we are sadly still not able to solve the problem: for
such a diagonal dataset, the task labels are the same as the bias
labels. Under this interpretation, domain generalization does not
directly make the problem solvable. That is why we still need access to
unbiased samples. In this case, we treat the domain labels as 'unbiased'
and 'biased', and the problem is solvable again. (This is very similar
to the first way, only the interpretation is different.)
#### Second way to make cross-bias generalization feasible: converting the problem to domain adaptation/test-time training
We can also consider *domain adaptation*. Here, the source of extra
information is access to the target distribution. This is different from
before when we only had unbiased samples that did not necessarily come
from the target domain. By performing domain adaptation, we make the
target distribution more accessible to the dev stage. Here, we assume
labeled samples from the target domain, and knowledge about the domain
of each sample.
The same logic applies if we convert the problem to *test-time
training*. The only difference is that in test-time training, the target
distribution changes continuously during deployment, therefore, we
constantly adapt our model to new situations.
### How to determine what cue our model learns to recognize?
To understand how well we solved the problem of cross-bias
generalization or to gain insights into the model's inner workings, it
is often helpful to understand which cue our model uses for predictions.
However, answering this question is not straightforward in general.
To diagnose our model, we require labels for different cues (e.g.,
labels $Y$ and $Z$ from Figure [2.26](#fig:first){reference-type="ref"
reference="fig:first"}). In that case, after training on our
close-to-diagonal dataset, we label unbiased (off-diagonal) samples from
a test set according to different cues and calculate the model's
accuracy each labeling scheme on this unbiased test set. The model
should achieve high accuracy for the cue it learned on the training
dataset and perform close to random guessing for all other cues.
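A minimal sketch of this diagnosis, assuming we already have the model's predictions on an unbiased test set together with labels for each candidate cue; all names below are hypothetical.

```python
import numpy as np

def cue_accuracies(predictions, cue_labels):
    """Accuracy of one set of predictions under several labeling schemes.

    predictions: (N,) array of predicted class indices on an unbiased
                 (off-diagonal) test set.
    cue_labels:  dict mapping a cue name to the (N,) labels that the test
                 samples receive under that cue, e.g.
                 {"shape": shape_labels, "color": color_labels}.
    """
    preds = np.asarray(predictions)
    return {cue: float(np.mean(preds == np.asarray(y)))
            for cue, y in cue_labels.items()}

# Hypothetical reading: high accuracy on "color" and near-chance accuracy
# on "shape" would indicate that the model latched onto the color cue.
```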
## Shortcut (Simplicity) Bias {#ssec:simplicity}
We have seen that due to underspecification
(Definition [\[def:underspec\]](#def:underspec){reference-type="ref"
reference="def:underspec"}), models can learn different equally
plausible cues. But do models prioritize learning one cue over others?
It turns out the answer is yes, simpler cues are learned first. This
property is usually called *shortcut/simplicity bias*, defined below.
::: definition
Shortcut Bias/Simplicity Bias The shortcut bias is the ML models' inborn
preference for "simpler" cues (features) over "complex" ones.
When there are multiple candidates of cues for the model to choose from
for achieving 100% accuracy (i.e., the setting is underspecified), the
model chooses the *easier* cue.
:::
### Examples of Shortcut Bias
Let us first define the *Kolmogorov complexity*, which is needed for the
details of the first example.
::: information
Kolmogorov Complexity The Kolmogorov complexity measures the complexity
of strings (or objects in general) based on the minimal length among
programs that generate that string.
Kolmogorov Complexity of a cue $p_{Y \mid X}$
(KCC) [@https://doi.org/10.48550/arxiv.2110.03095]:
$$K(p_{Y \mid X}) = \min_{f:\cL(f; X, Y) < \delta} K(f) \qquad \delta > 0, f: X \rightarrow Y.$$
Intuitively, $K(p_{Y \mid X})$ measures the *minimal* complexity of the
function $f$ required to memorize the labeling $p_{Y \mid X}$ on the
training set (i.e., $\cL < \delta$).
:::
As a toy example, according to the paper "[Which Shortcut Cues Will DNNs
Choose? A Study from the Parameter-Space
Perspective](https://arxiv.org/abs/2110.03095)" [@https://doi.org/10.48550/arxiv.2110.03095],
Color $>$ Scale $>$ Shape $>$ Orientation in the order of models'
preference, regardless of the network architecture and the training
algorithm. Why could this be? The reason, according to the authors, is
that color is a simpler cue than the others, as measured by the
Kolmogorov complexity of the cues. The authors approximate $K(f)$ by the
minimal number of parameters a model $f$ needs to memorize the training
set with labels given by the cue in question.
To better illustrate what simplicity bias is, we provide several
examples below. An overview is shown in
Table [2.2](#tab:overview){reference-type="ref"
reference="tab:overview"}, which is further detailed in the individual
sections.
::: {#tab:overview}
| **Problem** | **Task** | **Bias Cue** | **Task Cue** |
|---|---|---|---|
| Context bias | Classify object | Background context | Foreground object(s) |
| Texture bias | Classify object | Texture of object | Shape of object |
| Not understanding sentence structure | Natural language inference | Set of words in a sentence, lexical overlap cue, subsequence cue, constituent cue | The entire sentence |
| Biased action recognition | Recognize action that human is performing | Scene, instrument, static frames | Human movement |
| Using single modality for multi-modal tasks | Visual question answering | Question only | Question and image |
| Use of sensitive attributes | Predict possibility of future defaults | Sensitive attributes (disability, gender, ethnicity, religion, etc.) | Size of the loans, history of repayment, income level, age, etc. |
: Overview of bias types and corresponding cues.
:::
#### Context Bias
Consider the task of object classification. The task cues are the
foreground objects, but a classifier focusing on the background context
bias cues can achieve high accuracy when the background is highly
correlated with the foreground. The examples, shown in
Figure [2.27](#fig:context){reference-type="ref"
reference="fig:context"}, are
from [@https://doi.org/10.48550/arxiv.1812.06707].
**Example 1**: We have a classification problem where one of the classes
is 'keyboard'. On nearly all images, keyboards are accompanied by
monitors. The model might learn a shortcut bias for detecting monitors
(detecting these might be easier than detecting keyboards): Then, the
context (monitor pixels) will influence the keyboard score (logit) more
than the actual keyboard presence. This process will not generalize to
novel scenes where keyboards and monitors appear separately. If we
remove the monitors from the image, the score for 'keyboard' will go
down. If we remove the keyboard from the image, the score for 'keyboard'
will stay quite high because the monitors are still present. Generally,
co-occurring cues/features (diagonal samples) often lead to spurious
correlations.
**Example 2**: The task is 'frisbee', and the bias is 'person'. It is
easier to detect people because they are usually larger in images. The
same phenomenon can be observed here as in **Example 1**.
**Note**: We humans also often look at the context to predict what is
present in an image (or scene).
![Context bias can arise in various settings. Figure taken
from [@https://doi.org/10.48550/arxiv.1812.06707].](gfx/02_context.png){#fig:context
width="0.4\\linewidth"}
#### Texture Bias
Consider the task of object classification again. In this case, the task
cue is the shape of the object and the bias cue is the texture of the
object.
**Example**: Training a cat/dog classifier on a diagonal dataset, where
the texture and shape are highly correlated. At test time, we want to
predict cats when changing their texture (e.g., to greyscale,
silhouette, edges, or to a marginally different texture). The accuracy
of humans stays consistently high because we like to look at global
shapes. Popular CNN models break down in such scenarios. However, when
only the true texture of the original object (cat) is presented, models
stay perfectly accurate while humans make more mistakes (90% accuracy).
The example is inspired by [@https://doi.org/10.48550/arxiv.1811.12231].
**Note**: Networks are prone to be biased towards texture because it is
a much easier cue to learn. If the task is 'shape', such networks will
generalize poorly to objects with missing or different textures.
#### NLP Models Not Understanding the Exact Structure of the Sentence
Our task of interest is natural language inference: Given premise and
hypothesis, determine whether (1) the premise implies the hypothesis,
(2) they contradict each other, or (3) they are neutral. The task cue is
the whole sentence pair. However, the model might only use the set of
words in the sentences, the lexical overlap cue, the subsequence cue, or
the constituent cue. These are explained in the examples below, taken
from [@mccoy-etal-2019-right]. We consider three bias cues and
corresponding premise-implication pairs for each.
**Example 1**: Lexical overlap cue. Assumes that a premise entails all
hypotheses constructed from words in the premise.
::: center
The doctor was paid by the actor. $\implies$ The doctor paid the actor.
:::
**Example 2**: Subsequence cue. Assumes that a premise entails all of
its contiguous subsequences.
::: center
The doctor near the actor danced. $\implies$ The actor danced.
:::
**Example 3**: Constituent cue. Assumes that a premise entails all
complete subtrees in its parse tree.
::: center
If the artist slept, the actor ran. $\implies$ The artist slept.
:::
These can all lead to wrong implications, as seen above.
#### Biased Action Recognition
The model's task is to recognize the action that a human is performing
on a video. The task cue is the human movement, e.g., swinging, jumping,
or sliding. The bias cues might be the scene, the instrument (on/with
which the action is performed), or the static frames. The quiz below is
taken from [@https://doi.org/10.48550/arxiv.1912.05534].
**Quiz**: Can the reader guess what action the blocked person is doing
in the videos of Figure [2.28](#fig:quiz){reference-type="ref"
reference="fig:quiz"}? Even from the scene alone, we as humans can have
a good guess about what the person is likely doing. This tells us that
humans also use many cues in the context to make predictions. However,
we also know that there are many other possibilities; we are just giving
the most likely prediction. When we observe the actual task cue, we can
make predictions based on that. Machines fail miserably because they
*only rely on bias cues* from the dataset. We want ML models to be aware
that they can be tricked in such cases; a notion of uncertainty and
calibration is needed.
![Example of four frames in videos where it is remarkably easy to
predict a human's (very likely) action based on a single, static
frame.](gfx/02_quiz.png){#fig:quiz width="\\linewidth"}
#### Using a Single Modality for Multi-modal Tasks: Visual Question Answering
The task is to answer a question in natural language using both the
question and a visual aid (an image). The task cue is, therefore, both
the question and the image. The bias cue is *only* the question. When
one of the modalities is already sufficient for making good predictions
on the training set, the model can choose to only look at that cue
because of the simplicity bias. This generalizes poorly to situations
where both modalities are needed. The example below is inspired
by [@https://doi.org/10.48550/arxiv.1906.10169].
**Example**: The question is "What color are the bananas?". In the
image, we see a couple of green bananas. When the model only relies on
the question, it will probably get this question wrong. (Correct answer:
green, not yellow.)
#### ML-based Credit Evaluation System using Sensitive Attributes
The model is tasked to predict the possibility of future defaults for
each individual. (Will the person go bankrupt, or will they be able to
repay the loan?) The task cue is the size of the loans, history of
repayment, income level, age, and similar factors. The bias cues are
sensitive attributes that are not allowed to be used for the prediction,
such as disability, gender, ethnicity, or religion. When an ML system
learns to use bias cues to predict credit risks (that might not be
explicit features in a vector representation), the model is not fair.
The ML system requires further guidance to not use sensitive cues.
### Is the simplicity bias a bad thing?
Whether shortcut bias is a good or a bad thing depends on the task.
#### Simplicity Bias in ID Generalization
Simplicity bias is actually *praised* in ML in general, especially in ID
generalization; many reports in the literature make this point.
The parameter space is enormous. If there is no inductive bias (from the
training algorithm or the architecture), we can find whatever solution
in the parameter space, many of which do not generalize well. But
because of the simplicity bias, we will find some simpler rules that are
very likely to generalize well *to the same distribution*. (Here, we use
the assumption that preference for simple cues usually leads to simple
functions.)
![Example where the shortcut bias is favorable. For ID generalization
tasks, simple cues are often sufficient for
generalization.](gfx/02_favorable.pdf){#fig:favorable
width="0.8\\linewidth"}
The usefulness of the shortcut bias in ID generalization is illustrated
in Figure [2.29](#fig:favorable){reference-type="ref"
reference="fig:favorable"}. In the diagonal dataset case, any of the
perfectly correlated cues are valid for performing well in deployment,
considering ID generalization.
#### Simplicity Bias in OOD Generalization
![Example where the shortcut bias is unfavorable because of
misspecification and does not lead to robust generalization. For OOD
generalization tasks, simple cues may not work
anymore.](gfx/02_unfavorable.pdf){#fig:unfavorable
width="0.8\\linewidth"}
For OOD generalization, the picture is a bit different. Simplicity bias
is usually not welcome here because there are many OOD cases where the
simplest cue is not good for generalization, as it is not relevant to
the task. This causes problems during deployment, as the model's natural
choice does not necessarily concur with the cue that would let the model
generalize. For example, the background texture tends to be simple to
recognize because we only have to look at very local parts of the image.
The model might be able to use it to fit the training data well, but it
will not usually generalize to different domains. For fairness, simple
cues (e.g., parent's income) may also not be *ethical* to use. We wish
to prevent the model from using these cues. An example where the
shortcut bias is unfavorable is given in
Figure [2.30](#fig:unfavorable){reference-type="ref"
reference="fig:unfavorable"}.
## Identifying and Evaluating Misspecification {#sssec:identify}
As discussed in earlier sections, underspecification (as defined in
Definition [\[def:underspec\]](#def:underspec){reference-type="ref"
reference="def:underspec"}) poses significant challenges to domain and
cross-bias generalization. Therefore, it is crucial to diagnose whether
our ML system suffers from misspecification. There are two main
strategies to evaluate
misspecification [@https://doi.org/10.48550/arxiv.2011.03395] (e.g., to
determine whether the model uses too much context). Both are
*counterfactual* evaluation methods, i.e., they manipulate the input to
determine what cue the model is looking at. (Counterfactual evaluation
always seeks answers to questions of the form "What would be the
prediction if we changed ...?".) We either alter the task cue or the
bias cue to observe the behavior of the model.
**Altering the task cue.** Here, the needed ingredients are the test set
with task labels[^9] and cue disentanglement (the ability to change cues
in the input independently). The evaluation method is as follows.
- **Alter**: For every test sample, alter (or remove) the
task-relevant cue.
- **Decide**: If the model performance *does not drop* significantly,
our model is biased towards an irrelevant cue, meaning our system is
misspecified.
**Altering the bias cue.** The needed ingredients are the same as when
altering the task cue. The evaluation method is detailed below.
- **Alter**: For every test sample, alter (or remove) the bias cue.
- **Decide**: If the model performance *drops* significantly, our
model is biased towards the altered cue, which, again, means that
our system is misspecified.
**Note**: This way, we also know *what* our model is biased towards.
With the previous method, we could only determine *whether* our model is
biased.
These desiderata can be formulated in terms of *differences in
accuracy/loss*. As long as there is a straightforward method that ranks
biased and unbiased models correctly, it works well. Different papers do
it differently.
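As a rough illustration (not a method from any particular paper), the following sketch wraps both counterfactual checks into one routine; `evaluate`, `alter_task_cue`, `alter_bias_cue`, and the `threshold` are placeholders that the practitioner has to supply.

```python
def misspecification_report(evaluate, test_set, alter_task_cue, alter_bias_cue,
                            threshold=0.05):
    """Counterfactual check of which cue a trained model relies on.

    evaluate(samples):  returns accuracy on a list of (input, label) pairs.
    alter_task_cue(x):  edits x so that only the task-relevant cue changes.
    alter_bias_cue(x):  edits x so that only the (suspected) bias cue changes.
    threshold:          how large an accuracy change counts as "significant".
    """
    base = evaluate(test_set)
    task_altered = evaluate([(alter_task_cue(x), y) for x, y in test_set])
    bias_altered = evaluate([(alter_bias_cue(x), y) for x, y in test_set])

    # A misspecified model barely reacts when the task cue is altered,
    # or reacts strongly when only the bias cue is altered.
    misspecified = (base - task_altered < threshold) or \
                   (base - bias_altered > threshold)
    return {"baseline": base,
            "task cue altered": task_altered,
            "bias cue altered": bias_altered,
            "misspecified": misspecified}
```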
![Example of two ways to change the (possible) bias cue of texture while
preserving the task cue of shape.](gfx/02_twoways.pdf){#fig:twoways
width="0.5\\linewidth"}
#### Examples of changing the bias cue
**Example 1** (Figure [2.27](#fig:context){reference-type="ref"
reference="fig:context"}): The task cue is 'skateboard', and the bias
cue is 'person'. It is improbable to see a skateboard on the road
without a person on it: the task cue is highly correlated with the bias
cue. We remove the bias cue and see how the score for 'skateboard'
changes for a trained model. The needed ingredients are bounding box
annotations/segmentation masks for objects and a good inpainting model.
If the score for 'skateboard' drops a lot, the model has been relying on
the bias cue.
**Example 2** (Figure [2.31](#fig:twoways){reference-type="ref"
reference="fig:twoways"}): The task cue is 'shape', and the bias cue is
'texture'. We consider two ways to change the bias cue: (1) Obtain a
segmentation mask of the object and overlay a texture image of choice.
(2) Style-transfer [@https://doi.org/10.48550/arxiv.1508.06576] original
image with a texture image of choice. The latter causes a less abrupt
change: The image stays more reasonable. If the score of the true object
drops significantly, the model has been relying a lot on the texture
bias.
#### Example of changing the task cue
The following example is taken
from [@https://doi.org/10.48550/arxiv.1909.12434]. The task cue is the
overall positivity/negativity (sentiment) of the review. The bias cue is
"anything but the task cue," e.g., the bag of words representation of a
review. We let a human change the task cue (the sentiment analysis
labels) by introducing minimal changes (a few words) in the sentences.
If the score of the positive label does not change significantly after
the update, the model does not rely on the overall meaning of the
inputs.
To further illustrate the possible interventions the human annotator can
make, we list some examples of changes made to the reviews:
- Recasting fact as "hoped for".
- Suggesting sarcasm.
- Inserting modifiers.
- Replacing modifiers.
- Inserting negative phrases.
- Diminishing via qualifiers.
- Changing the perspective.
- Changing the rating and some words.
Some of these changes are very subtle, and a model that is biased towards
the bag of words appearing in the review cannot react to them.
## Overview of Scenarios for Selecting the Right Features {#sssec:overview}
![Overview of possible cross-bias generalization scenarios where the
problem is made feasible by using different kinds of additional
information. Scenario 2 can use prior knowledge about what the bias will
be in the dataset. Under these assumptions, it can either detect
unbiased samples and put more weight on them or make the final and
intentionally biased models different (independent) in other ways. We
will not discuss the upper version of scenario 3 (paper: "[Test-Time
Training with Self-Supervision for Generalization under Distribution
Shifts](https://arxiv.org/abs/1909.13231)" [@sun2020testtime]), as we
are not yet convinced that it is a possible case to solve in general
deployment scenarios. We have no full trust
yet.](gfx/02_scenarios.pdf){#fig:scenarios width="\\linewidth"}
So far, we have seen that predictions of models are often based on
*bias* cues, while a key to generalization lies in their reliance on the
*task* cues. How could we ensure that our model uses the task cue for
its predictions? We will see approaches to selecting the right features
for many settings. Let us quickly review some possible scenarios with
extra information in Figure [2.32](#fig:scenarios){reference-type="ref"
reference="fig:scenarios"}. The figure considers several settings that
vary in their access to unbiased training samples or test samples as
well as corresponding labels. It is important to understand that if we
have no information apart from the diagonal dataset, the problem is
conceptually unsolvable (top left cell). All the remaining cells
describe different scenarios where generalization becomes possible again
and we will discuss them in the next sections.
## Scenario 1 for Selecting the Right Features
An example of this scenario is given in
Figure [2.26](#fig:first){reference-type="ref" reference="fig:first"}.
In this case, we have a small number of unbiased training samples (1% or
even less) with bias labels. This is the easiest setting, as we know
which samples are unbiased: we simply compare $Y$ with $Z$. When they
are equal, we have an on-diagonal sample. When they are unequal, the
sample is off-diagonal (unbiased). We up-weight the off-diagonal samples
and perform regular Empirical Risk Minimization (ERM). This is the most
naive approach, but it can perform well.
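A minimal sketch of this naive up-weighting, assuming integer task labels $Y$ and bias labels $Z$ over the same set of values; the `up_weight` value is a hypothetical hyperparameter.

```python
import numpy as np

def offdiag_weights(task_labels, bias_labels, up_weight=10.0):
    """Per-sample loss weights for Scenario 1.

    Samples whose bias label Z differs from the task label Y are
    off-diagonal (unbiased) and receive a larger weight; on-diagonal
    samples keep weight 1. The resulting weights are then plugged into an
    ordinary weighted ERM objective, e.g. a weighted cross-entropy
    averaged over the training set.
    """
    y = np.asarray(task_labels)
    z = np.asarray(bias_labels)
    return np.where(y != z, up_weight, 1.0)
```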
::: information
How to find unbiased samples? What we discuss is, of course, a very
simplistic setup. It is much more challenging to tell what samples are
unbiased for the COCO dataset with 80 categories. However, if we know
all target and bias labels, we can compute a matrix of co-occurrences
between classes. We can then infer which images are more typical or
atypical (e.g. a skateboard without a person is very unlikely),
depending on the co-occurrence statistics of the labels. For very
unlikely samples, we can, e.g., give a large weight during training. We
generally weight samples more where the bias is either missing or
different. However, there is an important caveat detailed in the example
below.
**Example**: We have a dataset with many images of cats, dogs, and
humans appearing together. The task is to predict whether an image
contains a cat. If we see a sample with both a cat and a dog present,
can we call it an atypical (unbiased, off-diagonal) sample and give it a
large weight? *Only if the model is actually biased towards 'human'.* If
the model is biased towards 'dog', this only aggravates the problem.
Co-occurrence statistics are useful to give initial weights to samples
but are usually only coarse proxies. Many biases are subtle and do not
arise in an "interpretable" way. Determining weights post hoc, in
contrast, directly targets the problems our model actually has.
We can only determine weights in such cases using the following routine:
1. Train the network normally.
2. Determine to which combination of cues (such as 'dog' and 'human'
jointly, just 'dog', or just 'human') it is biased towards using the
unbiased test set.
3. Combat these biases by increasing the weights of samples that contain
   combinations of cues that are unlikely under the present biases.
The expensive part here is annotation. Generally, it is a very strong
assumption that we have labels for all possible cues! Once we have the
task and bias labels, we create a counting matrix for co-occurrences,
which is easily computable on the CPU. In the COCO object detection
dataset, there are usually many objects in a single image, so
co-occurrences are easy to calculate. (Our assumption here is that
labeling is complete.)
:::
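To make the co-occurrence idea from the box above concrete, here is a small NumPy sketch that counts label co-occurrences from (assumed complete) multi-label annotations; images with rarely co-occurring label combinations can then be up-weighted. The function name is ours.

```python
import numpy as np

def cooccurrence_matrix(image_label_sets, num_classes):
    """Count how often two classes are annotated on the same image.

    image_label_sets: one set/list of class indices per image
    (assumes the labeling is complete, as discussed above).
    """
    counts = np.zeros((num_classes, num_classes), dtype=np.int64)
    for labels in image_label_sets:
        labels = sorted(set(labels))
        for i in labels:
            for j in labels:
                counts[i, j] += 1
    return counts

# Images whose label combinations have low co-occurrence counts (e.g., a
# skateboard without a person) are atypical candidates for up-weighting.
```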
::: information
Model becoming biased again What happens if we have biased fish images
(i.e., fish are always in the hands of fishermen on the images) and we
get unbiased images (e.g., fish in water), but the model learns
shortcuts again (water background $\implies$ fish)? There are two
solutions in general.
**Bottom-up, incremental approach.** We continuously search for the
current model's biases by testing it for different sets of correlations
(like testing our model's performance on fish images for a set of
potential biases using unit tests). Such sets can be constructed by
removing/replacing possible shortcuts (e.g., water background) in the
original images. If we find that our model now uses some shortcuts, we
incorporate new samples without the corresponding biases. We continue
doing this until the possible ways to learn shortcuts are saturated
(i.e., it becomes more complicated than the task itself), and we are
happy with the model. This approach does not guarantee that the ultimate
model is unbiased, and usually, it is quite complicated to extensively
test our model for potential biases.
**Top-down approach.** Let us assume that some explainability method
provided us with a comprehensive and complete list of cues the model is
actually looking at. In such a case, we first determine what cues are
task cues and which are bias cues by human inspection. Then, we remove
the bias cues from our model and strengthen the cues it should be looking
at more. **Disclaimer**: There is no technique for this in general, but
it would be very nice to have one. This is very much the frontier of
research in explainability.
:::
### Group DRO
Let us see how the availability of a small set of unbiased samples can
be exploited in practice. In this section, we will discuss a method
introduced in the paper "[Distributionally Robust Neural Networks for
Group Shifts: On the Importance of regularization for Worst-Case
Generalization](https://arxiv.org/abs/1911.08731)" [@https://doi.org/10.48550/arxiv.1911.08731],
called Group Distributionally Robust Optimization (Group DRO). The goal
of this method is to have the same accuracy for different bias groups
(elements of the bias-task matrix depicted in
Figure [2.23](#fig:diagdomain){reference-type="ref"
reference="fig:diagdomain"}). This goal is achieved by minimizing the
maximum loss across the groups. In the following paragraphs, we will
discuss how this minimization is performed.
#### Optimization problem in Group DRO
In vanilla Empirical Risk Minimization (ERM), we have the following
optimization problem:
$$\argmin_{\theta \in \Theta} \nE_{(x, y) \sim \hat{P}}\left[\ell(\theta; (x, y))\right].$$
To achieve the goal of minimizing the maximum loss across the groups in
DRO, the optimization problem is modified to the following one:
$$\argmin_{\theta \in \Theta} \left\{\mathcal{R}(\theta) := \sup_{Q \in \mathcal{Q}} \nE_{(x, y) \sim Q} \left[\ell(\theta; (x, y))\right]\right\}.$$
Here, $\mathcal{Q}$ encodes the possible test distributions we want to
do well on. It should be chosen such that we are robust to distribution
shifts, but we also do not get overly pessimistic models that optimize
for implausible worst-case distributions $Q$.
Let us now choose
$\mathcal{Q} := \left\{\sum_{g = 1}^m q_gP_g : q \in \Delta_m\right\}$
where $\Delta_m$ is the $(m - 1)$-dimensional probability simplex and
$P_g$ are group distributions. These can correspond to arbitrary groups,
but for our use case, the groups are based on spurious correlations. If
we go back to our toy example of a (color, shape) dataset, then the
individual groups can correspond to all possible (color, shape)
combinations. Then
$$\mathcal{R}(\theta) = \max_{g \in \{1, \dotsc, m\}} \nE_{(x, y) \sim P_g}\left[\ell(\theta; (x, y))\right],$$
as the optimum of a linear program (the way we defined $\mathcal{Q}$) is
always attained at a vertex (a particular $P_g$). Now, if we consider
the empirical distributions $\hat{P}_g$, we get **Group DRO**:
$$\argmin_{\theta \in \Theta}\left\{\hat{\mathcal{R}}(\theta) := \max_{g \in \{1, \dotsc, m\}} \nE_{(x, y) \sim \hat{P}_g}\left[\ell(\theta; (x, y))\right]\right\}.$$
The learner aims to make predictions for *the worst-case group* better.
Ideally, at the end of training, we have the same loss for each group
(considering equal label noise across groups -- if one group has huge
corresponding label noise, the learner either overfits to the noise
severely, which is suboptimal, or we do not have the same loss for each
group at the end of training).
#### Examples for the groups in Group DRO
**Toy example.** In our previous example
(Figure [2.26](#fig:first){reference-type="ref" reference="fig:first"}),
all possible combinations of shape and color can be treated as a group.
This way, we take into account the underrepresented combinations
appropriately. We can also treat the on-diag and off-diag samples as the
two groups, which might be a more stable choice if there are very few
off-diag samples.
**Faces.** Assume a dataset of celebrities where the task is to predict
gender from the image. The hair color annotation is also available.
Further, assume that we have access to many diagonal samples and a small
amount of off-diagonal samples where $$\begin{aligned}
&P_1\colon \text{ blonde female} &\text{50\%}\\
&P_2\colon \text{ dark-haired male} &\text{40\%}\\
&P_3\colon \text{ blonde male} &\text{3\%}\\
&P_4\colon \text{ dark-haired female} &\text{7\%}.\hspace{-0.35em}
\end{aligned}$$ If we just performed ERM/Regularized Risk Minimization
(RRM), the model would usually predict based on a mixture of cues that
favors the larger groups, and it could still achieve high average
accuracy because we explicitly optimize the average loss. For
example, it could predict based on hair color: for dark-haired people,
we could predict 'male', and for blonde individuals, we could predict
'female'. However, Group DRO helps us optimize on the worst-case
combination, which can help prevent shortcuts.
**Humans and skateboards.** We consider one group comprising samples
that contain a skateboard but not a human and another group comprising
samples of skateboards with a human.
#### The Group DRO algorithm
Roughly speaking, Group DRO minimizes its optimization objective by
performing the following steps:
1. Calculate losses for all groups.
2. Select the group with the maximal loss.
3. Set the model's gradient active only on the training samples from
the worst-performing group.
4. Repeat.
The actual algorithm
(Algorithm [\[alg:group_dro\]](#alg:group_dro){reference-type="ref"
reference="alg:group_dro"}) is a bit more complicated. It considers an
exponential moving average for the weights of different groups and
performs gradient steps using these weights. This modification allows the
method to be trained with SGD. It also has nice convergence
guarantees [@https://doi.org/10.48550/arxiv.1911.08731].
::: algorithm
Initialize $\theta^{(0)}$ and $q^{(0)}$
:::
::: information
Comments for the Group DRO algorithm
**Smoothed group-wise updates.** In
Algorithm [\[alg:group_dro\]](#alg:group_dro){reference-type="ref"
reference="alg:group_dro"}, $q^{(t)}_g$ influences the step size for the
sample (and the corresponding group in general). This formulation can be
considered a smoothed version of the original one, as we do not select
the worst-performing group but still base the update on the group-wise
performances.
**Looking at the worst-group metric.** In general, the method performs
worse than ERM on the average accuracy metric, as ERM directly optimizes
on that. However, Group DRO shines on the worst-group accuracy metric,
which is directly optimized by the method. ERM usually breaks down
completely on the worst-group accuracy metric when there are notable
group imbalances in the dataset.
:::
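The following PyTorch-style sketch shows one simplified online Group DRO update in the spirit of the smoothed algorithm described above: exponentiated-gradient ascent on the group weights, then a descent step on the weighted loss. It is our own condensed version, not the authors' reference implementation; `loss_fn` is assumed to return per-sample (unreduced) losses, and all names are ours.

```python
import torch

def group_dro_step(model, loss_fn, optimizer, x, y, g, q, eta_q=0.01):
    """One simplified online Group DRO update on a mini-batch.

    q: tensor of per-group weights on the probability simplex.
    g: tensor holding the group index of every sample in the batch.
    """
    per_sample = loss_fn(model(x), y)            # shape (batch_size,)
    num_groups = q.shape[0]
    group_losses = []
    for group in range(num_groups):
        mask = (g == group)
        group_losses.append(per_sample[mask].mean() if mask.any()
                            else per_sample.new_zeros(()))
    group_losses = torch.stack(group_losses)

    # Exponentiated-gradient ascent on the group weights: groups with a high
    # current loss get more weight (a smoothed "pick the worst group").
    with torch.no_grad():
        q *= torch.exp(eta_q * group_losses)
        q /= q.sum()

    # Descent step on the q-weighted group losses.
    optimizer.zero_grad()
    torch.dot(q, group_losses).backward()
    optimizer.step()
    return q
```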
#### Ingredients for Group DRO
In Group DRO, we have
$$\text{samples for } (X, Y, \red{G}) = (\text{input}, \text{output}, \red{\text{group}})$$
where the groups come from, e.g., spurious correlations or demographic
groups. As we have group labels in addition to the usual setup (the
difference is highlighted in red), we expect better worst-case accuracy.
By explicitly optimizing on the worst-case spurious correlation/group,
our model might generalize better in deployment.
::: definition
Attribute Label Attribute labels are indicators of all possible factors
of variation in our data. Domain labels are a particular case of these.
:::
The group label can not only be a bias or domain label, but even a
general attribute label
(Definition [\[def:attrlab\]](#def:attrlab){reference-type="ref"
reference="def:attrlab"}). This additional cue makes cross-domain
generalization less ill-posed.[^10]
### Domain-Adversarial Training of Neural Networks (DANN) {#sssec:dann}
![Overview of the DANN method. The feature extractor is encouraged to
provide strong representations for predicting the class label and not
contain any information about the domain label. Figure taken from the
paper [@https://doi.org/10.48550/arxiv.1505.07818].](gfx/02_dann.pdf){#fig:dann
width="\\linewidth"}
Apart from Group DRO, we have one more algorithm for Scenario 1 to
discuss, called "Domain-Adversarial Training of Neural Networks" (DANN).
The DANN method was introduced in the paper "[Domain-Adversarial
Training of Neural
Networks](https://arxiv.org/abs/1505.07818)" [@https://doi.org/10.48550/arxiv.1505.07818]
and is another method to select good cues given bias labels by removing
domain information from the intermediate features. An overview of the
method taken from the original paper is shown in
Figure [2.33](#fig:dann){reference-type="ref" reference="fig:dann"}.
The idea of the method is to add an additional head to the model (magenta
in the figure) that predicts bias labels (called domain labels in the
paper) and to adversarially train the feature extractor such that the
extracted features are maximally non-informative for this additional head
while remaining informative for the original head that solves the main
task.
This is achieved by splitting the training process into two parts. In
the first part, the original head and feature extractor are jointly
trained with gradient descent for the main task. In the second part, the
bias-predicting head is trained with gradient descent for domain label
prediction. The feature extractor parameters are adversarially trained
with gradient *ascent* to maximize the loss of the bias-predicting head.
Intuitively, the bias-predicting head keeps detecting whatever domain
information is left in the extracted features, and the adversarial update
on the feature extractor "squeezes out" exactly that information.
DANN can be assigned to the group of methods that select task cues given
bias labels by removing information about the bias from the intermediate
features.
#### DANN Optimization Objective
We denote the prediction loss by
$$\cL^i_y(\theta_f, \theta_y) = \cL_y(G_y(G_f(x_i; \theta_f); \theta_y), y_i)$$
and the domain loss by
$$\cL^i_d(\theta_f, \theta_d) = \cL_d(G_d(G_f(x_i; \theta_f); \theta_d), d_i).$$
The training objective of DANN is
$$E(\theta_f, \theta_y, \theta_d) = \frac{1}{n}\sum_{i = 1}^n \cL^i_y(\theta_f, \theta_y) - \lambda \left(\frac{1}{n}\sum_{i = 1}^n \cL^i_d(\theta_f, \theta_d) + \frac{1}{n'}\sum_{i = n + 1}^N \cL^i_d(\theta_f, \theta_d)\right),$$
and the optimization problem is finding the saddle point
$\hat{\theta}_f, \hat{\theta}_y, \hat{\theta}_d$ such that
$$\begin{aligned}
\left(\hat{\theta}_f, \hat{\theta}_y\right) &= \argmin_{\theta_f, \theta_y} E\left(\theta_f, \theta_y, \hat{\theta}_d\right),\\
\hat{\theta}_d &= \argmax_{\theta_d} E\left(\hat{\theta}_f, \hat{\theta}_y, \theta_d\right).
\end{aligned}$$
#### Breaking DANN apart
First, we discuss the above formulation, which is for *domain
adaptation*. The DANN method was originally proposed for this task. The
first term of the training objective is the loss term for correct task
label prediction on domain 1. The second term is the loss term for
correct domain label prediction. We have two sums in the second term for
domain 1 and domain 2 samples, respectively. For domain 2, we only have
*unlabeled samples*, but we *do* have domain labels. The set of domain
labels we have is simply {domain 1, domain 2}. Obtaining
$\left(\hat{\theta}_f, \hat{\theta}_y\right)$ means minimizing the first
term in $\theta_f, \theta_y$ and maximizing the second term in
$\theta_f$. Similarly, we obtain $\hat{\theta}_d$ by minimizing the
second term in $\theta_d$.
#### Using DANN for cross-bias generalization
We can easily adapt the DANN formulation to cross-bias generalization.
In particular, we treat $y$ as the task label (e.g., shape: {circle,
triangle, square}) and $d$ as the bias label (e.g., color: {red, green,
blue}). Here, the first term enforces correct predictions on both the
biased and unbiased samples and the second term is used to kill out
information about the bias from the representation. On off-diagonal
samples, the bias label is not the task label, thus, $f$ will be
optimized to "forget" the bias labels while predicting the task labels
correctly. We do not need unbiased samples as long as we have access to
labeled samples from the target domain. It could happen that, e.g., the
target domain is also biased, just in a different way than the training
set.
We could also treat the set of biased samples as domain 1, the set of
unbiased samples as domain 2, and use the original formulation of DANN
for cross-bias generalization. This approach also works with target
domain samples instead of unbiased ones.
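To make the adversarial objective concrete, here is a minimal PyTorch-style sketch built around the gradient reversal trick used in the DANN paper. It assumes task labels for every sample (the cross-bias variant discussed above, where $d$ is the bias label), and the helper names are ours.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def dann_loss(features, task_head, domain_head, y, d, lambd=1.0):
    """Task loss plus adversarial domain (bias) loss on one batch.

    A single backward pass through this loss trains the domain head to
    predict the domain/bias label d, while the reversed gradient pushes the
    feature extractor (which produced `features`) to make them
    uninformative about d.
    """
    task_loss = nn.functional.cross_entropy(task_head(features), y)
    reversed_features = GradReverse.apply(features, lambd)
    domain_loss = nn.functional.cross_entropy(domain_head(reversed_features), d)
    return task_loss + domain_loss
```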
#### Results of DANN for domain adaptation
To obtain good results with DANN, it is crucial to choose the
hyperparameters well. All hyperparameters are chosen fairly in the
paper, and there is no information leakage (e.g., by using the test set
for choosing hyperparameters). Most hyperparameters are chosen using
cross-validation and grid search on a log scale. Some are kept fixed or
chosen among a set of sensible values. This is a usual practice in
machine learning research. For large-scale experiments, the authors give
fixed formulas for the LR decay and the scheduler for the domain
adaptation parameter $\lambda$ for the feature extractor (from 0 to 1),
and fixed values for the momentum and the domain adaptation parameter
for the domain classifier ($\lambda = 1$ to ensure that the domain
classifier trains as fast as the label predictor).
The model is evaluated on generalizability between different *Amazon
review topics* on the sentiment analysis task. The results are shown in
the top table of
Table [\[tab:dannres\]](#tab:dannres){reference-type="ref"
reference="tab:dannres"}. There is no significant difference between how
NNs, SVMs, and DANN generalize. DANN is very slightly better on most
review topic combinations. DANN is also evaluated on generalizability
between MNIST and MNIST-M, SVHN and MNIST, and other datasets for the
same task. The results of these experiments are shown in
Table [\[tab:dannres2\]](#tab:dannres2){reference-type="ref"
reference="tab:dannres2"}. On these benchmarks, DANN performed a lot
better than NNs and SVMs.
::: table*
| Source | Target | DANN (original data) | NN (original data) | SVM (original data) | DANN (mSDA repr.) | NN (mSDA repr.) | SVM (mSDA repr.) |
|---|---|---|---|---|---|---|---|
| books | dvd | .784 | .790 | **.799** | .829 | .824 | **.830** |
| books | electronics | .733 | .747 | **.748** | **.804** | .770 | .766 |
| books | kitchen | **.779** | .778 | .769 | **.843** | .842 | .821 |
| dvd | books | .723 | .720 | **.743** | .825 | .823 | **.826** |
| dvd | electronics | **.754** | .732 | .748 | **.809** | .768 | .739 |
| dvd | kitchen | **.783** | .778 | .746 | .849 | **.853** | .842 |
| electronics | books | **.713** | .709 | .705 | **.774** | .770 | .762 |
| electronics | dvd | **.738** | .733 | .726 | **.781** | .759 | .770 |
| electronics | kitchen | **.854** | **.854** | .847 | .881 | **.863** | .847 |
| kitchen | books | **.709** | .708 | .707 | .718 | .721 | **.769** |
| kitchen | dvd | **.740** | .739 | .736 | **.789** | **.789** | .788 |
| kitchen | electronics | **.843** | .841 | .842 | .856 | .850 | **.861** |
:::
::: table*
| Method | MNIST → MNIST-M | Syn Numbers → SVHN | SVHN → MNIST | Syn Signs → GTSRB |
|---|---|---|---|---|
| | .5225 | .8674 | .5490 | .7900 |
| | .5690 (4.1%) | .8644 (-5.5%) | .5932 (9.9%) | .8165 (12.7%) |
| DANN | **.7666** (52.9%) | **.9109** (79.7%) | **.7385** (42.6%) | **.8865** (46.4%) |
| | .9596 | .9220 | .9942 | .9980 |
:::
We should always take a look at how papers choose hyperparameters. For a
more complicated model, like DANN, there are many hyperparameters to
choose from. Depending on how smartly we choose them, we get
dramatically different results. When comparing methods, we also need to
make sure that we spend the same resources for tuning the
hyperparameters of all methods. The DANN paper provides fair
comparisons.
#### Ingredients for DANN
In DANN, like in Group DRO, we also have access to
$$\text{samples for } (X, Y, G) = (\text{input}, \text{output}, \text{group}).$$
The group label can again be a bias or domain label, but even a general
attribute label. By using group supervision, we make cross-domain
generalization less ill-posed.
## Scenario 2 for Selecting the Right Features
Let us consider another cross-bias generalization setting from
Figure [2.32](#fig:scenarios){reference-type="ref"
reference="fig:scenarios"}: Scenario 2. Here, we consider an abundance
of biased samples, a few available unbiased training samples ($<$ 1%),
and no bias labels. As we do not know which samples are biased (we only
have task labels), we need additional assumptions/information on the
bias to solve the problem.[^11] The question becomes how to identify
unbiased samples and how to amplify them.
Before answering this question, let us first think about what
assumptions we can make about the bias. The usual assumption is that the
bias cue is simple and the task cue (what we want to learn) is more
complex. For example, when the task is 'shape', and bias is 'color',
this assumption holds. When reversing the roles, the assumption is
violated.
This assumption on simplicity leads us to the following possible
additional assumptions:
1. Bias is the first cue that a generic model learns.
2. Bias is the cue that is learned by a model of a certain limited
capacity (i.e., by a short-sighted, myopic model).
**Note**: Sometimes, the assumption of the bias cue being a simpler cue
than the task cue is violated. Practitioners have to understand the
complexity of task cues and possible bias cues to successfully leverage
methods with the above assumptions.
In the next sections, we will describe a set of methods that identify
unbiased samples based on these assumptions. The framework depicted in
Figure [2.34](#fig:scenario2){reference-type="ref"
reference="fig:scenario2"} is a clear basis for our discussion. Before
diving into it, we would like to explain two important modules from this
framework: "Intentionally biased model" and "Be different" supervision.
![A general framework for selecting the right features, referred to as
"Scenario 2" in the text. The *intentionally based model* is trained on
the entire training set using task
supervision.](gfx/02_scenario2.pdf){#fig:scenario2 width="\\linewidth"}
::: definition
Intentionally Biased Model An intentionally biased model is designed to
learn bias cues quickly, based on the assumptions we made before.
We consider several examples of an intentionally biased model:
- The model is trained for a small number of epochs. Whatever pattern
  can already be learned in the first few epochs is considered bias.
- The model is not trained for a few epochs, but its initial correct
predictions are amplified during training. This is conceptually very
similar to the previous example but is perhaps more performant.
- The model has an architectural constraint: (1) CNN with a smaller
receptive field. It can only extract very local information (e.g.,
texture patterns), not global shape. When the bias is 'texture',
this is the way to go. (2) Transformer with shallow depth. It can
only learn very simplistic relationships. When our bias is simple,
this can work. (3) Single-modality model. This is one way to go when
the actual task requires looking at multiple modalities to solve the
problem.
:::
::: definition
"Be different" Supervision "Be different" supervision is a type of
regularization that forces the final model to be different from the
intentionally biased model. The final model is trained on the original
task loss with regularization based on the biased model. The biased
model might be trained *before* the final model or *in tandem* (Learning
from Failure: Section [2.13.1](#ssec:lff){reference-type="ref"
reference="ssec:lff"}, ReBias:
Section [2.13.2](#ssec:rebias){reference-type="ref"
reference="ssec:rebias"}).
Examples of the "be different" supervision:
- Sample weighting based on biased model.
- Achieving representational independence.
:::
### Learning from Failure {#ssec:lff}
![Overview of the Learning from Failure method. The intentionally biased
model is used for determining the sample weights in the loss of the
debiased model based on relative difficulty. Figure taken from the
paper [@DBLP:journals/corr/abs-2007-02561].](gfx/02_lff.png){#fig:lff
width="0.8\\linewidth"}
The method we consider now was introduced in the paper "[Learning from
Failure: Training Debiased Classifier from Biased
Classifier](https://arxiv.org/abs/2007.02561)" [@DBLP:journals/corr/abs-2007-02561].
An overview, taken from the paper, is shown in
Figure [2.35](#fig:lff){reference-type="ref" reference="fig:lff"}.
Here, an *intentionally biased* model is obtained by training with the
following special loss that amplifies biases:
$$\cL_\mathrm{GCE}(p(x; \theta), y) = \frac{1 - p_y(x; \theta)^q}{q}$$
where $y$ is the GT class and $q > 0$.
This loss forces the intentionally biased model to focus on samples for
which the predicted ground truth probability is already high. To
understand why it happens, it can be shown that
$$\frac{\partial \cL_\mathrm{GCE}(p(x; \theta), y)}{\partial \theta} = p_y(x; \theta)^q \frac{\partial \cL_\mathrm{CE}(p(x; \theta), y)}{\partial \theta}$$
and as $q \downarrow 0, \cL_\mathrm{GCE} \rightarrow \cL_\mathrm{CE}$.
The final model $f_D$ is trained to be *different* from the
intentionally biased model by assigning the following sample weights:
$$\cW(x) = \frac{\cL_\mathrm{CE}(f_B(x), y)}{\cL_\mathrm{CE}(f_B(x), y) + \cL_\mathrm{CE}(f_D(x), y)}$$
where $$\cL_\mathrm{CE}(p(x; \theta), y) = -\log p_y(x; \theta).$$ Such
weights force the final model to focus on the samples on which an
intentionally biased model makes more mistakes. The final training
algorithm, as presented in the paper, is shown in
Algorithm [\[alg:lff\]](#alg:lff){reference-type="ref"
reference="alg:lff"}.
::: algorithm
Initialize two networks $f_B(x; \theta_B)$ and $f_D(x; \theta_D)$
:::
#### Breaking LfF Apart
The intentionally biased model is trained with $\cL_\mathrm{GCE}$. It
amplifies whatever is predicted at the first iterations through the rest
of the training. For example, if the model first learns 'color', then
the loss amplifies color-based predictions and enforces the same
predictions throughout training.
The final model is then forced to think of different hypotheses than the
first model. If the biased model correctly predicts a sample, it gets
less weight in the loss for the final model.
With $q > 0$, $\cL_\mathrm{GCE}$ assigns more weight to confidently
predicted samples, which results in larger gradient updates for these.
The larger $q$ is, the more strongly perfect predictions are weighted
compared to imperfect ones. Incorrectly predicted samples are trained on
very slowly, and the model's initial predictions are strengthened over
time.
**Assumption on bias**: Biases are the cues that are learned first. The
method rewards easy samples to be learned quickly, and harder samples
that were not predicted correctly to be given up by the intentionally
biased model. Thus, this model is indeed biased towards easy cues.
For hard samples, $\cL_\mathrm{CE}(f_B(x), y)$ is large throughout the
training procedure. Both $\cL_\mathrm{CE}(f_B(x), y)$ and
$\cL_\mathrm{CE}(f_D(x), y)$ are high for all $x \in \cD$ in the
beginning. The better $f_D$ becomes on a sample, the more it is weighted
(as $\cL_\mathrm{CE}(f_D(x), y)$ decreases). However, the weight is
multiplied by $\cL_\mathrm{CE}(f_D(x), y)$, which balances this trend
out. An illustration of $\cW(x) \cdot \cL_\mathrm{CE}(f_D(x), y)$ is
given in Figure [2.36](#fig:gce){reference-type="ref"
reference="fig:gce"}. Samples with high $\cW(x)$ are ones that the
biased model cannot handle well. Under our assumptions on the bias,
samples with high $\cW(x)$ are the unbiased ones. Thus, $\cW(x)$
replaces the missing bias labels. Sample weights have a similar effect
as the "upweighting" of the underrepresented group in Group DRO.
**Note**: In LfF, depending on the predictions of the first iteration,
we choose the samples on which we wish and do not wish to train further.
As a simpler baseline, we could also just train the intentionally biased
model for 1-2 epochs but with the original cross-entropy loss. However,
researchers usually prefer more 'continuous' solutions rather than such
thresholds and rules of thumb.
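A minimal PyTorch-style sketch of the two LfF ingredients discussed above: the GCE loss for the intentionally biased model $f_B$ and the relative-difficulty weights $\cW(x)$ for the debiased model $f_D$. The small constant in the denominator is ours, added only for numerical stability, and a typical choice for $q$ is around 0.7.

```python
import torch
from torch import nn

def gce_loss(logits, y, q=0.7):
    """Generalized cross-entropy for the intentionally biased model f_B.

    L_GCE = (1 - p_y^q) / q; its gradient is p_y^q times the CE gradient,
    so confidently (easily) predicted samples dominate the updates.
    """
    p_y = torch.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()

def lff_weights(biased_logits, debiased_logits, y):
    """Relative-difficulty weights W(x) for the debiased model's CE loss."""
    ce = nn.functional.cross_entropy
    loss_b = ce(biased_logits, y, reduction="none")
    loss_d = ce(debiased_logits, y, reduction="none")
    return loss_b / (loss_b + loss_d + 1e-8)  # small constant for stability (ours)
```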
![Illustration of $\cW(x) \cdot \cL_\mathrm{CE}(f_D(x), y)$ as a
function of $\cL_\mathrm{CE}(f_D(x), y)$ for
$\cL_\mathrm{CE}(f_B(x), y) \in \{0.5, 5\}$. Samples with a higher loss
for the biased model are more important for the unbiased
model.](gfx/02_gce.pdf){#fig:gce}
#### Results of LfF
The paper showcases results on the Colored
MNIST [@https://doi.org/10.48550/arxiv.1907.02893] dataset where the
task is the shape of the digit and the bias is the color of the digit. A
sample from this dataset can be seen in
Figure [2.37](#fig:colormnist){reference-type="ref"
reference="fig:colormnist"}, and the results are shown in
Figure [2.38](#fig:lffcmnist){reference-type="ref"
reference="fig:lffcmnist"}. The results show that if we train a model
for digit classification, it tends to pick up color much more quickly
than the actual digit shape. The fact that we are improving performance
by using LfF shows that
1. color is indeed learned first; and
2. color was indeed a bias that should be removed from consideration
for digit recognition.
The lower the percentage of unbiased samples we include, the larger the
relative effect LfF has over vanilla ERM. As expected, if we change the
bias cue to digit and the task cue to color, LfF fails.
::: information
Changing the task on Colored MNIST If color were the task and we
evaluated LfF on Colored MNIST, we would see a drop in accuracy, as
color is learned first, not digit. Thus, comparing the final model's
generalization performance to the vanilla baseline can verify whether
the biased model learned the bias cue and whether what it learned was
indeed a bias cue.
:::
![A representative sample from the Colored MNIST
dataset [@DBLP:journals/corr/abs-2007-02561].](gfx/02_colormnist.png){#fig:colormnist
width="0.7\\linewidth"}
![Results of LfF on Colored
MNIST [@https://doi.org/10.48550/arxiv.1907.02893]. LfF is significantly
better than vanilla training but also shows improvements compared to
other debiasing methods. There are \[Ratio\]% biased samples and \[1 -
Ratio\]% unbiased samples. Table taken from the
paper [@DBLP:journals/corr/abs-2007-02561].](gfx/02_lffres.png){#fig:lffcmnist
width="0.8\\linewidth"}
#### Ingredients for LfF
In LfF, we use the usual ingredients for supervised learning
($\text{samples for } (X, Y) = (\text{input}, \text{output})$) plus an
additional assumption:
$$\text{Biased samples are the ones that the intentionally biased model learns first.}$$
Simply put: the bias is the simplest cue out of the ones with high
predictive performance on this biased dataset. This is sometimes true,
sometimes not. However, whenever it *is* true, we have a great solution
for it. It can still happen, however, that the bias is not the easiest
cue to learn. Then, the procedure misses the point.
::: information
When is something a "bias"? What counts as a bias is defined by humans. It is not
an algorithmic concept. Only when humans declare something to be a bias
does it become a bias. It depends on the task (i.e., the setting we wish
to generalize to) that humans specify. Whatever is not the task is a
potential bias. Once we have a fixed task, we identify biases by, e.g.,
performing counterfactual evaluation.
:::
::: information
Possible Extension of LfF In the first few epochs, we could already
condition the intentionally biased model to look for parameter regions
where there are a lot more correct solutions with a bit more complex
cues. This is already achieved in a way for regular LfF: when a very
simple cue results in very poor training performance, it will not be
chosen, no matter how simple it is.
:::
### ReBias: Representational regularization {#ssec:rebias}
![High-level and informal overview of the ReBias method. The
intentionally biased model has a small receptive field to amplify
texture bias. The debiased model is encouraged to be different from the
intentionally biased one.](gfx/02_rebias.pdf){#fig:rebias
width="0.8\\linewidth"}
Another method that introduces a similar concept to LfF is "[Learning
De-biased Representations with Biased
Representations](https://arxiv.org/abs/1910.02806)" [@https://doi.org/10.48550/arxiv.1910.02806].
An intuitive overview is given in
Figure [2.39](#fig:rebias){reference-type="ref" reference="fig:rebias"}.
The paper considers texture bias as the key problem to solve. We build
CNNs that are *intentionally biased* towards texture by reducing their
receptive fields. By constraining the intentionally biased model to this
architecture, it is forced to capture local cues like texture.
The *final model* has a large receptive field. It might be, e.g., a
ResNet-50. The *intentionally biased model* has a small receptive field,
like the BagNet [@https://doi.org/10.48550/arxiv.1904.00760] model. A
large receptive field can capture both local and global cues. However,
the model might not look at global cues if the dataset is structured so
that the net can simply learn very local cues to perform well.
::: information
Receptive Fields Beyond the Input Image We usually use padding to have
the kernel centered at every pixel and influence the output
dimensionality. If we use padding and regular (e.g., $3 \times 3$)
convolutions, the receptive field of a deeper layer can be even beyond
the image (but there, neurons only output zeros, constants, mirrors, or
other redundant values). The field of view is huge in this case.
:::
How can we perform "be different" supervision in this setup? The ReBias
method leverages *statistical independence* instead of giving specific
weights to samples. We train a debiased representation by encouraging
the final model's outputs to be statistically independent from the
intentionally biased model's outputs. We measure this independence with
the Hilbert-Schmidt Independence Criterion (HSIC) between two random
variables $U, V$:
$$\operatorname{HSIC}^{k, l}(U, V) = \Vert C_{UV}^{k, l} \Vert_{\mathrm{HS}}^2$$
where $C$ is the cross-covariance operator in the Reproducing Kernel
Hilbert Space (RKHS) corresponding to kernels $k$ and $l$, and
$\Vert \cdot \Vert_{\mathrm{HS}}$ is the Hilbert-Schmidt norm which is,
intuitively, a "non-linear version of the Frobenius norm of an
infinite-dimensional covariance matrix." Kernels $k$ and $l$ correspond
to random variables $U$ and $V$, respectively. Essentially, we embed $U$
and $V$ in the infinite-dimensional RKHS corresponding to the kernels
$k$ and $l$, and compute their covariance there.
We use this criterion to make the invariances learned by these two
models different. Our "be different" supervision is to minimize the HSIC
between the two models.
**Important property**: It is well
known [@https://doi.org/10.48550/arxiv.1910.02806] that for two random
variables $U, V$ and RBF kernels $k, l$,
$$\operatorname{HSIC}^{k, l}(U, V) = 0 \iff U \indep V.$$
::: information
Why is HSIC needed? Why is making $U$ and $V$ uncorrelated not enough?
If we have a covariance matrix and try to make it the identity matrix,
we can enforce the correlation between the variables to be 0, but they
will not necessarily be independent. There can be higher-order,
non-linear dependencies: for instance, if $U \sim \cN(0, 1)$ and
$V = U^2$, then $U$ and $V$ are uncorrelated but clearly dependent. The
HSIC, however, lifts our random variables to
an infinite-dimensional Hilbert space, and we consider the covariance
"matrix" there. By doing so, we remove higher-order dependencies too at
the same time, making the two variables truly independent.
:::
If we just train a model $f$ on some image classification dataset, it is
very likely that the model finds a solution that is also representable
by the small receptive field network $g$, as the model can usually
perform well by looking at very small patches for predictions and we
have previously discussed the simplicity bias of DNNs. Therefore, for
our final model $f$ and the intentionally biased model $g$, we want to
enforce statistical independence $f(X) \indep g(X)$ (that are random
variables in $\nR^C$) to ensure that the model $f$ we find is not
equivalent to some other network $g$ with a small receptive field. The
paper uses a finite-sample unbiased estimator
$\operatorname{HSIC}^{k}_1(f(X), g(X))$ and the authors choose $k$ and
$l$ to be both RBF kernels. Therefore, we consider the shorthand
$\operatorname{HSIC}_1(f, g)$.
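For intuition, below is a minimal NumPy sketch of the (simpler) *biased* empirical HSIC estimate $\frac{1}{(n-1)^2}\operatorname{tr}(KHLH)$ with RBF kernels; the paper uses the unbiased finite-sample estimator $\operatorname{HSIC}_1$, and the bandwidths and names here are our own illustration.

```python
import numpy as np

def rbf_kernel(Z, sigma):
    # Pairwise squared distances, then the RBF kernel matrix.
    sq = np.sum(Z ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_biased(U, V, sigma_u=1.0, sigma_v=1.0):
    """Biased empirical HSIC estimate between samples U, V of shape (n, d)."""
    n = U.shape[0]
    K = rbf_kernel(U, sigma_u)
    L = rbf_kernel(V, sigma_v)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Independent variables give a value close to 0; dependent ones do not.
rng = np.random.default_rng(0)
u = rng.normal(size=(500, 1))
print(hsic_biased(u, rng.normal(size=(500, 1))))  # close to 0
print(hsic_biased(u, u ** 2))                     # clearly larger than 0
```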
We know that $$\begin{aligned}
\operatorname{HSIC}(f(X), g(X)) = 0 &\iff \text{\(f(X)\) and \(g(X)\) are independent}\\
&\iff \text{The models \(f, g\) have ``orthogonal invariances''.}
\end{aligned}$$ Let us detail the last equivalence further. If $g$
discriminates color (i.e., its decision boundary separates objects of
different colors), then $f$ should learn invariance to color (i.e.,
changing the object color does not influence the decision of $f$), and
vice versa: if $g$ treats two samples similarly, then $f$ should
consider them far away from each other in the feature
representation.[^12] We train a de-biased representation by
encouraging our model to be statistically independent of the
intentionally biased representation.
#### ReBias Optimization Problem
The optimization problem in ReBias is
$$\argmin_{g \in G} \underbrace{\cL(g, x, y)}_{\text{Original task loss}} - \lambda_g \underbrace{\operatorname{HSIC}_1(f(x), g(x))}_{\text{Minimize independence}}$$
for the intentionally biased model and
$$\argmin_{f \in F} \underbrace{\cL(f, x, y)}_{\text{Original task loss}} + \lambda_f \underbrace{\operatorname{HSIC}_1(f(x), g(x))}_{\text{Maximize independence}}$$
for our model. The minimax game being solved is thus
$$\min_{f \in F}\max_{g \in G} \cL(f) - \cL(g) + \lambda \operatorname{HSIC}_1(f, g).$$
During training, we update $f$ once, then update $g$ for a fixed $f$ $n$
times ($n = 1$ in the [official
implementation](https://github.com/clovaai/rebias/blob/master/trainer.py#L115)).
There are many other options, e.g., training $f$ and $g$ together on the
same loss value.
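A minimal sketch of this alternating scheme, assuming a differentiable `hsic` estimator (e.g., a batch-wise version of the estimator above) is passed in; the variable names and the use of `detach` to keep the other model fixed are our own choices, not the official implementation:

```python
import torch
import torch.nn.functional as F

def rebias_step(f, g, opt_f, opt_g, x, y, hsic, lam_f=1.0, lam_g=1.0, n_inner=1):
    """One ReBias round: update f once, then update g for n_inner steps with f fixed."""
    # Update the debiased model f: solve the task while being independent of g.
    opt_f.zero_grad()
    out_f = f(x)
    loss_f = F.cross_entropy(out_f, y) + lam_f * hsic(out_f, g(x).detach())
    loss_f.backward()
    opt_f.step()

    # Update the intentionally biased model g: solve the task while staying dependent on f.
    for _ in range(n_inner):
        opt_g.zero_grad()
        out_g = g(x)
        loss_g = F.cross_entropy(out_g, y) - lam_g * hsic(f(x).detach(), out_g)
        loss_g.backward()
        opt_g.step()
```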
#### Illustration of Training ReBias
![*Left.* Illustration of the ReBias minimax optimization problem. The
function $f$ is optimized to be highly different from $g$ while still
solving the task. The function $g$ is incentivized to stay as similar to
$f$ as possible. *Right.* The optimal, de-biased function $f^*$ leaves
hypothesis space $G$. Therefore, no function exists in $G$ that can
match $f^*$.](gfx/02_rebias_t.pdf){#fig:rebias_t width="\\linewidth"}
The training procedure is illustrated in
Figure [2.40](#fig:rebias_t){reference-type="ref"
reference="fig:rebias_t"}. Functions $f$ and $g$ are elements of
function spaces $F$ and $G$, respectively. The function $g$ is
architecturally constrained and we have $G \subset F$. (We can pad
kernels of $g$ by zeros to get a valid model $f \in F$ that simulates a
model with a small receptive field.) During the optimization procedure,
$g$ tries to catch up to $f$ (solve the task and maximize dependence).
In turn, $f$ tries to be different (run away) from $g$ (solve the task
and minimize dependence). Eventually, after doing this for a few
iterations, $f$ finally escapes the set of models $G$. Thus, no function
in $G$ can represent $f$ anymore (due to the architectural constraint),
and $f$ cannot leverage the simple cue that $g$ uses. Now, e.g., $f$
looks at global shapes instead of texture: $f$ becomes debiased.
![An illustrative sample from the Colored MNIST dataset variant used
in [@https://doi.org/10.48550/arxiv.1910.02806], taken from the
paper.](gfx/02_colormnist2.png){#fig:colormnist2 width="0.6\\linewidth"}
::: {#tab:rebiasres1}
: Results of ReBias on the biased and unbiased Colored MNIST test sets, from [@https://doi.org/10.48550/arxiv.1910.02806]. (Table entries not shown.)
:::
#### Results of ReBias
Let us first consider the results of the method on the Colored MNIST
dataset. In Colored MNIST, the color highly (or perfectly) correlates
with the digit shape in the training set. Learning color is a shortcut
to achieving high accuracy. Naively trained models will be biased
towards color because of simplicity bias. The paper uses a variant of
Colored MNIST in which all digits are white, but the background colors
are perfectly correlated with the digits. An illustrative sample from
the dataset can be seen in
Figure [2.41](#fig:colormnist2){reference-type="ref"
reference="fig:colormnist2"}. The model we wish to debias is a LeNet
architecture that can capture both color and shape. The intentionally
biased model is a BagNet architecture that uses $1 \times 1$
convolutions, which makes it very prone to relying on color. The
evaluation is performed both on biased and unbiased test sets. When
evaluating the trained model on a test set with bias identical to the
training set, we measure ID generalization performance. When using a
test set with unbiased samples (colors randomly assigned to samples),
the model relying on the bias cue would perform poorly. The exact
results are shown in
Table [\[tab:rebiasres1\]](#tab:rebiasres1){reference-type="ref"
reference="tab:rebiasres1"}. ReBias improves unbiased accuracy while
managing to retain biased accuracy.
Let us now turn to the task of action recognition with a strong static
bias. The authors use the Kinetics
dataset [@DBLP:journals/corr/abs-1907-06987] for training the model,
which has a strong bias towards static cues. For evaluation, the
Mimetics dataset [@mimetics] is used, which is stripped of the static
cues and only contains the pure actions. The model to be debiased is a
3D-ResNet-18 [@DBLP:journals/corr/abs-1711-11248] that can capture both
temporal and static cues. The intentionally biased model is a
2D-ResNet-18, which can only capture static cues (i.e., cues from
individual frames). As the results in
Table [2.3](#tab:rebiasres2){reference-type="ref"
reference="tab:rebiasres2"} show, ReBias improves unbiased accuracy
while also managing to improve biased accuracy.
::: {#tab:rebiasres2}
| Model description                     | Biased (Kinetics) | Unbiased (Mimetics) |
|---------------------------------------|-------------------|---------------------|
| Vanilla (`3D-ResNet18`)               | 54.5              | 18.9                |
| Biased (`2D-ResNet18`)                | 50.7              | 18.4                |
| `LearnedMixin` (Clark et al., 2019)   | 12.3              | 11.4                |
| `RUBi` (Cadene et al., 2019)          | 22.4              | 13.4                |
| `ReBias`                              | **55.8**          | **22.4**            |
: Results of ReBias on the Kinetics (biased) and Mimetics (unbiased)
datasets, compared to various previous methods we do not cover in the
book. Notably, ReBias is the most performant approach on *both* the
biased and unbiased datasets. The vanilla and biased results show the
performance of $f \in F$ and $g \in G$, respectively, trained using
ERM. The results are taken from the
paper [@https://doi.org/10.48550/arxiv.1910.02806].
:::
#### The Myopic Bias in Machine Learning
Let us first provide a definition for a *myopic model*.
::: definition
Myopic Model A myopic (short-sighted) model in ML refers to a model that
is limited in its scope or focus, and, therefore, may not be able to
capture all of the relevant features or information needed for robust
prediction and decision-making.
For example, a myopic model that only looks at texture may not be able
to capture other important visual cues such as shape, motion, or
context, which can be critical for accurate image recognition or object
detection. Similarly, a myopic model that only considers static frames
in a video may miss important information conveyed by the temporal
dynamics of the video, such as motion or changes over time, which can be
critical for accurate action recognition or activity detection. A
language model may also focus on word-level cues for the overall
sentiment of the sentence (e.g., frequency of 'not's).
:::
The intentionally biased models we are considering in ReBias are myopic.
The myopic bias appears a lot in ML in general: A very large model that
is capable of modeling all kinds of relationships in the data does not
learn complex relationships if the data itself is too simple and very
conducive to simple cues.
To avoid myopic models, we introduce a second network that is very
myopic, and use "be different" supervision, just like in LfF or ReBias.
Our model will then be able to leverage complex cues and relationships
better.
**Example**: Considering a language model $f \in F$ biased to word-level
cues, we can "subtract" a simple Bag-of-Words (BoW) model (or a simple
word embedding) $g \in G$ from the language model by using "be
different" supervision to obtain more global reasoning and a more robust
model.
#### Ingredients of ReBias
In ReBias, we use the usual ingredients for supervised learning
($\text{samples for } (X, Y) = (\text{input}, \text{output})$), plus
additional assumptions:
1. The bias is "myopic".
2. One can intentionally confine a family of functions to be myopic.
Using "be different" supervision by enforcing statistical independence,
we aim to obtain unbiased models that leverage robust cues.
## Scenario 3 for Selecting the Right Features
The last cross-bias generalization scenario from
Figure [2.32](#fig:scenarios){reference-type="ref"
reference="fig:scenarios"} we would like to discuss is Scenario 3. A
more detailed overview of this setting can be seen in
Figure [2.42](#fig:scenario3){reference-type="ref"
reference="fig:scenario3"}. Here, we assume biased training samples (a
labeled diagonal training set) without bias labels and a few labeled
test samples. In such a case, we can train multiple models with diverse
OOD behaviors, i.e., that have substantially different decision
boundaries in the input space.[^13] Considering the shape-color dataset,
the decision boundaries do not have to clearly cut any of the
human-interpretable cues (predict only based on color vs. predict only
based on shape). By having a diverse set of models, we can recognize
samples according to many cues that *might be* task cues in deployment.
We hope that one of them encodes what we want in the deployment
scenario. At deployment time, we choose the right model from this set
based on *a few labeled test samples*, then use it during deployment.
This corresponds to *domain adaptation* or *test-time training* --
different OOD generalization types where we have access to labeled
deployment (test) samples. In practice, this is usually done in the
context of *test-time training*, as the models are usually updated
through the deployment procedure. By labeling samples on the fly
(test-time training), one can perform model selection robustly.
![Overview of "Scenario 3" for selecting the right
features.](gfx/02_scenario3.pdf){#fig:scenario3 width="\\linewidth"}
**Note**: If we have deployment samples that are not unbiased and we
also have bias labels, we can still use group DRO, sample weighting, and
DANN.
In this scenario, the deployment domain is not necessarily unbiased. It
can be equally biased, just in other ways. The labeled test samples
decide the *task*. We select the best-performing model on the test
dataset (which is usually very small in size), e.g., based on accuracy.
::: information
Difference between having a few test samples and a few unbiased training
samples In practice, we are unlikely to have unbiased samples at test
time. When we do (e.g., as depicted in
Figure [2.42](#fig:scenario3){reference-type="ref"
reference="fig:scenario3"}), these scenarios *can* be the same, but
there can also be other distributional shifts between train/test. The
most likely case is that the deployment scenario contains many biased
samples but with biases that differ from the training set biases. In
this case, we aim to fine-tune/adapt our model to the specific bias at
test time rather than aiming to do well on an unbiased set. Scenario 3
ensures that we can adapt to any shift at deployment (test) time, as we
have direct access to deployment-time (test-time) data. This is a more
straightforward setting, providing more information about the deployment
scenario.
:::
Here is one of the possible *recipes* to deal with such a setting:
1. Train an ensemble of models with some "diversity" regularization.
2. At test time, use a few labeled samples or human inspection (if it
costs less than annotation time or we have special selection
criteria) to select the appropriate model that generalizes well; a
minimal sketch of this selection step is shown below.
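A hedged sketch of step 2, picking the ensemble member with the highest accuracy on the few labeled deployment samples (the simplest possible selection criterion; all names are hypothetical):

```python
import torch

@torch.no_grad()
def select_model(models, x_few, y_few):
    """Pick the ensemble member with the highest accuracy on a few labeled test samples."""
    accs = []
    for m in models:
        preds = m(x_few).argmax(dim=1)
        accs.append((preds == y_few).float().mean().item())
    best = max(range(len(models)), key=lambda i: accs[i])
    return models[best], accs
```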
This recipe gives rise to two questions:
1. How can we know that the samples we base our decision on are
representative of the whole test domain as time progresses?
2. How can we make sure that the set of models uses a diverse set of
cues?
For the first question, we have two possible answers:
- Adapt the model very frequently (e.g., every batch of data we
obtain).
- Trust that the deployment distribution is not going to change, e.g.,
for the next month, and update only every month.
By choosing either of the above, we also assume that these few labeled
samples are enough to determine the most performant model in the
deployment scenario.
For the second question, we cannot give a quick answer. If we naively
train $n$ models separately, all of them will likely focus on easy cues
because of the simplicity bias of DNNs. That is why we need explicit
regularization to enforce diversity. In the next section, we will focus
on one of the methods that do exactly that.
### Predicting is not Understanding
To look at one of the methods for diversifying models, let us discuss
the paper "[Predicting is not Understanding: Recognizing and Addressing
Underspecification in Machine
Learning](https://arxiv.org/abs/2207.02598)" [@https://doi.org/10.48550/arxiv.2207.02598].
The intuition behind this method is that diverse ensemble training can
be achieved by enforcing "independence" between models through the
orthogonality of input gradients.
One way to achieve this is to add an orthogonality constraint to the
loss.[^14] Such a constraint can be represented as the squared cosine
similarity of the input gradients for the same input:
$$\cL_\mathrm{indep}\left(\nabla_x f_{\theta_{m_1}}(x), \nabla_x f_{\theta_{m_2}}(x)\right) = \cos^2\left(\nabla_x f_{\theta_{m_1}}(x), \nabla_x f_{\theta_{m_2}}(x)\right).$$
Our goal is to have orthogonal input gradients. As this constraint is
differentiable, we optimize it using Deep Learning (DL).
::: information
Shape of Gradients In the orthogonality constraint, the gradients are of
the logits, not of the loss. This results in a 4D tensor for multi-class
classification. We simply flatten this tensor and calculate the squared
cosine similarity. We only have a 1D output for binary classification,
so the gradients will have the same shape as the input image. The paper
focuses on binary classification. The independence loss used by the
paper requires $\cO(M^2)$ network evaluations, where $M$ is the number
of models in our diverse set.
:::
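Below is a minimal PyTorch-style sketch of this pairwise constraint for binary classifiers with a single scalar logit, matching the paper's binary-classification focus; the helper name and the summation over only the $i < j$ pairs (the diagonal terms are constant) are our own choices:

```python
import torch
import torch.nn.functional as F

def independence_loss(models, x):
    """Squared cosine similarity between the input gradients of all model pairs."""
    x = x.clone().detach().requires_grad_(True)
    grads = []
    for m in models:
        logit = m(x).sum()                      # scalar logit per sample, summed over the batch
        g = torch.autograd.grad(logit, x, create_graph=True)[0]
        grads.append(g.flatten(start_dim=1))    # shape (batch, d_in)
    loss = x.new_zeros(())
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            cos = F.cosine_similarity(grads[i], grads[j], dim=1)
            loss = loss + (cos ** 2).mean()
    return loss
```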
#### Intuition for orthogonal input gradients
Suppose that we have two models, $m_1$ and $m_2$, and two different
regions of the image: background and foreground. If $m_2$ is looking at
the background, there is a significant focus on the background parts in
the input gradient. We want the input gradient of $m_1$ to be orthogonal
to that of $m_2$, as that will result in $m_1$ focusing more on the
foreground.
#### Formal reasoning about independence
We define "independence" as the statistical independence of model
outputs for a local Gaussian perturbation around every $x$ in the input
space. We measure the change in output for model 1 and model 2 using
this Gaussian perturbation. The perturbation is small enough to
approximate a model via its linear tangent function (input gradient).
For infinitesimally small perturbations ($\sigma \downarrow 0$), changes
in logits between $x$ and $\tilde{x}$ can be approximated through
linearization by the input gradients $\nabla_x f$. In particular, for
$\sigma \downarrow 0$, the relative change in the logits from $x$ to
$\tilde{x}$ is exactly given by the directional derivative
$\left\langle\nabla_x f(x), \frac{\tilde{x} - x}{\Vert \tilde{x} - x \Vert} \right\rangle$.
Why can we use the orthogonality of the input gradients for measuring
statistical independence? It can be shown that the statistical
independence of the model outputs is equivalent to the geometrical
orthogonality of the input gradients when $\sigma \downarrow 0$ for the
local Gaussian perturbation. The local independence for a particular
input $x$ is defined as
$$f_{\theta_1}(\tilde{x}) \indep f_{\theta_2}(\tilde{x}), \tilde{x} \sim \cN(x, \sigma I) \in \nR^{d_{\mathrm{in}}},$$
and global independence for a particular input $x$ means that in a set
of predictors $\{f_{\theta_1}, \dots, f_{\theta_M}\}$, all pairs are
locally independent around $x$.
We need one more ingredient to ensure that the models are diverse in
meaningful ways. The set of orthogonal models increases exponentially
with the input dimensionality. For images, we have overwhelmingly many
orthogonal models -- the input space might be close to being
1M-dimensional. However, the relevant subset of images that make sense
inside this space is quite low-dimensional. This low-dimensional subset
is the *data manifold*. We want to confine our exploration of decision
boundaries to the manifold rather than the entire space. The reason is
that diversification regularization without on-manifold constraints may
result in models that are only diversified in the vast non-data-manifold
dimensions, which means that they behave similarly on on-manifold
samples.
We visualize an intuitive example of how models with orthogonal input
gradients might still behave identically on the data manifold in
Figure [2.43](#fig:onmanifold){reference-type="ref"
reference="fig:onmanifold"}.
![Example that highlights the importance of the on-manifold constraint
in Predicting is not
Understanding [@https://doi.org/10.48550/arxiv.2207.02598]. We consider
a 1D line as our data manifold and a binary classification problem. In
the case of a linear classifier, the normal of the decision boundary is
exactly the input gradient. If we project the decision boundaries onto
the data manifold, they become identical. This means that even though
the weights of the two models are orthogonal, they make identical
decisions on the data manifold.](gfx/02_onmanifold.pdf){#fig:onmanifold
width="0.6\\linewidth"}
#### On-Manifold Constraints
In Predicting is not Understanding, the input gradient is regularized to
be "on" the data manifold. We use a Variational Autoencoder
(VAE) [@https://doi.org/10.48550/arxiv.1312.6114] to learn an
approximation of the data distribution from unlabeled samples, i.e., to
learn the data manifold $\cM$. One can then project any vector
$\in \nR^{d_{\mathrm{in}}}$ in the input space onto this data manifold
by using the VAE
$\operatorname{proj}_\cM\colon \nR^{d_\mathrm{in}} \times \nR^{d_\mathrm{in}} \rightarrow \cM$.
This VAE is trained to be capable of projecting a vector $v$ (the
gradient in the application) to the tangent plane of the manifold at
point $x$. For OOD samples, this means that we want
$$\operatorname{proj}_\cM(x, v) \approx v\quad \forall x \sim P_{\mathrm{OOD}}, x + v \sim P_\mathrm{OOD},$$
which is achieved by training the VAE to reconstruct the OOD images and
applying a similar series of transformations to the vector $v$ as well.
Further details can be read in the paper.
The on-manifold constraint is
$$\cL_\mathrm{manifold}(\nabla f(x)) = \Vert \operatorname{proj}_{\cM}(x, \nabla_x f(x)) - \nabla_x f(x) \Vert_2^2,$$
where $\operatorname{proj}_\cM$ is the projection of the gradient onto
the tangent space of the manifold at point $x$. This loss term forces
the input gradient to be aligned with the data manifold.
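A minimal sketch of this term, treating the VAE-based projection $\operatorname{proj}_\cM$ as a given helper (`project_onto_manifold` below is a hypothetical callable; we do not re-implement the VAE machinery here):

```python
import torch

def manifold_loss(model, x, project_onto_manifold):
    """||proj_M(x, grad_x f(x)) - grad_x f(x)||_2^2, averaged over the batch."""
    x = x.clone().detach().requires_grad_(True)
    logit = model(x).sum()                                 # scalar logit per sample, summed
    grad = torch.autograd.grad(logit, x, create_graph=True)[0]
    proj = project_onto_manifold(x, grad)                  # assumed helper approximating proj_M
    return ((proj - grad) ** 2).flatten(start_dim=1).sum(dim=1).mean()
```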
Used together with the independence constraint, the model is constrained
to have orthogonal gradients that are roughly inside the data manifold.
Intuitively, when the independence constraint influences a model's
gradients in dimensions oriented outwards from the manifold, it does not
impact its predictions on natural data. Consequently, models that
produce identical predictions on every natural input could satisfy the
independence constraint because their decision boundaries are identical
when projected onto the manifold. This drastically reduces the search
space for new models and ensures that the next model in the ensemble
will look at a *meaningful* new cue.
::: information
How to choose the dimensionality of the VAE latent space? We do not have
to know the dimensionality of the manifold, as it is perfectly fine if
we choose more models than that. Adding new models can
still lead to more diversity, but it will be impossible to enforce the
perfect orthogonality of the input gradients. It is also not a problem
if we have fewer dimensions than the actual number of dimensions of the
manifold in the VAE latent space, as one can embed higher-dimensional
factors of variation into lower dimensions.
:::
#### Putting it all together
![Overview of the Predicting is not Understanding method, taken
from [@https://doi.org/10.48550/arxiv.2207.02598].](gfx/02_pinu.pdf){#fig:pinu
width="0.6\\linewidth"}
Our final loss function is $$\begin{aligned}
\cL(\cD_{\mathrm{tr}}, \theta_1, \dots, \theta_M) = \sum_{(x, y) \in \cD_{\mathrm{tr}}} &\Bigg[\frac{1}{M}\sum_{m = 1}^M \cL_\mathrm{pred}\left(y, \sigma(f_{\theta_m}(x))\right)\\
&+ \frac{1}{M^2} \sum_{m_1 = 1}^M \sum_{m_2 = 1}^M \lambda_{\mathrm{indep}} \cL_\mathrm{indep}\left(\nabla_x f_{\theta_{m_1}}(x), \nabla_x f_{\theta_{m_2}}(x)\right)\\
&+ \frac{1}{M} \sum_{m = 1}^M \lambda_\mathrm{manifold} \cL_{\mathrm{manifold}}\left(\nabla_x f_{\theta_m}(x)\right)\Bigg]
\end{aligned}$$ that encapsulates the prediction losses, the
independence losses, and the on-manifold losses for the $M$ models.
![Examples of collages of four tiles in Predicting is not
Understanding [@https://doi.org/10.48550/arxiv.2207.02598].](gfx/02_collage.png){#fig:collage
width="0.8\\linewidth"}
::: {#tab:pinu_res}
| **Collages** dataset (accuracy in %) | Best model on tile 1 | Best model on tile 2 | Best model on tile 3 | Best model on tile 4 | Avg. |
|--------------------------------------------------------------------------------------|----------|----------|----------|----------|----------|
| Upper bound (training on test-domain data)                                            | 99.9     | 92.4     | 80.8     | 68.6     | 85.5     |
| ERM Baseline                                                                          | 99.8     | 50.0     | 50.0     | 50.0     | 62.5     |
| Spectral decoupling [@https://doi.org/10.48550/arxiv.2011.09468]                      | 99.9     | 49.8     | 50.6     | 49.9     | 62.5     |
| With penalty on L1 norm of gradients                                                  | 98.5     | 49.6     | 50.5     | 50.0     | 62.1     |
| With penalty on L2 norm of gradients [@https://doi.org/10.48550/arxiv.1908.02729]     | 96.6     | 52.1     | 52.3     | 54.3     | 63.8     |
| Input dropout (best ratio: 0.9)                                                       | 97.4     | 50.7     | 56.1     | 52.1     | 64.1     |
| Independence loss (cosine similarity) [@https://doi.org/10.48550/arxiv.1911.01291]    | 99.7     | 50.4     | 51.5     | 50.2     | 63.0     |
| Independence loss (dot product) [@teney2021evading]                                   | 99.5     | 53.5     | 53.3     | 50.5     | 64.2     |
| *With many more models*                                                               |          |          |          |          |          |
| Independence loss (cosine similarity), [1024]{.underline} models                      | 99.5     | 58.1     | 66.8     | 63.0     | 71.9     |
| Independence loss (dot product), [128]{.underline} models                             | 98.7     | 84.9     | 71.6     | 61.5     | 79.2     |
| *Proposed method (only 8 models)*                                                     |          |          |          |          |          |
| Independence + on-manifold constraints, PCA                                           | 97.3     | 69.8     | 62.2     | 60.0     | 72.3     |
| Independence + on-manifold constraints, VAE ($^\ast$)                                 | 96.5     | 85.1     | 61.1     | 62.1     | 76.2     |
| ($^\ast$) + FT (fine-tuning)                                                          | 99.7     | 90.9     | 81.4     | 67.4     | 84.8     |
| ($^\ast$) + FT + pairwise combinations (1$\times$)                                    | 99.9     | 92.2     | 79.3     | 66.3     | 84.4     |
| ($^\ast$) + FT + pairwise combinations (2$\times$)                                    | 99.9     | 92.5     | 80.2     | 67.5     | 85.0     |
| **($^\ast$) + FT + pairwise combinations (3$\times$)**                                | **99.9** | **92.3** | **80.8** | **68.5** | **85.4** |
: Results of Predicting is not
Understanding [@https://doi.org/10.48550/arxiv.2207.02598]. The ERM
baseline only learns to look at the MNIST tile and performs random
prediction for all other cues. The independence loss is also not
enough by itself. **Fine-tuning**: After training a set of models, the
authors remove the independence and on-manifold constraints and
fine-tune the models by applying a binary mask on the pixels/channels
such that each model is fine-tuned only on the parts of the image most
relevant to it (as measured by the magnitude of the input gradient
$\nabla_{x} f_{\theta_m}(x)$ among the models for each
pixel/channel). **Pairwise combinations**: After training and
fine-tuning a set of models, they combine the best of them (as given
by our metric of choice on the OOD validation set) into a global one
that uses all of the most relevant features. They train this global
model from scratch, without regularizers, on masked data, using masks
from the selected models combined with a logical OR. They repeat this
pairwise combination as long as the accuracy of the global model
increases. They always append the new models into the set of models.
Independence + on-manifold constraints + VAE + FT + pairwise
combinations (3x) performs best, achieving almost the upper bound
(training on test-domain data). The upper bound accuracy can be
achieved, e.g., if we have four models specializing perfectly in
different quadrants. Based on these results, the models are indeed
very diverse.
:::
#### Results of Predicting is not Understanding
The method is evaluated on a *collage dataset* with controllable
correlation among the four collage images. The four datasets used are
MNIST, Fashion MNIST [@https://doi.org/10.48550/arxiv.1708.07747],
CIFAR [@krizhevsky2009learning], and SVHN [@37648]. We have ten classes
for each dataset. We put one sample from each dataset in a window of
four elements, as shown in
Figure [2.45](#fig:collage){reference-type="ref"
reference="fig:collage"}. During training, a dataset with a perfect
correlation between the four labels is used (e.g., the meta-class 0 is
the quadruple (zero, pullover, automobile, zero)). This is a biased,
diagonal dataset. For evaluation, we use a dataset with no correlation
between the four labels. This is an unbiased dataset with off-diagonal
samples as well. We label this test set according to one of the tiles,
e.g., MNIST or CIFAR, and -- based on the results -- we find out which
cue (which quadrant) each model learned.
This is *cross-bias generalization*: At test time, we break the
correlation among the four quadrants. Our expectation is that by
training independent models, we should be able to get four different
models that look at different quadrants. The results are shown in
Table [2.4](#tab:pinu_res){reference-type="ref"
reference="tab:pinu_res"}. To measure performance, the authors perform
test-time oracle model selection: they are using test-time information
by choosing the best model on each test-time dataset. A possible
justification of test-time oracle model selection is that, in practice,
we have a few labeled samples at test time to select the most performant
model. These reported numbers show the upper bound on the performance in
the scenario above because they are chosen based on the entire test set,
not just a few samples.
#### Ingredients for Predicting is not Understanding
The Predicting is not Understanding method uses the usual ingredients
for supervised learning
($\text{samples for } (X, Y) = (\text{input}, \text{output})$) to find a
set of diverse hypotheses $m_1, \dots, m_M$. The model selection takes
place at *test time*. This work only shows the upper bound of attainable
performance using the perfect test-time model selection.
::: definition
Kullback-Leibler Divergence The Kullback-Leibler divergence from
distribution $Q$ to distribution $P$ with densities $q, p$ is given by
$$\operatorname{KL}\left(P \Vert Q\right) = \int_\cX p(x) \log \frac{p(x)}{q(x)}\ dx.$$
:::
::: information
Independence and Input Gradients
::: proposition
A pair of predictors $f_{\theta_1}, f_{\theta_2}$ are locally
independent at $x$ iff the mutual information
$\operatorname{MI}(f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})) = 0$
with
$\tilde{x} \sim \cN(x, \sigma^2I)$ [@https://doi.org/10.48550/arxiv.2207.02598].
:::
::: proof
*Proof.* A pair of predictors $f_{\theta_1}, f_{\theta_2}$ are defined
to be locally independent at x iff their predictions are statistically
independent for Gaussian perturbations around $x$:
$$f_{\theta_1}(\tilde{x}) \indep f_{\theta_2}(\tilde{x})$$ with
$\tilde{x} \sim \cN(x, \sigma^2I)$.
The definition of mutual information is
$$\operatorname{MI}(f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})) = D_{\operatorname{KL}}(P_{f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})} \Vert P_{f_{\theta_1}(\tilde{x})} \otimes P_{f_{\theta_2}(\tilde{x})}).$$
It is a well-known fact that
$D_{\operatorname{KL}}(P \Vert Q) = 0 \iff P \equiv Q$. From this, we
immediately see that
$$\operatorname{MI}(f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})) = 0 \iff f_{\theta_1}(\tilde{x}) \indep f_{\theta_2}(\tilde{x}).$$ ◻
:::
For infinitesimally small perturbations ($\sigma \downarrow 0$), the
variables $f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})$ can be
approximated through linearization wrt. the input gradients
$\nabla_x f$:
$$f(\tilde{x}) \approx f(x) + \nabla_x f(x)^\top (\tilde{x} - x) =: \hat{f}(\tilde{x}).$$
::: claim
Following the above definition, $\hat{f}_{\theta_1}(\tilde{x})$ and
$\hat{f}_{\theta_2}(\tilde{x})$ are 1D Gaussian random variables.
:::
::: proof
*Proof.* By definition,
$\hat{f}(\tilde{x}) = f(x) + \nabla_x f(x)^\top (\tilde{x} - x) = \underbrace{\nabla_x f(x)^\top}_{A :=}\tilde{x} + \underbrace{f(x) - \nabla_x f(x)^\top x}_{b :=}$.
As $\tilde{x} \sim \cN(x, \sigma^2I)$, we know that $$\begin{aligned}
\hat{f}(\tilde{x}) = A\tilde{x} + b &\sim \cN(Ax + b, \sigma^2 AA^\top)\\
&\sim \cN(Ax + f(x) - Ax, \sigma^2 AA^\top)\\
&\sim \cN(f(x), \sigma^2 \nabla_x f(x)^\top \nabla_x f(x)).
\end{aligned}$$ Substituting $f_{\theta_1}$ or $f_{\theta_2}$ into $f$
directly gives the statement. ◻
:::
::: claim
The correlation of $\hat{f}_{\theta_1}(\tilde{x})$ and
$\hat{f}_{\theta_2}(\tilde{x})$ is given by
$\cos(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x))$.
:::
::: proof
*Proof.* We know from before that $$\begin{aligned}
\hat{f}_{\theta_1}(\tilde{x}) &\sim \cN(f_{\theta_1}(x), \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x))\\
\hat{f}_{\theta_2}(\tilde{x}) &\sim \cN(f_{\theta_2}(x), \sigma^2 \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)).
\end{aligned}$$ It follows that $$\begin{aligned}
\rho(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) &= \frac{\Cov(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x}))}{\sqrt{\sigma^4 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x)\nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}}.
\end{aligned}$$ First, we calculate the covariance: $$\begin{aligned}
\Cov(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) &= \nE[\hat{f}_{\theta_1}(\tilde{x})\hat{f}_{\theta_2}(\tilde{x})] - \nE[\hat{f}_{\theta_1}(\tilde{x})]\nE[\hat{f}_{\theta_2}(\tilde{x})]\\
&= \nE[\hat{f}_{\theta_1}(\tilde{x})\hat{f}_{\theta_2}(\tilde{x})] - f_{\theta_1}(x)f_{\theta_2}(x)\\
&=\begin{multlined}[t] \nE[(f_{\theta_1}(x) + \nabla_x f_{\theta_1}(x)^\top (\tilde{x} - x))(f_{\theta_2}(x) + \nabla_x f_{\theta_2}(x)^\top (\tilde{x} - x))] \\- f_{\theta_1}(x)f_{\theta_2}(x)\end{multlined}\\
&= f_{\theta_1}(x)f_{\theta_2}(x) + f_{\theta_1}(x)\nE[\nabla_x f_{\theta_2}(x)^\top (\tilde{x} - x)] + f_{\theta_2}(x) \nE[\nabla_x f_{\theta_1}(x)^\top (\tilde{x} - x)]\\
&\quad+ \nE[\nabla_x f_{\theta_1}(x)^\top (\tilde{x} - x)\nabla_x f_{\theta_2}(x)^\top (\tilde{x} - x)] - f_{\theta_1}(x)f_{\theta_2}(x)\\
&= \nE[\nabla_x f_{\theta_1}(x)^\top (\tilde{x} - x)\nabla_x f_{\theta_2}(x)^\top (\tilde{x} - x)]\\
&= \nabla_x f_{\theta_1}(x)^\top \nE[(\tilde{x} - x)(\tilde{x} - x)^\top] \nabla_x f_{\theta_2}(x)\\
&= \nabla_x f_{\theta_1}(x)^\top \left(\Cov(\tilde{x} - x) + \nE[(\tilde{x} - x)]\nE[(\tilde{x} - x)]^\top\right) \nabla_x f_{\theta_2}(x)\\
&= \nabla_x f_{\theta_1}(x)^\top (\sigma^2I) \nabla_x f_{\theta_2}(x)\\
&= \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x).
\end{aligned}$$ Plugging this back into the correlation formula, we
obtain $$\begin{aligned}
\rho(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) &= \frac{\Cov(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x}))}{\sqrt{\sigma^4 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x)\nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}}\\
&= \frac{\sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x)}{\sigma^2 \sqrt{\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x)}\sqrt{\nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}}\\
&= \frac{\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x)}{\sqrt{\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x)}\sqrt{\nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}}\\
&= \cos(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x)).
\end{aligned}$$ ◻
:::
::: claim
The mutual information of $\hat{f}_{\theta_1}(\tilde{x})$ and
$\hat{f}_{\theta_2}(\tilde{x})$ is given by
$-\frac{1}{2} \log(1 -\cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x)))$.
:::
::: proof
*Proof.* $$\begin{aligned}
\operatorname{MI}(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) &= D_{\operatorname{KL}}(P_{\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})} \Vert P_{\hat{f}_{\theta_1}(\tilde{x})} \otimes P_{\hat{f}_{\theta_2}(\tilde{x})})\\
&= \int_{x_1 \in \cX} \int_{x_2 \in \cX} p(\hat{f}_{\theta_1}(x_1), \hat{f}_{\theta_2}(x_2)) \log \frac{p(\hat{f}_{\theta_1}(x_1), \hat{f}_{\theta_2}(x_2))}{p(\hat{f}_{\theta_1}(x_1))p(\hat{f}_{\theta_2}(x_2))}dx_2dx_1\\
&= \nH(\hat{f}_{\theta_1}(\tilde{x})) + \nH(\hat{f}_{\theta_2}(\tilde{x})) - \nH(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})).
\end{aligned}$$ Now we notice that $\hat{f}_{\theta_1}(\tilde{x})$ and
$\hat{f}_{\theta_2}(\tilde{x})$ are also *jointly* Gaussian:
$$\begin{aligned}
\begin{pmatrix}\hat{f}_{\theta_1}(\tilde{x}) \\ \hat{f}_{\theta_2}(\tilde{x})\end{pmatrix} &\sim \cN\left(\begin{pmatrix} \mu_{\hat{f}_{\theta_1}(\tilde{x})} \\ \mu_{\hat{f}_{\theta_2}(\tilde{x})}\end{pmatrix}, \begin{bmatrix}\Var(\hat{f}_{\theta_1}(\tilde{x})) & \Cov(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) \\ \Cov(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x})) & \Var(\hat{f}_{\theta_2}(\tilde{x}))\end{bmatrix}\right)\\
&\sim \cN\left(\begin{pmatrix}f_{\theta_1}(x) \\ f_{\theta_2}(x)\end{pmatrix}, \begin{bmatrix}\sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x) & \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x) \\ \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x) & \sigma^2 \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)\end{bmatrix}\right).
\end{aligned}$$ Below, we derive the formula for the entropy of a
multivariate Gaussian $x \sim \cN(\mu, \Sigma) \in \nR^n$:
$$\begin{aligned}
\nH(x) &= -\int p(x) \log p(x) dx\\
&= -\nE_x[\log \cN(x \mid \mu, \Sigma)]\\
&= -\nE_x\left[\log \left(\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^\top \Sigma^{-1}(x - \mu)\right)\right)\right]\\
&= \nE_x \left[\frac{n}{2} \log(2\pi) + \frac{1}{2}\log |\Sigma| + \frac{1}{2}(x - \mu)^\top \Sigma^{-1}(x - \mu)\right]\\
&= \frac{n}{2}\log(2\pi) + \frac{1}{2}\log |\Sigma| + \frac{1}{2}\nE_x[(x - \mu)^\top \Sigma^{-1} (x - \mu)]\\
&= \frac{n}{2}\log(2\pi) + \frac{1}{2}\log |\Sigma| + \frac{1}{2}\nE_x[\operatorname{tr}((x - \mu)^\top \Sigma^{-1} (x - \mu))]\\
&= \frac{n}{2}\log(2\pi) + \frac{1}{2}\log |\Sigma| + \frac{1}{2}\nE_x[\operatorname{tr}(\Sigma^{-1} (x - \mu)(x - \mu)^\top)]\\
&= \frac{n}{2}\log(2\pi) + \frac{1}{2}\log |\Sigma| + \frac{1}{2}\operatorname{tr}(\Sigma^{-1} \underbrace{\nE_x[(x - \mu)(x - \mu)^\top]}_{\Sigma})\\
&= \frac{n}{2}(1 + \log(2 \pi)) + \frac{1}{2} \log |\Sigma|.
\end{aligned}$$ Finally, we plug this into our formula for the mutual
information: $$\begin{aligned}
\operatorname{MI}(\hat{f}_{\theta_1}(\tilde{x})&, \hat{f}_{\theta_2}(\tilde{x})) = \nH(\hat{f}_{\theta_1}(\tilde{x})) + \nH(\hat{f}_{\theta_2}(\tilde{x})) - \nH(\hat{f}_{\theta_1}(\tilde{x}), \hat{f}_{\theta_2}(\tilde{x}))\\
&= \frac{1}{2}(1 + \log (2\pi)) + \frac{1}{2} \log \left(\sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x)\right)\\
&\quad+ \frac{1}{2}(1 + \log (2\pi)) + \frac{1}{2} \log \left(\sigma^2 \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)\right)\\
&\quad- 1 - \log(2\pi) - \frac{1}{2} \log(\sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x) \cdot \sigma^2 \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)\\
&\hspace{9.67em}- \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x) \cdot \sigma^2 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x))\\
&= \frac{1}{2}\log \frac{\sigma^4 \nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x) \cdot \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}{\sigma^4 \left(\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x) \cdot \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x) - \left(\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x)\right)^2\right)}\\
&= -\frac{1}{2}\log \left(1 - \frac{\left(\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_2}(x)\right)^2}{\nabla_x f_{\theta_1}(x)^\top \nabla_x f_{\theta_1}(x) \cdot \nabla_x f_{\theta_2}(x)^\top \nabla_x f_{\theta_2}(x)}\right)\\
&= -\frac{1}{2}\log(1 - \cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x))).
\end{aligned}$$ ◻
:::
**Putting everything together**: For an infinitesimal perturbation
($\sigma \downarrow 0$), we know that
$\hat{f}_{\theta_1}(\tilde{x}) = f_{\theta_1}(\tilde{x})$ and
$\hat{f}_{\theta_2}(\tilde{x}) = f_{\theta_2}(\tilde{x})$, i.e., the
linearization is exact. Of course, we have to re-linearize after every
gradient step. By driving the mutual information to zero, we enforce
statistical independence between $f_{\theta_1}(\tilde{x})$ and
$f_{\theta_2}(\tilde{x})$. It is also easy to see that for
$\sigma \downarrow 0$, $$\begin{aligned}
\min_{\theta_1, \theta_2} \operatorname{MI}(f_{\theta_1}(\tilde{x}), f_{\theta_2}(\tilde{x})) &= \min_{\theta_1, \theta_2} -\frac{1}{2}\log(1 - \cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x)))\\
&= \max_{\theta_1, \theta_2} \log(1 - \cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x)))\\
&= \max_{\theta_1, \theta_2} 1 - \cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x))\\
&= \min_{\theta_1, \theta_2} \cos^2(\nabla_x f_{\theta_1}(x), \nabla_x f_{\theta_2}(x)).
\end{aligned}$$ (The equalities hold between the optimizers, as $\log$ and the affine maps involved are monotone.) Therefore, the local independence loss
$$\cL_{\mathrm{indep}}(\nabla_x f_{\theta_{m_1}}(x), \nabla_x f_{\theta_{m_2}}(x)) = \cos^2(\nabla_x f_{\theta_{m_1}}(x), \nabla_x f_{\theta_{m_2}}(x))$$
for a pair of models $(m_1, m_2)$ indeed encourages the statistical
independence of the models' outputs, considering an infinitesimal
Gaussian perturbation around the input $x$. The obvious minimizer of the
term is any constellation where the two input gradients are orthogonal.
**Note**: It is also easy to see from the correlation and mutual
information formulas that for jointly Gaussian variables, zero
correlation is equivalent to independence. This is, of course, not true
in general.
:::
## Adversarial OOD Generalization
OOD generalization is about dealing with uncertainty. It is easy to make
a model generalize well to a single possible environment. As we
introduce more environments, this becomes harder and harder until we
arrive at an infinite number of environments or "any environment". This
tendency is illustrated in
Figure [2.46](#fig:knowledge){reference-type="ref"
reference="fig:knowledge"}. As we have more and more knowledge about
what will happen at deployment time, the space of possible environments
shrinks, and thus we become more certain. A parallel can be drawn with
the notion of entropy: If we already have much knowledge, additional
information has a small entropy.
![The size of the space of possible environments shrinks as we have more
and more information about
deployment.](gfx/02_knowledge.pdf){#fig:knowledge
width="0.8\\linewidth"}
The question is: How can we take care of an infinite number of possible
environments? There are two general methods for dealing with uncertainty
when we do not know the deployment environment perfectly (or the *enemy*
who is trying to give us a hard environment):
1. **Make an educated guess.** For a good guess, this is a nice,
practical solution that is easy to carry out. However, we obtain no
guarantees: We do not know if we made the right guess. In the worst
case, we are not making any progress. Many methods seen so far fall
into this category, e.g., ReBias, Predicting is not Understanding,
and Learning from Failure (by guessing the bias).
2. **Prepare for the worst.** Here, we have a so-called *adversarial
environment*. By following this principle, we can obtain theoretical
lower bound guarantees: Our model's performance against the
worst-case environment provides a lower bound on its performance
against the space of possible environments. (We are safe for the
worst-case scenario from a set of possible scenarios, so we are also
safe for all of them.) An important caveat is that the guarantee is
only within the pre-set space of possible environments (the strategy
space). Outside of this, we have no guarantees. This approach can
also lead to unrealistically pessimistic solutions.
As they both have their pros and cons, there is no single right answer:
it is a matter of choice and depends on our application.
So far, we have only considered OOD generalization methods for making an
educated guess. Now, let us discuss *adversarial generalization* that
comprises methods that prepare for the worst.
::: definition
Adversarial Generalization Adversarial generalization is an ML technique
for "preparing for the worst-case scenario" when we do not know the
target scenario/distribution in deployment.
:::
The following subsections will describe this type of OOD generalization
in more detail.
### Formulation of a General Adversarial Environment
Before discussing adversarial generalization, let us first introduce the
notion of a *devil*.
::: definition
Devil The devil is a (known or unknown) adversary that actively tries to
find the worst environment for us from the strategy space according to
the adversarial goal and knowledge. The more knowledge it has, the worse
environments it can specify for us.
:::
A general adversarial environment is specified by three parts: the
*adversarial goal*, the *strategy space*, and the *knowledge*.[^15] The
exact definitions of these parts are given below.
::: definition
Adversarial Goal The adversarial goal is a key component of an
adversarial setting that specifies which environment is considered
"worst" for our model.
:::
::: definition
Strategy Space The strategy space in an adversarial setting defines the
space of possible environments the devil can choose from.
:::
::: definition
Knowledge The knowledge of the devil in an adversarial setting specifies
the devil's ability to pick the worst environment for our model. In
short, it defines what the devil knows about the model.
:::
In the next sections, we will discuss how exactly the devil can achieve
their goals.
### Fast Gradient Sign Method (FGSM)
First, we start with the definition of a *white-box attack*, as the Fast
Gradient Sign Method (FGSM) falls into this category.
::: definition
White-Box Attack When the adversary knows the model architecture and the
weights, we call the attack a white-box attack.
:::
If the devil does not want to think too much, then FGSM can be a popular
first choice, as it is one of the simplest ways to achieve adversarial
goals. The FGSM attack, introduced in "[Explaining and Harnessing
Adversarial
Examples](https://arxiv.org/abs/1412.6572)" [@https://doi.org/10.48550/arxiv.1412.6572],
is a type of $L_\infty$ adversarial attack. Its three ingredients are
listed below.
- **Adversarial Goal**: Reducing classification accuracy while being
imperceptible to humans.
- **Strategy Space**: For every sample, the adversary may add a
perturbation $dx$ with norm $\Vert dx \Vert_\infty \le \epsilon$ (to
make sure it is imperceptible).
- **Knowledge**: Access to the model architecture, weights, and thus
gradients. (White-box attack.)
The iconic image from [@https://doi.org/10.48550/arxiv.1412.6572], shown
in Figure [2.47](#fig:iconic){reference-type="ref"
reference="fig:iconic"}, depicts the attack, where a small perturbation
applied completely destroys the model's prediction performance ("gibbon"
with 99.3% confidence). A more general informal illustration of this
scenario is given in Figure [2.48](#fig:fgsm){reference-type="ref"
reference="fig:fgsm"}.
![Demonstration of FGSM, taken
from [@https://doi.org/10.48550/arxiv.1412.6572]. By adding some noise
of small magnitude, the network very confidently predicts an incorrect
class, destroying the performance of the model. $J$ is the cost function
(loss) we wish to *maximize*.](gfx/02_iconic.png){#fig:iconic
width="0.8\\linewidth"}
![Informal illustration of the FGSM method's strategy space. The devil
aims to find an adversarial sample in the $L_\infty$ $\epsilon$-ball
around the original input $x$.](gfx/02_fgsm.pdf){#fig:fgsm
width="0.35\\linewidth"}
#### FGSM Method
The FGSM attack perturbs the image $x$ as
$$x + \epsilon\ \operatorname{sgn}\left(\nabla_x \cL(\theta, x, y)\right)$$
where $\cL$ is the loss function used for model $\theta$, $y$ is the
ground truth label, and the sign function is applied element-wise. One
also has to ensure that the image stays in the range
$[0, 1]^{H \times W \times 3}$, e.g., by clipping or re-normalizing. $\epsilon$ is
the size of the perturbation, which is determined by the strategy space.
It defines the maximal $L_\infty$ norm of the perturbation.
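A minimal PyTorch-style sketch of this one-step attack (the clamping to $[0, 1]$ and the function name are our own):

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    """One-step FGSM: move eps in the sign direction of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep a valid image in [0, 1]
```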
Let us consider the pros and cons of FGSM below.
- **Pros**: The method is very simple: we take the sign of the
gradient of the loss, i.e., the direction in which the loss
increases the most around $x$. The method is also cheap: it only
requires one forward and one backward pass per sample to create an
adversarial perturbation, which makes it swift to obtain.
- **Cons**: The method does not give an optimal result. The perturbed
image does not necessarily correspond to the worst-case sample in
the $L_\infty$ ball (though it still generally yields a good
adversarial attack against unprotected networks).
### Projected Gradient Descent (PGD)
If the devil wants to do something more sophisticated to succeed, the
Projected Gradient Descent (PGD) might be a favorable choice for them.
The PGD attack, introduced in the paper "[Towards Deep Learning Models
Resistant to Adversarial
Attacks](https://arxiv.org/abs/1706.06083)" [@https://doi.org/10.48550/arxiv.1706.06083],
is a type of $L_p$ adversarial attack ($1 \le p \le \infty$), which is
the strongest white-box attack to date (also because not many people are
looking into strong attacks anymore). The three ingredients of it are
detailed below.
- **Adversarial Goal**: Same as for FGSM.
- **Strategy Space**: For every sample, the adversary may add a
perturbation $dx$ with norm $\Vert dx \Vert_p \le \epsilon$.
- **Knowledge**: Same as for FGSM. The devil knows everything about
the model, both structural details and the weights. It tries to
generate a critical perturbation direction based on $\theta$.
An illustration depicting this scenario is shown in
Figure [2.49](#fig:pgd){reference-type="ref" reference="fig:pgd"}. The
devil is *trying to find* the worst-case sample for a fixed $x$ in the
$L_p$ ball.
![Informal illustration of the PGD method's strategy space. The devil
aims to find a strong attack in the $L_p$ ball around input
$x$.](gfx/02_pgd.pdf){#fig:pgd width="0.35\\linewidth"}
#### PGD Method
The PGD attack solves the optimization problem $$\begin{aligned}
&\max_{dx \in \nR^{H \times W \times 3}} \cL(f(x + dx), y; \theta)\\
&\text{s.t. } x + dx \in [0, 1]^{H \times W \times 3}\\
&\text{and } \Vert dx \Vert_p \le \epsilon.
\end{aligned}$$ It perturbs the image $x$ iteratively as
$$x^{t + 1} = \Pi_{x + S} \left(x^t + \alpha \operatorname{sgn}\left(\nabla_x \cL(f(x^t), y; \theta)\right)\right)$$
where $\cL$ is the loss function used for model $\theta$, $y$ is the
ground truth label, $t$ is the iteration index, $\alpha$ is the step
size for each iteration, and $\Pi_{x + S}$ is the projection onto the
$L_p$ $\epsilon$-ball around $x$.[^16] $\epsilon$ is the size of the
perturbation, which is determined by the strategy space. It defines the
maximal $L_p$ norm of the perturbation.
Being an iterative algorithm, PGD usually finds an even worse-case
sample than FGSM (which only performs a single step). We iteratively
follow the sign of the gradient with step size $\alpha$ and project back
onto the $L_p$ $\epsilon$-ball around $x$. According to the properties
of the sign function, in each step, we go in an angle of
$\beta \in \{\pm 45^\circ, \pm 90^\circ, \pm 135^\circ, 0^\circ, 180^\circ\}$
from the previous $x^t$ before projecting back onto the
$\epsilon$-ball.[^17] (Usually, in visualization, this means traveling
along the boundary of the $L_p$ ball.) Note that a single step can
already leave the $L_p$ $\epsilon$-ball (especially around the
'corners' of the ball), depending on how we choose $\alpha$; this is
why the projection back onto the ball is needed. Like in FGSM, we also
ensure that the image stays in the range
$[0, 1]^{H \times W \times 3}$ using clipping or normalization.
Convergence happens when, e.g.,
$\Vert x^{t + 1} - x^t \Vert_2 \le 1\mathrm{e}{-5}$ or some similar
criterion is satisfied.
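A sketch of the $L_\infty$ special case, where the projection $\Pi_{x+S}$ reduces to a coordinate-wise clamp onto the $\epsilon$-ball around $x$ (general $L_p$ projections are more involved); the random start and all names are our own choices:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps, alpha, n_steps=40):
    """Iterative L_inf PGD: gradient-sign steps projected back onto the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(n_steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
    return x_adv.detach()
```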
::: information
Using the Gradient's Sign Why do we use the sign of the gradient in
these methods and not the magnitude? Either case works. However, e.g.,
Adam [@kingma2017adam] is also taking the sign of the gradient for
updates (considering the formula without the exponential moving average)
and is one of the SotA methods. In high dimensions, the choice of taking
the sign does not matter much. This is usually a choice we make based on
empirical observations.
:::
### FGSM vs. PGD
Let us briefly compare the two attacks we have seen so far, FGSM and
PGD. In both cases, the optimization problem for the adversary is
non-convex, as the loss surface is non-convex in $x$. We also have no
guarantee for the globally optimal solution, even within a small
$\epsilon$-ball (which is very tiny in a high-dimensional space). The
strength of the attack depends a lot on the optimization algorithm. We
have many design choices, and not all Gradient Descent (GD) variants
perform similarly. **PGD is generally much stronger than FGSM; it finds
better local optima.** FGSM does not even find local optima in general,
as it consists of just a single gradient step. PGD is generally a SotA
white-box attack even as of 2023.
::: information
Size of the $\epsilon$-ball in High-Dimensional Spaces and Distribution
of Volume Why is the $\epsilon$-ball tiny in a high-dimensional space
for a small value of $\epsilon$? The volume of a ball with radius $r$ in
$\nR^d$ is
$$V_d = \frac{\pi^{d/2}}{\Gamma\left(1 + \frac{d}{2}\right)}r^d$$ and
$\Gamma(n) = (n - 1)!$ for a positive integer $n$. Therefore, the
denominator increases much faster than the numerator, driving the volume
to 0 as $d \rightarrow \infty$.
The volume is thus concentrated near the surface in high-dimensional
spaces: For a fixed dimension $d$, the fraction of the volume of a
smaller ball with radius $r < 1$ inside a unit ball is $r^d$ (as the
scalar multiplier cancels). Even for $r$ close to (but below) 1, this is
near 0 when $d$ is very large.
:::
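A quick numerical check of both statements (the dimensions are chosen only for illustration, e.g., $d = 3 \cdot 224 \cdot 224$ for an ImageNet-sized input):

```python
import math

def unit_ball_volume(d):
    """Volume of the unit L2 ball in R^d: pi^(d/2) / Gamma(1 + d/2)."""
    return math.exp((d / 2) * math.log(math.pi) - math.lgamma(1 + d / 2))

for d in (2, 10, 100, 1000):
    print(d, unit_ball_volume(d))      # shrinks towards 0 as d grows

# Fraction of the unit ball's volume within radius r = 0.99 for d = 3 * 224 * 224:
print(0.99 ** (3 * 224 * 224))         # ~0: almost all volume lies near the surface
```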
::: information
How to choose $\epsilon$ in $L_p$ attacks?
The hyperparameter $\epsilon$ is usually chosen to be very small. Even
more importantly, one should fix it across studies, as we typically wish
to compare against previous attacks/defenses. There are standardized values
in the community, but the exact value does not matter that much: below
a certain threshold, the perturbations are (mostly) invisible to
humans anyway.
:::
### Different Strategy Spaces for Adversarial Attacks
![Example of two image pairs where humans would choose the *left* pair
as more similar, but regarding $L_2$ distances, the *right* pair is much
closer. This is because of the translation in the first image pair.
Figure-snippet taken
from [@DBLP:journals/corr/abs-1801-03924].](gfx/02_unaligned.png){#fig:unaligned
width="0.7\\linewidth"}
So far, we have considered perturbations inside an $L_p$ ball determined
by $\epsilon$. The problem with this strategy space is that it is not
aligned with human perception -- it is in the pixel space. It is missing
some perturbations that are not visible to humans (such as shifting all
pixels up by one), but it also captures some changes apparent to humans
(such as additive noise at initially very clear and homogeneous surfaces
in images). The $L_p$ strategy space is thus not well-aligned with the
adversarial goal. Sometimes it does not satisfy the goal (as the
adversary's goal is to produce imperceptible perturbations), and
sometimes it technically satisfies the goal but could do it even better
(as the adversary's goal is usually also to decrease accuracy as much as
possible). The misalignment of additive perturbations and the
adversarial goal is further illustrated in
Figure [2.50](#fig:unaligned){reference-type="ref"
reference="fig:unaligned"}. According to this observation, in the
following subsections, we will consider strategy spaces that are
different from the $L_p$-ball-based ones.
#### Flow-Based Perturbations
In general, images closer in perception space (ones that look more
similar to humans) can have a larger $L_p$ difference than obviously
different image pairs. Suppose that we have a model that is robust against any
$L_p$ perturbation within a ball parameterized by a small $\epsilon$. In this
case, the adversary might still be able to find a one-pixel shift of the
image that destroys the model's predictions, even though this
perturbation is imperceptible. This is a "blind spot" of an adversary
that uses an $L_p$-ball-based strategy space.
::: definition
Total Variation The total variation of a vector field
$f\colon \nR^2 \rightarrow \nR^2$ is defined as
$$\Vert f \Vert_\mathrm{TV} = \int \Vert \nabla f_1(x) \Vert_2 + \Vert \nabla f_2(x) \Vert_2\ dx =: \int \Vert \nabla f(x) \Vert_2\ dx.$$
It is often considered a generalization of the $L_2$ (or $L_1$) norm of
the gradient to an entire vector field.
:::
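A minimal numpy sketch of this definition for a discretized flow field (the forward-difference and border conventions are one possible choice):

```python
import numpy as np

def total_variation(flow):
    """Discrete (isotropic) TV of a flow field of shape (H, W, 2): the sum over
    pixels of the L2 norm of each component's spatial gradient (forward differences)."""
    tv = 0.0
    for c in range(flow.shape[-1]):
        f = flow[..., c]
        dy = np.diff(f, axis=0, append=f[-1:, :])   # vertical forward differences
        dx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal forward differences
        tv += np.sqrt(dy ** 2 + dx ** 2).sum()
    return tv

# A constant flow field (a global one-pixel shift of the image) has zero TV:
print(total_variation(np.ones((32, 32, 2))))   # 0.0
```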
Such small image translations (shifts) generally result in huge $L_2$
distances (as images are not smooth, and along the object boundaries, we
have a significant pixel distance), but correspond to perceptually minor
differences. Luckily, we can define a metric that assigns small
distances to small *per-pixel* image translations. We consider *optical
flow* transformations, and we measure small changes by the *total
variation (TV)* norm, for which translations of an image have a "size"
of zero. Such attacks are discussed in detail in
Section [2.15.6](#sssec:flow){reference-type="ref"
reference="sssec:flow"}.
#### Physical Attacks
The plausibility of the previously discussed strategy spaces is
questionable. $L_p$ attacks and other attacks (e.g., flow-based ones)
alter the *digital image*. Do such adversaries exist in the real world?
Are the previous attacks plausible at all? Basic security technology can
already prevent such adversaries, with access as depicted in
Figure [2.51](#fig:plausibility){reference-type="ref"
reference="fig:plausibility"}. Therefore, looking into other strategy
spaces is well-motivated.
![The PGD adversary, being a white-box attack, has access to the model
in the digital realm.](gfx/02_plausibility.pdf){#fig:plausibility
width="\\linewidth"}
::: definition
Black-Box Attack When the adversary only observes the inputs and outputs
of a model and does not know the model architecture and the weights, we
call the attack a black-box attack.
:::
The PGD adversary perturbs pixels of a digital image after it is
captured in the real world. Does this scenario make sense? Should we
even defend against such an adversary? We just have to ensure that no
one gets to see our compiled code and that no one can change the data
stream. This is basic information security. Furthermore, even if one
gains access to the data stream, one also needs access to the exact
model for white-box attacks. (Once the adversary is that deep in, they
might as well just change the prediction directly...) For black-box
attacks in the digital realm, this is not needed, but it is still a
strange scenario where one has access to the data stream but not the
model output. We even go one step further in the discussion about
plausibility: When we use an API and have no access to internal data
streams, we can indeed construct black-box attacks for the model (as we
will see in Sections [2.15.9](#sssec:sub){reference-type="ref"
reference="sssec:sub"} and [2.15.10](#sssec:zero){reference-type="ref"
reference="sssec:zero"}). However, this only ruins the accuracy for us,
which seems to be a very poor adversarial goal. **By focusing on attacks
in the digital realm, we are probably looking at a non-existent
problem.**
Another very similar scenario in the digital realm is when images are
uploaded to the cloud, as shown in
Figure [2.52](#fig:plausibility2){reference-type="ref"
reference="fig:plausibility2"}. It is very unrealistic for an adversary
to come into this pipeline and make changes.
![In a cloud setting, the PGD adversary still acts in the digital
realm.](gfx/02_plausibility2.pdf){#fig:plausibility2
width="\\linewidth"}
This lack of realism in attacks in the digital realm inspires the search
for a new strategy space in the real world: Let us discuss *physical
attacks*. In contrast to previously mentioned attacks, they induce
physical changes in objects in the real world. These involve, e.g.,
putting a carefully constructed sticker (or graffiti) on a stop sign so
that self-driving cars do not detect the sign, or printing a pattern
on cardboard (worn, e.g., around the neck) so that the person
carrying it does not get detected. These options are illustrated
in Figure [2.53](#fig:physical){reference-type="ref"
reference="fig:physical"}. This is much more realistic, as the adversary
intervenes in the real world, not in a secure stage in a pipeline. The
adversaries usually *do* have the necessary access to real-world
objects. We argue that we should instead be focusing on defending
against such attacks, shown in
Figure [2.54](#fig:plausibility3){reference-type="ref"
reference="fig:plausibility3"}. **Note**: Such attacks can be both
black-box and white-box attacks.
![Physical attacks are more realistic than those in the digital
realm [@9025518; @physicalreview].](gfx/02_physical.png){#fig:physical
width="0.6\\linewidth"}
![In a physical adversarial setting, the adversary has access to the
object in the real world. The adversary might also know the internals of
the model (considering a white-box setting), but still only intervene in
the physical world.](gfx/02_plausibility3.pdf){#fig:plausibility3
width="\\linewidth"}
#### Object Poses in the 3D World
We briefly discuss an interesting boundary between adversarial
robustness and OOD generalization that also introduces a new strategy
space. This is the paper "[Strike (with) a Pose: Neural Networks Are
Easily Fooled by Strange Poses of Familiar
Objects](https://arxiv.org/abs/1811.11553)" [@https://doi.org/10.48550/arxiv.1811.11553]
which focuses on changing poses of objects in 3D space (which is similar
to physical attacks but can also be done digitally given a sophisticated
image synthesis tool). A collage of synthetic and real images the
authors considered is shown in
Figure [2.55](#fig:poses){reference-type="ref" reference="fig:poses"}.
![Collage of synthetic and real images with the model's corresponding
max-probability predictions. According to the human eye, images in (row,
column) positions (1, 4), (2, 2), (2, 4), (4, 1), (4, 4) are quite
plausible.](gfx/02_poses.pdf){#fig:poses width="0.6\\linewidth"}
The three ingredients of this "attack" are as follows.
- **Adversarial Goal**: Reducing classification accuracy by changing
object poses.
- **Strategy Space**: For every sample, the adversary may arbitrarily
change the object poses.
- **Knowledge**: Same as for FGSM. (White-box attack.)
Here, the adversary does not necessarily care about small changes in the
object pose. Larger changes can still be plausible for the human eye.
Once it becomes obvious to humans, they can, of course, intervene. The
devil knows everything about the model, both structural details and the
weights. It tries to generate a critical pose perturbation based on
weights $\theta$.
One may ask how this is a real threat at all. The threatening
observation is that the model completely breaks down for the plausible
examples, even though these could be observed in the real world. This
work is on the boundary of adversarial robustness and OOD generalization
to real-world domains.[^18] If the perturbation grows larger and we do
not have a notion of a devil and worst-case samples anymore, we enter
the realm of generalization to plausible real-world domains, across
biases, or in other OOD generalization schemes.
### Optical Flow {#sssec:flow}
Let us now discuss the optical flow approach in more detail. Optical
flow is used a lot for visual tracking and videos. It provides the
smallest warping of the underlying image mesh to transform image $x_1$
into $x_2$.[^19] It specifies the *apparent* movement of pixels which is
needed to transform image $x_1$ into $x_2$. We obtain it by performing
(regularized) pixel matching between images/frames.
Optical flow is represented as a vector field over the 2D image plane.
Each point of the 2D pixel plane corresponds to a 2D vector. Hence, the
size of the warping may be readily computed via total variation (TV)
(Definition [\[def:totalvariation\]](#def:totalvariation){reference-type="ref"
reference="def:totalvariation"}). This vector field is usually encoded
by colors for visualization. The pixel intensity gives the 2D vector
magnitude at the pixel, and the pixel color specifies the 2D vector
direction at the pixel.
**Example**: Consider a ball flying across the sky. The ball pixels are
translated across the frames by a tiny bit, but the $L_2$ distance
between the frames is large. Our task is to find *pixel correspondences*
between the two frames based on apparent motion. We set up a vector
going from pixel $(i, j)$ in frame $t$ to the corresponding pixel
$(i', j')$ in frame $(t + 1)$. For example, if
$(i, j) = (4, 5), (i', j') = (7, 2)$, then the forward flow is
$(u, v) = (3, -3)$. We can measure the amount of motion at a pixel by the
$L_2$ norm of this flow vector and average these magnitudes over all pixels.
The TV norm is closely related, but it measures the *spatial gradients* of the
flow field rather than the flow vectors themselves, which is why a uniform
translation of the whole image has zero TV. Either way, we get an idea of how
much warping has taken place between the two frames. A flow with a small TV,
however, can also correspond to human-perceptible changes: a small ball flying
fast between two frames of a huge, otherwise static image yields a low TV value,
but humans are able to point out the differences quickly. Nevertheless, the
perturbed images are still deemed plausible by human inspection.
### Adversarial Flow-Based Perturbation
How can we use optical flow to find adversarial patterns? Instead of
estimating the flow between 2 consecutive frames, we *generate* a flow
with a small total variation that fools our model, as done in the paper
"[Spatially Transformed Adversarial
Examples](https://openreview.net/forum?id=HyydRMZC-)" [@xiao2018spatially].
The three ingredients of their method are:
- **Adversarial Goal**: Reducing classification accuracy while being
imperceptible to humans.
- **Strategy Space**: For every sample, the adversary may choose a
flow $f$ with $\Vert f \Vert_\mathrm{TV} \le \epsilon$.
- **Knowledge**: Same as for FGSM. (White-box attack.)
This perturbation method is better aligned with human perception (i.e.,
it is a good proxy for it). It finds pixel-wise movement instead of
additive perturbation. The adversary warps the underlying image mesh of
image $x$ according to $f$ such that the classification result is wrong.
If the vector field is aligned in the same direction (constant map),
there is no total variation. On the contrary, if the vector field
comprises vectors with large magnitudes that are closely spaced and
point in different directions, it results in a large TV norm. These
abrupt changes in nearby vectors correspond to steep gradients in the
field. This is why penalizing total variation encourages the flow field
(and hence the induced warping) to be *smoother*.
![Overview of a flow-based adversarial attack using bilinear
interpolation to obtain its final adversarial image from the *backward*
flow, taken from [@xiao2018spatially]. See
information [\[inf:flowadv\]](#inf:flowadv){reference-type="ref"
reference="inf:flowadv"} for details.](gfx/02_flow.pdf){#fig:flowadv
width="0.9\\linewidth"}
How can we obtain the final adversarial image from the adversarial flow?
Figure [2.56](#fig:flowadv){reference-type="ref"
reference="fig:flowadv"} shows a possible way using the *backward flow*
and *bilinear interpolation*.
::: information
Interpolation between source and adversarial images in flow-based
adversarial attack []{#inf:flowadv label="inf:flowadv"} During the
adversarial attack in Figure [2.56](#fig:flowadv){reference-type="ref"
reference="fig:flowadv"}, the image $x$ is fixed. The devil comes up
with a *backward* optical flow that takes the target pixels to the
original pixels. The reason to predict backward flow instead of forward
flow is easier bilinear interpolation. When the backward flow is
available, each pixel of the adversarial image can be computed after
querying some known pixel of the original image (source). On the
contrary, using the forward flow to obtain the adversarial example would
result in "holes" in the image. The actual *magnitude* of the warps does
not matter when calculating the TV norm, so translations of any kind are
allowed. At borders, we might copy the pixels of the original image.
:::
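A minimal PyTorch sketch of this backward-warping step, assuming the flow is given in pixel units and points from each target pixel to its source pixel; this is an illustration, not the exact implementation of [@xiao2018spatially]:

```python
import torch
import torch.nn.functional as F

def warp_with_backward_flow(image, backward_flow):
    """Warp `image` (1, C, H, W) with a backward flow (1, 2, H, W), given in pixels,
    that points from each target pixel to the source pixel it should copy from."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    src_x = xs + backward_flow[:, 0]   # source x-coordinate per target pixel
    src_y = ys + backward_flow[:, 1]   # source y-coordinate per target pixel
    # grid_sample expects coordinates normalized to [-1, 1].
    grid = torch.stack(
        (2 * src_x / (w - 1) - 1, 2 * src_y / (h - 1) - 1), dim=-1
    ).to(image.dtype)
    # Bilinear lookup; border padding copies original pixels at the image boundary.
    return F.grid_sample(image, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```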
### White-Box vs. Black-Box Attacks
So far, we have discussed white-box
(Definition [\[def:whitebox\]](#def:whitebox){reference-type="ref"
reference="def:whitebox"}) and black-box
(Definition [\[def:blackbox\]](#def:blackbox){reference-type="ref"
reference="def:blackbox"}) attacks. Let us discuss some pros and cons of
these paradigms.
**White-box attacks are powerful**. The adversary can obtain the input
gradients from the model. (Examples: FGSM, PGD, Flow-Based
Perturbation.) White-box attacks are, however, not so realistic. For an
ML model on the cloud/as an API, we are never allowed to look into the
details of the model. It is intellectual property, and exposing it would
make the model vulnerable to various attacks. The quick solution that
most companies follow is to not open source their model. *Black-box
attacks are much weaker than white-box attacks but also much more
realistic.*
Many real-world applications are based on API access. There are also
further limitations to a realistic scenario:
- The number of queries within a time window is limited (rate limit).
- Malicious query inputs are possibly blocked.
For example, consider a face model recognizing the user in a photo
album: If we start sending strange patterns like random noise or
non-face images, it can easily be detected, and we can be blocked from
the service.
**Examples of black-box APIs.**
[GPT-3.5/4](https://chat.openai.com/) [@https://doi.org/10.48550/arxiv.2005.14165; @openai2023gpt4]
produces text output given text input. It is an interesting objective to
attack GPT-N based on only input/output observation pairs. One example
is Jailbreak Prompts [@shen2023do]: Here, the adversarial goal is making
the model tell us information about immoral or illegal topics; the
strategy space of the devil is giving any prompt to the model; and the
knowledge of the adversary is the observed answers of the model. The
attack is black-box by definition because we do not have access to the
model's internal structure.
[DALL-E](https://labs.openai.com) [@https://doi.org/10.48550/arxiv.2102.12092]
produces an image given a textual description.
In the following sections, we will discuss black-box attacks in more
detail.
### Black-Box Attack via a Substitute Model {#sssec:sub}
::: definition
Substitute Model A substitute model is a network that is used to mimic a
model we wish to attack. Prior knowledge about the attacked model is
incorporated into the substitute model, such as the type of
architecture, the size of the model, or the optimizer it was trained
with.
:::
![Illustration of a method for using substitute models to generate
black-box adversarial attacks. We only need query inputs and outputs to
train the substitute model.](gfx/02_substitute.pdf){#fig:substitute
width="0.8\\linewidth"}
We will start an overview of the black-box attacks with the seminal work
"[Practical Black-Box Attacks against Machine
Learning](https://arxiv.org/abs/1602.02697)" [@https://doi.org/10.48550/arxiv.1602.02697].
It introduces the idea of using a substitute model to attack the
original model. An overview of the method is given in
Figure [2.57](#fig:substitute){reference-type="ref"
reference="fig:substitute"}. Using this approach, we might need a lot of
input-output pairs from the original model, depending on how complex the
model is. Ideally, we want to follow the architecture of the target
model. If we, e.g., know that the original model is a Transformer, we
should also use one. We then attack the original model by creating
adversarial inputs that attack the substitute model $g$. By using $g$,
the adversary can generate *white-box* attacks. The hope is that this
attack also works for $f$. Based on empirical observations, this can
work quite well.
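A minimal sketch of this pipeline (the `query_api` call, the data loader, and the FGSM transfer step are illustrative assumptions, not the exact procedure of the paper):

```python
import torch
import torch.nn.functional as F

def train_substitute(query_api, substitute, loader, epochs=5, lr=1e-3):
    """Fit a local substitute g to mimic the black-box target f, using only the
    labels that the (hypothetical) query_api returns for our own inputs."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                         # loader yields query inputs
            with torch.no_grad():
                y_api = query_api(x)             # class labels from the target model
            loss = F.cross_entropy(substitute(x), y_api)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return substitute

def transfer_attack(substitute, x, y, eps=8/255):
    """White-box FGSM step on the substitute; the perturbed input is then sent
    to the black-box target in the hope that the attack transfers."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```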
For smaller models, this method can be feasible and attack model $f$
well. However, for larger models, we need enormous amounts of data and extreme
computational effort to train the substitute model. In particular, only
a handful of companies in the world could mimic GPT-3 with a substitute
model. It would be easy to trace back who is responsible for the
attacks. Usually, such large companies focus on problems other than
these black-box attacks. They already have many other problems, e.g.,
private training data leakage by querying, bias issues, or
explainability, that are way more realistic.
### Black-Box Attack via a Zeroth-Order Attack {#sssec:zero}
Another type of black-box attack is based on the approximation of the
model gradient with a lot of API calls. One way to do this is described
in the work "[ZOO: Zeroth Order Optimization based Black-box Attacks to
Deep Neural Networks without Training Substitute
Models](https://arxiv.org/abs/1708.03999)" [@Chen_2017]. The idea comes
from the fact that one can approximate the gradient of the loss
numerically using finite differences, so that for a small enough
$h \in \nR$:
$$\frac{\partial \cL(x)}{\partial x_i} \approx \frac{\cL(x + he_i) - \cL(x)}{h}$$
where $e_i$ is the $i$th canonical basis vector. One can also use a more
stable symmetric version that gives better approximations in general
(but requires more network evaluations):
$$\frac{\partial \cL(x)}{\partial x_i} \approx \frac{\cL(x + he_i) - \cL(x - he_i)}{2h},$$
where $x \in \nR^d$ is a flattened image and $\cL(x) \in \nR$ is the
loss function of choice, based on our target class $y$ that we want the
model to classify $x$ as.
**Example**: Consider a $200 \times 200$ RGB image. In this case,
$x \in \nR^{120,000}$ and $\nabla \cL(x) \in \nR^{120,000}$. We need
120,001 API calls to approximate the gradient *of a single image*, or
240,000 if we consider the symmetric approximation. No API will let us
do this in a manageable time. For this to work, we also need access to
the logits $z(x)$ or the probabilities $f(x)$ from the model to compute
the objective $\cL$, not just the predicted class label. For example, we
might use the objective
$\cL(x) = \max\{\max_{i \ne y} z(x)_i - z(x)_y, -\kappa\}$, as given
in [@Chen_2017].
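A minimal numpy sketch of the symmetric estimator; `loss_api` is a hypothetical wrapper that queries the model and computes $\cL$ from the returned logits or probabilities:

```python
import numpy as np

def estimate_gradient(loss_api, x, h=1e-4):
    """Symmetric finite-difference estimate of the gradient of the objective,
    obtained purely from (hypothetical) API calls; costs 2 * x.size evaluations."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (loss_api(x + e) - loss_api(x - e)) / (2 * h)
    return grad
```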
Still, the worst case with such black-box attacks for the model owners
is that the performance drops. This is not a realistic goal for an
adversary, as it only happens to the attacker and only on the
adversarial samples they create. If the attacker wants to decrease
performance for others, too, they need access to the data stream.
**Note**: The paper was published in 2017. Back then, these attacks were
focused on theoretical possibilities. Nowadays, the field is focusing
more on realistic threats we have to tackle. The focus has shifted.
We can even be more imaginative and train a local model that predicts
the pixel location most likely to generate the highest response by the
attacked model. In this case, we need a dataset of (image, pixel) pairs
where the pixel changes the prediction of many locally available models
the most. Leveraging this dataset, we can get the model's pixel
prediction and find the pixel's perturbation via API calls that result
in the desired behavior (e.g., compute gradients for that pixel using
finite differences and update that pixel of the image using the sign of
the gradient).
We can simplify the previous approach and pick random coordinates to perform
stochastic coordinate descent, as shown in [@Chen_2017]; a minimal sketch of
this procedure is given below.
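The sketch assumes the same hypothetical `loss_api` wrapper as above and descends the C&W-style objective mentioned earlier (lower values are more adversarial):

```python
import numpy as np

def zoo_coordinate_descent(loss_api, x, steps=1000, h=1e-4, lr=0.01):
    """Stochastic coordinate descent sketch: pick a random coordinate, estimate its
    partial derivative with two API calls, and update only that coordinate."""
    x = x.copy()
    for _ in range(steps):
        i = np.random.randint(x.size)
        e = np.zeros_like(x)
        e.flat[i] = h
        g_i = (loss_api(x + e) - loss_api(x - e)) / (2 * h)
        # Descend the objective on this coordinate; keep the pixel in [0, 1].
        x.flat[i] = np.clip(x.flat[i] - lr * np.sign(g_i), 0.0, 1.0)
    return x
```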
This is, of course, not very efficient. It is better to pick $i$ smartly
and perturb that pixel using a few API calls to determine a suitable
perturbation.
### Defense against Attacks: Adversarial Training
We have discussed many adversarial attacks. Is there any way to defend
against them? To answer this, we will touch upon one instructive defense
method that gave rise to the research direction of defense methods,
called adversarial training. This method was introduced in the paper
"[Towards Deep Learning Models Resistant to Adversarial
Attacks](https://arxiv.org/abs/1706.06083)" [@https://doi.org/10.48550/arxiv.1706.06083].
It is generally perceived as one of the best-working defenses against
$L_p$ attacks.
Adversarial training has a minimax formulation: we optimize $\theta$ against the
worst-case perturbation of $x$ as
$$\min_{\theta} \nE_{(x, y) \in \cD}\left[\max_{dx \in S} \cL(f(x + dx), y; \theta)\right]$$
with, e.g., $S = [-\epsilon, \epsilon]^N$ corresponding to an $L_\infty$
attack. In practice, we perform a few PGD steps for each $x$ to generate an
attack $dx$ and use the perturbed sample for training $\theta$.
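A minimal sketch of one training step under this formulation, reusing the `pgd_attack` sketch from earlier; the hyperparameters are chosen only for illustration:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              eps=8/255, alpha=2/255, steps=7):
    """One minimax step: inner maximization via a few PGD steps (see the earlier
    PGD sketch), outer minimization of the loss on the resulting worst-case batch."""
    x_adv = pgd_attack(model, x, y, eps=eps, alpha=alpha, steps=steps)  # inner max
    loss = F.cross_entropy(model(x_adv), y)                             # outer min
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```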
#### Breaking down gradients does not give us any guarantees.
Even if the previously mentioned methods worked in making gradient-based
adversarial attacks impossible, the model being safe is not equivalent
to no gradient-based algorithm being able to find an attack. There can
still be some adversarial image within the $L_p$ ball (or neighborhood
in general). When we break down the gradients, PGD cannot attack the
image in the right way *directly*. Using PGD naively results in a benign
image, i.e., the network can still recognize it well.
*The model is safe when there is absolutely no adversarial sample within
the attack space.* If this is not guaranteed, there can still be some
algorithm that can find the working attack.
#### How can we make the gradients malfunction?
One way to make the gradients malfunction is to transform the inputs
before feeding them to the DNN. These are called
input-transformation-based defenses. They apply image transformations
(and possibly random combinations thereof) to the original input image.
The idea is as follows: transforming the image with one of many possible
transformations from a discrete set (selected at test time) barely changes its
content, yet it is supposed to be very effective against adversarial attacks.
Adversarial attacks are minimal changes in the image, and if the
transformations destroy these small changes, the attack will probably not harm
the model anymore. In other words, we want to remove adversarial effects from
the input image before feeding the result to the DNN. As we will soon see, this
intuitive reasoning is *flawed* in most cases: input-transformation-based
defenses only work when considering chained random transformations with a
combinatorial scaling of possibilities.
#### Examples for Input Transformations
![Example input transformations that can be used for gradient
obfuscation. Figure taken
from [@DBLP:journals/corr/abs-1711-00117].](gfx/02_input_transformations.pdf){#fig:input_transformations
width="0.5\\linewidth"}
Several examples of input transformations are shown in
Figure [2.58](#fig:input_transformations){reference-type="ref"
reference="fig:input_transformations"} that are detailed below.
**Cropping and rescaling of the original image.** We crop the part that
contains the gist of what is going on in the image or rescale the image
to the input size of the network.
::: definition
Bit Depth The bit depth of an image refers to the number of colors a
single pixel can represent. An 8-bit image can only contain $2^8 = 256$
unique colors. A 24-bit image can contain $2^{24} = 16,777,216$ unique
colors.
:::
**Bit depth reduction.** Reducing the bit depth kills some information,
but by doing this denoising (from the perturbation's viewpoint), we can
also remove critical adversarial perturbations.
**JPEG encoding and decoding.** JPEG uses Discrete Cosine Transform
(DCT). This is a typical transformation included in image viewers -- a
natural way to defend against perturbations.
**Removing random pixels and inpainting them.** The inpainting can be
done, e.g., via TV
(Definition [\[def:totalvariation\]](#def:totalvariation){reference-type="ref"
reference="def:totalvariation"}) minimization. When removing a boundary
region, such inpainting will not result in a constant region (having the
average value of the neighboring pixels) but rather a very smoothed
version of the original image. (The boundary will be followed to some
extent.)
**Image quilting.** This method reconstructs images using small patches
from *other images* in a database. The used patches are chosen to be
similar to the original patches. These are also usually tiny. Before
feeding it to the network, we replace the original image with the
reconstruction.
#### Results of naive FGSM, DeepFool [@https://doi.org/10.48550/arxiv.1511.04599], and Carlini-Wagner [@https://doi.org/10.48550/arxiv.1608.04644] after input transformations
To see whether the input transformation defense works, we take a look at
the results of FGSM, DeepFool, and the Carlini-Wagner method in
Figure [2.59](#fig:carlini){reference-type="ref"
reference="fig:carlini"}. The general message of these results is that
applying the previously listed input transformation to an image protects
it against gradient-based adversarial attacks. We will see that this is
an *incorrect conclusion*.
![Top-1 classification accuracy of ResNet-50 on adversarial samples of
various kinds. If we use no input transformations, the model's
predictions break down completely. If we use the transformations listed
in text *individually*, the methods start failing. The stronger the
adversary (i.e., the more $L_2$ dissimilarity we allow), the better the
attack methods do, but they still perform quite poorly. Figure taken
from [@DBLP:journals/corr/abs-1711-00117].](gfx/02_defense_res.pdf){#fig:carlini
width="0.8\\linewidth"}
#### Straight-Through Gradient Estimator
One of the reasons why previous methods using input transformations
*still fail to defend our networks* is the fact that we can still
"approximate" the gradient of the defended model by using a
straight-through gradient estimator.
::: definition
Straight-Through Gradient Estimator The straight-through estimator
generates gradients for a non-differentiable transformation as if the
forward pass were the identity transformation; i.e., it lets the
gradient flow through in the computational graph.
:::
A successful application of the straight-through estimator is attacking
JPEG encoding/decoding defenses. The *forward* pass is JPEG encoding and
decoding, which is non-differentiable (because of quantization) but
close to an identity mapping. In the *backward* pass, we compute the
gradient as if the forward were the identity mapping. The fact that this
is a successful application of the estimator for an attack shows that
this transformation only helped for gradient obfuscation because it made
the computations non-differentiable. A Python example of a JPEG
transformer is shown in
Listing [\[lst:snippet\]](#lst:snippet){reference-type="ref"
reference="lst:snippet"}.
::: booklst
lst:snippet
class JPEGTransformer(nn.Module):
    def forward(self, x):
        """JPEG encoding and decoding."""
        encoded_x = self.jpeg_encode(x)
        transformed_x = self.jpeg_decode(encoded_x)
        return transformed_x

    def backward(self, x, dy):
        """Straight-through estimator. Computes the gradient
        as if self.forward = lambda x: x."""
        return dy
:::
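In an autograd framework, such a straight-through pass is commonly realized with a detach trick; a minimal PyTorch sketch, where `transform` stands in for any non-differentiable mapping such as a JPEG round trip:

```python
import torch

def straight_through(x, transform):
    """Apply a non-differentiable transform in the forward pass while the backward
    pass behaves as if the transform were the identity (straight-through)."""
    # detach() cuts the transform out of the graph; x keeps the gradient path.
    return x + (transform(x) - x).detach()
```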
#### The problem with naive gradient obfuscation methods
When we attack models employing gradient obfuscation methods detailed
above as a white box, we *also have access to the transformations*.[^20]
First, assume that there is a single deterministic transformation. We do
not have to know what this transformation precisely is; we just need
access to it.
**Cropping and rescaling.** This is a differentiable transformation
(cropping is just indexing, which is differentiable; rescaling is
linear); therefore, we can attack the joint network, i.e., the entire
pipeline. The defense does not work at all -- we can generate successful
attacks again. This is depicted in
Figure [2.60](#fig:attackcrop){reference-type="ref"
reference="fig:attackcrop"}.
**Other discrete transformations.** For example, consider JPEG encoding
and decoding. Such transformations are not differentiable. However, we
can still "differentiate through" quantization layers, using the
*straight-through gradient estimator*
(Definition [\[def:stgrad\]](#def:stgrad){reference-type="ref"
reference="def:stgrad"}). We can generate successful attacks again, as
depicted in Figure [2.61](#fig:straightthrough){reference-type="ref"
reference="fig:straightthrough"}.
**Mixture of random transformations.** Now, assume that there are
multiple transformations, and one of them (or a mixture of them) is
chosen randomly. When there is uncertainty in what transformation is
used, the white box partially becomes a black box, as we do not know
what is taking place in the random transformation. Still, for easier
cases, the attacker can generate an attack that works for *any* of the
transformations (defenses) by performing *Expectation over
Transformations* (EoT).[^21]
**Expectation over Transformations (EoT).** One can observe that
$$\nabla \nE_{t \sim T} f(t(x)) = \nE_{t \sim T} \nabla f(t(x)),$$ as
the gradient and integral can be exchanged when a function is
sufficiently smooth, which DNNs are. (For discrete transformations, we
use the straight-through estimator anyway, which makes them also work.)
The formula tells us that to attack the expected output of $f$ over
$t \sim T$, we take the gradient for each transformation and then take
the expectation over the transformations. This expectation can be trivially
Monte Carlo estimated. We iteratively update the input with the approximation
of the expected gradient. Python code for a simple EoT attack is given
in Listing [\[lst:eot\]](#lst:eot){reference-type="ref"
reference="lst:eot"}.
With sufficient attacker capacity, the defense can thus become ineffective;
this pushes the limits of the attacker's capacity. If the attacker has enough
capacity to address the many possible transformations at test time, it attacks
all of them simultaneously. The ICML'18
attack [@https://doi.org/10.48550/arxiv.1802.00420] applies all the techniques
we mentioned before. It destroys the defense that uses random transformations
and brings the network down to 0% adversarial accuracy.
![An easy way to circumvent obfuscated gradient defenses when the
applied transformations are
differentiable.](gfx/02_attackcrop.pdf){#fig:attackcrop
width="0.8\\linewidth"}
![Circumventing obfuscated gradient defenses when the applied
transformations are *non-differentiable*, using the straight-through
gradient estimator.](gfx/02_straightthrough.pdf){#fig:straightthrough
width="0.8\\linewidth"}
::: booklst
lst:eot
def generate_eot_attack(x, model, transformation_list, num_samples):
    random_transformations = np.random.choice(transformation_list, num_samples)
    grad_eot = np.zeros_like(x)
    for transformation in random_transformations:
        y = model(transformation(x))
        grad_x = compute_input_gradient(y, x)
        # Approximate expectation by averaging.
        grad_eot += grad_x / num_samples
    return x + grad_eot
:::
### Effectiveness of Adversarial Training
Adversarial training (AT) does not introduce obfuscated gradients. It
was hard for the ICML'18 method to attack adversarially trained models
with greater attack success rates. AT is, therefore, an effective
defense. Notably, the authors
of [@https://doi.org/10.48550/arxiv.1802.00420] use vanilla adversarial
training without EoT. Performing EoT additionally would increase
computation costs but would likely result in even stronger defenses.
**Note**: Even after adversarial training, there might still be some
adversarial samples within the $L_p$ ball -- we get no guarantees.
However, adversarial training is still understood as a solid defense.
The critical caveat of AT is that it is complicated to perform at scale;
doing it at ImageNet scale is possible but already an impressive feat. The
training time increases notably: adversarial training takes at least $T + 1$
times as long as regular training
(Subsection [\[ssec:complexity\]](#ssec:complexity){reference-type="ref"
reference="ssec:complexity"}), and if we additionally performed EoT, we would
end up with a triple `for` loop.
### Barrage of Random Transforms (BaRT) {#sssec:bart}
As we have seen, we always have a loop of improvement in adversarial
settings between attackers and defenders. Once a defense with a mixture
of random transformations is broken (e.g., EoT effectively beats a
defender with a reasonable number of candidate transformations), the
question naturally arises: What happens when the set of transformations
is gigantic on the defense side?
If the defender starts using random combinations of transformations, the
number of possibilities grows combinatorially with the number of individual
transformations and the length of the transformation sequence.
The paper "[Barrage of Random Transforms for Adversarially Robust
Defense](https://openaccess.thecvf.com/content_CVPR_2019/papers/Raff_Barrage_of_Random_Transforms_for_Adversarially_Robust_Defense_CVPR_2019_paper.pdf)" [@8954476]
was a "reply" to the EoT paper that introduced an enormous set of
possible transformations.
#### BaRT Method
The method introduces ten groups of possible image transformations
listed below.
- Color Precision Reduction
- JPEG Noise
- Swirl
- Noise Injection
- FFT Perturbation
- Zoom
- Color Space
- Contrast
- Greyscale
- Denoising
Each group contains some number of transformations. In total, we have 25
transformations, each of which has parameters $p$ that alter their
behavior.
The choice of transformations is made as follows.
1. Randomly select $k$ out of $n$ transforms where each transform by
itself is randomized.
2. Apply the selected transforms in a random sequence:
$$f(x) = f(t_{\pi(1)}(t_{\pi(2)}(\dots(t_{\pi(k)}(A(x)))\dots))),$$
where $A$ is the adversary.
Selecting the transformations randomly and applying them in a random
sequence generates a combinatorial number of possibilities
($n! / (n - k)!$) that still do not change the semantic meaning of the
image. Even after applying all transformations, the model can still
recognize the objects pretty well. However, the sheer number of
possibilities makes it very hard for the attacker to prepare against all
variants of the defense: the attacker needs a large enough capacity, and many
samples are required to Monte Carlo estimate the expectation. To establish
resilience against such input transformations, they are applied both
during training and inference; therefore, this is *not* a post-hoc
algorithm.
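A minimal sketch of the selection-and-application step (the list of transforms and the value of $k$ are placeholders):

```python
import random

def barrage_transform(x, transforms, k=5):
    """BaRT-style preprocessing sketch: sample k of the n available (internally
    randomized) transforms without replacement and apply them in that random order."""
    for t in random.sample(transforms, k):   # random subset in random order
        x = t(x)
    return x
```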
The method has some overhead in the cost of training, but it boils down
to selecting an input transformation sequence, which can be done very
efficiently on the CPU. The overhead is, therefore, similar to that of
data augmentation. One can also influence this overhead by changing how
often the transformations are resampled.
#### BaRT Results
![BaRT defends a model against PGD (which is not surprising). BaRT also
defends a model against the ICML'18 methods with EoT (10 or 40 samples),
designed to break gradient obfuscations. Using BaRT, performance does
not drop too much by increasing the max adversary distance $\epsilon$.
It is even more effective than adversarial training -- the attacker
cannot push the scores down to 0, not even for
$\Vert x - \hat{x} \Vert_\infty < 32$. (!) Top-k refers to top-k
accuracy. Figure taken from [@8954476].
](gfx/02_bartres.png){#fig:bartres width="0.5\\linewidth"}
The results of BaRT are shown in
Figure [2.62](#fig:bartres){reference-type="ref"
reference="fig:bartres"}. The key message here is that *BaRT is one of
the SotA adversarial defense methods even in 2023.*
### Certified defenses
Let us discuss *certifications of robustness*. Certified defense methods
make sure there is *no successful attack* in the strategy space (e.g.,
the $L_p$ ball) under some assumptions. The "[Certified Defenses against
Adversarial
Examples](https://arxiv.org/abs/1801.09344)" [@https://doi.org/10.48550/arxiv.1801.09344]
paper can give certifications of robustness by considering many
simplifying assumptions for the network and the adversarial objective.
The typical chain of thought for certified defenses is to come up with a
trainable objective and then show that solving this objective ensures that
no attack in the strategy space can be more severe than a certain bound.
The authors consider a binary classification setting and a two-layer
neural network where the score is calculated as $$f(x) = V\sigma(Wx).$$
Here, $V \in \nR^{2 \times m}$, $W \in \nR^{m \times d}$, and $\sigma$
is an elementwise non-linearity whose derivative is bounded in $[0, 1]$,
e.g., ReLU or sigmoid. Notably, the authors calculate the score of both
positive and negative classes instead of considering a single score for
the ease of formalism. A certificate of defense is given by bounding the
margin of the incorrect class over the correct one for any adversarial
perturbation inside the $L_\infty$ $\epsilon$-ball centered at a
particular input $x$, denoted by $B_\epsilon(x)$. Further details are
discussed in
Information [\[inf:cert_def\]](#inf:cert_def){reference-type="ref"
reference="inf:cert_def"}.
::: definition
Fundamental Theorem of Line Integrals Consider a parametric curve
$r: [a, b] \rightarrow \nR^d$ and a differentiable function
$f: \nR^d \rightarrow \nR$. Then
$$\int_a^b \underbrace{\left\langle \nabla f(r(t)), r'(t) \right\rangle\ dt}_{\left\langle \nabla f(r(t)), dr \right\rangle,\ dr = r'(t)dt} = f(r(b)) - f(r(a)).$$
In words: The integral of directional derivatives along the curve $r$ of
the function $f$ is equal to the difference of boundary values of $f$.
In short, the shape of the curve $r$ does not matter.
**Connection to single-variable calculus**: The fundamental theorem of
calculus states that for a differentiable $f: \nR \rightarrow \nR$:
$$\int_a^b f'(x)\ dx = f(b) - f(a).$$ In this case, there is a single
possible path from $a$ to $b$, which the line-integral version generalizes.
:::
::: information
Formulation of the Certified Defenses Method []{#inf:cert_def
label="inf:cert_def"} The authors consider the following worst-case
adversarial attack:
$A_\mathrm{opt}(x) = \argmax_{\tilde{x} \in B_\epsilon(x)}\tilde{f}(\tilde{x}),$
where
$$\tilde{f}(x) := \underbrace{f^1(x)}_{\text{score of incorrect label}} - \underbrace{f^2(x)}_{\text{score of correct label}}.$$
The attack is successful if $\tilde{f}(A_\mathrm{opt}(x)) > 0$ as the
incorrect class is predicted.
We derive the following upper bounds on the severity of any adversarial
attack $A(x)$:
$$\tilde{f}(A(x)) \overset{(i)}{\le} \tilde{f}(A_\mathrm{opt}(x)) \overset{(ii)}{\le} \tilde{f}(x) + \epsilon \max_{\tilde{x} \in B_\epsilon(x)}\Vert \nabla \tilde{f}(\tilde{x}) \Vert_1 \overset{(iii)}{\le} \tilde{f}_\mathrm{QP}(x) \overset{(iv)}{\le} \tilde{f}_\mathrm{SDP}(x).$$
\(i\) Arises from the optimality of $A_\mathrm{opt}$. (ii) leverages the
fundamental theorem of line integrals
(Definition [\[def:lineintegrals\]](#def:lineintegrals){reference-type="ref"
reference="def:lineintegrals"}): $$\begin{aligned}
\tilde{f}(\tilde{x}) &= \tilde{f}(x) + \int_0^1 \nabla \tilde{f}(t\tilde{x} + (1 - t)x)^\top(\tilde{x} - x)dt\\
&\le \tilde{f}(x) + \epsilon \max_{\tilde{x}' \in B_\epsilon(x)} \Vert \nabla \tilde{f}(\tilde{x}') \Vert_1
\end{aligned}$$ where $\tilde{x} \in B_\epsilon(x)$ and the inequality
holds because the linear interpolation $t\tilde{x} + (1 - t)x$ of two
elements $x$ and $\tilde{x}$ of $B_\epsilon(x)$ is also an element of
$B_\epsilon(x)$ for any $t \in [0, 1]$. In (iii),
$\tilde{f}_\mathrm{QP}(x)$ denotes the optimal value of a (non-convex)
quadratic program. This is a specific bound for two-layer networks where
$\tilde{f}(x) = f^1(x) - f^2(x) = v^\top \sigma(Wx)$ with
$v := V^1 - V^2$ being the difference of last-layer weights of the
correct and incorrect class. In this specific case, we upper-bound
$\tilde{f}(x) + \epsilon \max_{\tilde{x} \in B_\epsilon(x)}\Vert \nabla \tilde{f}(\tilde{x}) \Vert_1$
by noting that for $\tilde{x} \in B_\epsilon(x)$:
$$\Vert \nabla \tilde{f}(\tilde{x}) \Vert_1 = \Vert W^\top \operatorname{diag}(v)\sigma'(W\tilde{x})\Vert_1 \le \max_{s \in [0, 1]^m} \Vert W^\top \operatorname{diag}(v)s\Vert_1 = \max_{s \in [0, 1]^m, t \in [-1, 1]^d} t^\top W^\top \operatorname{diag}(v)s$$
where the last equality shows a different way to write the $L_1$ norm.
Therefore,
$$\tilde{f}(x) + \epsilon \max_{\tilde{x} \in B_\epsilon(x)} \Vert \nabla \tilde{f}(\tilde{x}) \Vert_1 \le \tilde{f}(x) + \epsilon \max_{s \in [0, 1]^m, t \in [-1, 1]^d} t^\top W^\top \operatorname{diag}(v)s =: \tilde{f}_\mathrm{QP}(x).$$
The reason why we do not stop here is that this quadratic program is
still a non-convex optimization problem. This is why we turn to (iv),
which gives a *convex* semidefinite bound. First, the authors
of [@https://doi.org/10.48550/arxiv.1801.09344] reparameterize the
optimization problem in $s$ as
$$\tilde{f}_\mathrm{QP}(x) := \tilde{f}(x) + \epsilon \max_{s \in [-1, 1]^m, t \in [-1, 1]^d} \frac{1}{2} t^\top W^\top \operatorname{diag}(v)(\bone + s)$$
where $\bone$ is a vector of ones. Then, one needs to define auxiliary
vectors and matrices to obtain the form of a semidefinite program:
$$\begin{aligned}
y &:= \begin{pmatrix}1 \\ t \\ s\end{pmatrix}\\
M(v, W) &:= \begin{bmatrix}0 & 0 & \bone^\top W^\top\operatorname{diag}(v)\\ 0 & 0 & W^\top\operatorname{diag}(v)\\ \operatorname{diag}(v)^\top W\bone & \operatorname{diag}(v)^\top W & 0\end{bmatrix}.
\end{aligned}$$ Now, we rewrite $\tilde{f}_\mathrm{QP}(x)$ as
$$\tilde{f}_\mathrm{QP}(x) = \tilde{f}(x) + \epsilon \max_{y \in [-1, 1]^{(m + d + 1)}} \frac{1}{4}y^\top M(v, W)y = \tilde{f}(x) + \frac{\epsilon}{4} \max_{y \in [-1, 1]^{(m + d + 1)}} \left\langle M(v, W), yy^\top \right\rangle.$$
Finally, we note that $\forall y \in [-1, 1]^{(m+d+1)}$, $yy^\top$ is a
positive semidefinite matrix[^22] and the diagonal of $yy^\top$ is a
vector of ones. Defining $P = yy^\top$, we obtain the convex
semidefinite program
$$\max_{y \in [-1, 1]^{(m + d + 1)}} \frac{1}{4}\left\langle M(v, W), yy^\top \right\rangle \le \max_{P \succeq 0, \operatorname{diag}(P) \le 1} \frac{1}{4}\left\langle M(v, W), P\right\rangle$$
where the notation $P \succeq 0$ refers to $P$ being positive
semidefinite, which allows us to define $\tilde{f}_\mathrm{SDP}(x)$ as
$$\tilde{f}_\mathrm{SDP}(x) := \tilde{f}(x) + \frac{\epsilon}{4} \max_{P \succeq 0, \operatorname{diag}(P) \le 1} \left\langle M(v, W), P\right\rangle.$$
Notably, the maximization problem in $\tilde{f}_\mathrm{SDP}(x)$ depends only on
the network weights $v$ and $W$ and not on $x$.
Therefore, obtaining it is very much feasible, as we only need to
calculate the input-agnostic upper bound once for each model.
Sadly, our story does not end here. One may assume that the post-hoc
application of the above upper bound is enough. While we can indeed
calculate such a certificate post-hoc, it might be arbitrarily loose.
Regular cross-entropy training encourages $\tilde{f}(x)$ to be large in
magnitude on training samples. However, the term
$\frac{\epsilon}{4} \max_{P \succeq 0, \operatorname{diag}(P) \le 1} \left\langle M(v, W), P\right\rangle$
is *not encouraged to be small* to tighten the bound. One might naively
consider the following, non-post-hoc objective instead to obtain tighter
bounds:
$$W^*, V^* = \argmin_{W, V} \sum_n \cL(V, W; x_n, y_n) + \lambda \max_{P \succeq 0, \operatorname{diag}(P) \le 1} \left\langle M(v, W), P\right\rangle$$
where $\lambda$ controls the regularization strength. Using this
objective is clearly infeasible, however: For each gradient step, we
need the solution to the inner semidefinite program. Without going too
much into detail, one can obtain a [*dual
formulation*](https://en.wikipedia.org/wiki/Duality_(optimization)) of
the semidefinite program to eliminate the inner optimization problem.
First, we state the dual formulation:
$$\max_{P \succeq 0, \operatorname{diag}(P) \le 1} \left\langle M(v, W), P\right\rangle = \min_{c \in \nR^{(d + m + 1)}} (d + m + 1) \cdot \lambda^+_\mathrm{max}(M(v, W) - \operatorname{diag}(c)) + \sum_i \max(c_i, 0)$$
where $\lambda_\mathrm{max}^+(\cdot)$ calculates the maximal eigenvalue
of the input matrix or returns zero if all eigenvalues are negative. How
can we use this to eliminate the inner optimization problem? As the
inner problem becomes the unconstrained minimization of an objective in
$c \in \nR^{(d + m + 1)}$, we optimize $c$ in the same optimization loop
as parameters $V$ and $W$. Therefore, we only have an additional
parameter we have to optimize over and we can still use gradient-based
unconstrained optimization.
This leads us to the final objective: We optimize $$\begin{aligned}
&(W^*, V^*, c^*)=\\
& \argmin_{W, V, c}\sum_n \cL(V, W; x_n, y_n) + \lambda \cdot \left[(d + m + 1) \cdot \lambda^+_\mathrm{max}\left(M(V, W) - \diag\left(c\right)\right) + \sum_i \max(c_i, 0)\right]
\end{aligned}$$ which can be done quite efficiently. This encourages the
network to be robust while also allowing us to provide a certification
of robustness.
Given $V[t], W[t]$ and $c[t]$ values at iteration $t$ solving the above
optimization problem, one obtains the following guarantee for any attack
$A$:
$$\tilde{f}(A(x)) \le \tilde{f}(x) + \frac{\epsilon}{4} \left[(d + m + 1) \cdot \lambda^+_\mathrm{max}\left(M(V[t], W[t]) - \diag\left(c[t]\right)\right) + \sum_i \max(c[t]_i, 0)\right].$$
We get a certificate of the defense: Whatever perturbation there is in
the $L_\infty$ $\epsilon$-ball, the loss is bounded from above. This is
theoretically meaningful but not yet in practice: The study is confined
to 2-layer networks.
:::
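To make the final objective a bit more concrete, here is a minimal PyTorch sketch of the dual penalty term that would be added to the cross-entropy loss, assuming the symmetric matrix $M(v, W)$ has already been assembled as written above:

```python
import torch

def dual_certificate_penalty(M, c, D):
    """Dual upper-bound term from the text:
    D * lambda_max^+(M - diag(c)) + sum_i max(c_i, 0),
    with M = M(v, W) (symmetric) and D = d + m + 1."""
    # eigvalsh returns eigenvalues in ascending order; take the largest, clipped at 0.
    lam_max_plus = torch.linalg.eigvalsh(M - torch.diag(c))[-1].clamp(min=0)
    return D * lam_max_plus + torch.clamp(c, min=0).sum()
```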
### History and Possible Future of Adversarial Robustness in ML
A Coarse Timeline of Adversarial Robustness is listed below.
- **First attack**: L-BFGS attack
(2014) [@https://doi.org/10.48550/arxiv.1312.6199]. This is a
complicated method that does not work too well.
- **First practical attack**: FGSM attack (2014). As discussed
previously, this is a straightforward method that works reasonably
well.
- **Stronger iterative attack**: DeepFool
(2015) [@https://doi.org/10.48550/arxiv.1511.04599].
- **First defense**: Distillation
(2015) [@https://doi.org/10.48550/arxiv.1503.02531]. Training labels
of the distilled network are the predictions of the initially
trained network. Both networks are trained using temperature $T$.
- **First black-box attack**: Substitute model (2016).
- **Strong attack**: PGD (2017).
- **Strong defense**: Adversarial Training (2017).
- **First detection mechanisms**: Adversarial input detection methods
(2017). Instead of making the model stronger, we train a second
model to detect adversarial patterns. However, the attackers can
also generate patterns that avoid these detections (fool the
detector in a white-box fashion).
- "It is easy to bypass adversarial detection methods."
(2017) [@https://doi.org/10.48550/arxiv.1705.07263].
- Defenses at ICLR'18 (2018): input perturbation, adversarial input
detection, adversarial training, etc.
- "Defenses at ICLR'18 are mostly ineffective.": Obfuscated gradients
(2018).
- Barrage of Random Transforms (2019). One only needs to apply many
transformations sequentially in a random fashion.
- **2020 and beyond**: We should *stop* the cat-and-mouse game between attacks
  and defenses; it is a dead end in which attack and defense keep spiraling
  around each other. Diversifying and randomizing (e.g., BaRT) is a promising
  approach. However, the constant spiral of whether the attacker or defender
  has more capacity to generate attacks/defenses is not very interesting from
  an academic perspective.
There are two main alternatives one may choose to work on:
- Certified defenses making sure there is *no attack* in the $L_p$
ball.
- Dealing with realistic threats rather than unrealistic worst-case
threats.
### Towards Less Pessimistic defenses
Usually, the considered attacks are way too strong. Instead, we should
work more on (1) defenses against black-box attacks, which is an
exciting subfield of adversarial attacks, or (2) defenses against
non-adversarial, non-worst-case perturbations (OOD gen., domain gen.,
cross-bias gen.). These are what we have learned in the previous
chapters. Many researchers who used to study adversarial perturbations
are now working on general OOD generalization and naturally shifting
distributions.
# Explainability
## Introduction
![Schematic illustration of the main transmission channels of monetary
policy decisions, taken from [@asset]. Controlling price developments
requires fine-grained control.](gfx/03_asset.jpg){#fig:control
width="0.6\\linewidth"}
If we understand a system and its underlying mechanisms well, we can use
the system to control something. An example is the economy: As we
control official interest rates,[^23] we control the amount of money in
the market. Official interest rates affect many components of the
economy (e.g., bank rates, exchange rates, or asset prices) and finally
also affect price developments (e.g., domestic prices and import
prices), all through a highly complex procedure. This is illustrated in
detail in Figure [3.1](#fig:control){reference-type="ref"
reference="fig:control"}. There is always a new situation coming (shocks
outside the central bank's control). One cannot solely rely on
experience, as we do not have so much history of the market to base our
decisions on previous experiences when we face new situations. It is
essential to *know what is happening in the system* to perform control.
When faced with a black box system, we do not understand its inner
workings. For example, we do not exactly understand why a self-driving
car is following the road in one case but not following it in the other.
As we cannot control the system precisely, we cannot fix it when it is
malfunctioning.
### Ways to Control Undefined Behavior
::: definition
Explanation An explanation is an answer to a
*why*-question. [@DBLP:journals/corr/Miller17a]
:::
::: definition
Interpretability Interpretability is the degree to which an observer can
understand the cause of a decision. [@biran2017explanation]
:::
::: definition
Explainability Explainability is post-hoc
interpretability. [@lipton2018mythos] It is the degree to which an
observer can understand the cause of a decision after receiving a
particular explanation.
:::
::: definition
Justification A justification explains why a decision/prediction is good
but does not necessarily aim to explain the actual decision-making
process. [@biran2017explanation] It is also not necessarily sound.
:::
Two general ways exist to control undefined behavior in OOD (novel)
situations: using unit tests and fixing only after understanding.
**An infinite list of unit tests and data augmentation.** We were
looking into this in previous sections. In particular, in
Section [2.10](#sssec:identify){reference-type="ref"
reference="sssec:identify"}, we saw how we can identify spurious
correlations in our model, and in
Section [2.11](#sssec:overview){reference-type="ref"
reference="sssec:overview"}, we saw how we can *incorporate* samples
from different domains (e.g., unbiased samples) into the training
procedure to obtain more robust models. Our goal is to let a model work
well in any new environment. For evaluation purposes, we introduce a new
evaluation set every time, e.g., introduce ImageNet-{A, B, C, D, ...}. A
natural next step is to augment our network's *training* with samples
from ImageNet-{A, B, C, D, ...} and seek new evaluation sets. We are
sequentially conquering different unit tests, hoping that we eventually
get a strong system that works well in any situation. But is that
*really* going to happen?
**Understand first, fix after.** The goal here is the same as before:
make a model that works well in any new environment. For evaluation, we
examine cues utilized by the model (explainability). If we understand
that the model is not utilizing the right cue for recognition, then we
have a way to control this. We regularize the model later to choose cues
that are generalizable. We do *not* evaluate whether the model works
well on ImageNet-{A, B, C, D, ...}, as we directly control the used
cues. We regularize the model to choose generalizable cues (using
feature selection). This seems to be the more scalable approach. An
infinite number of unit tests will probably not solve all our problems.
### Explainability as a Base Tool for Many Applications
There are numerous applications that require the *selection of the
'right' features*. In fairness, we wish to eliminate, e.g., demographic
biases, which requires us to select features that do not take demographic
aspects into account. In the field of robustness, we also have to select
powerful features to combat distribution shifts.
There are also many applications that require *better understanding* and
*controllability*. One example is ML for science where the aim is to
discover scientific facts from (usually) high-dimensional data. Here,
understanding and control is the *end goal*. We can also consider the
task of quickly adapting ML models to downstream tasks (e.g., GPT-3 and
other LLMs). If we understood what GPT-3 or other LLMs do/know, we could
probably quickly adapt them to downstream tasks by only choosing the
parts or subsystems responsible for useful utilities for the downstream
task. In that case, we might not even need any fine-tuning.
::: definition
Attribution Attribution can be understood as the assignment of a reason
for a certain event. It is often used in the field of Explainable
Artificial Intelligence (XAI) to describe attributing factors to a
model's behavior in an explanation. Such factors are often selected from
(1) the input's features we are explaining (be it the raw input features
or intermediate feature representations of NNs), (2) the elements of the
training set, or (3) the model's parameters.
:::
::: definition
Explanation by Attribution An explanation by attribution method is a
function that takes an input $x$ and a model $f$ and outputs the
explanation of which features/training samples/parameters contribute
most to the prediction $f(x; \theta)$ of the model for input $x$.
:::
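To make this definition concrete, the sketch below shows one (toy) instance of
such a function for a model `f` with a scalar output. The leave-one-out scoring
rule, the function name, and the baseline value are illustrative choices of
ours, not a method from the literature.

```python
import numpy as np

def leave_one_out_attribution(f, x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Toy explanation by attribution: score feature i by how much the model's
    scalar output drops when feature i is replaced by a baseline value."""
    scores = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        x_cf = x.copy()
        x_cf[i] = baseline              # "remove" feature i
        scores[i] = f(x) - f(x_cf)      # contribution of feature i to f(x)
    return scores
```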
Several applications require a *better understanding of the training
data*. Consider the detection of private information in a training
dataset. Instead of attributing to the test data, we can also reason
back to the training data. For example, if our model seems to have
learned something private from user data and users can even be
identified based on this information, it would be very informative to be
able to trace back specific predictions to the training data (and remove
private data from the training set or make sure that such information
cannot leak). If an LLM outputs something that looks like someone's home
address, tracking down where this information came from in the training
set is very informative for those who audit the training data.
Attributing to the original authors in the training data is an
increasingly popular and useful task. Example questions include "What
prior art made DALL-E generate a certain image? What authors can be
attributed?" XAI can give answers to such questions.
Finally, let us discuss applications requiring *greater trust*. One
example is ML-human expert symbiosis where a human expert is working
with ML to generate better outcomes. Trust is also needed in high-stakes
decision areas: for example, finance, law, and medical applications.
### Explainability as a Data Subject's Right
Nowadays, explanations are stipulated in law [@Goodman_2017]. XAI has
close ties to national security -- the research field originated from
the [DARPA XAI
program](https://www.darpa.mil/program/explainable-artificial-intelligence)
of the US, in 2016. The EU also considered AI legislation crucial --
GDPR has an article about automated decision-making, and the [AI
act](https://artificialintelligenceact.eu) is an even newer *proposal*
of harmonized rules on general artificial intelligence systems. A common
theme of AI legislation is that suitable measures are needed to
safeguard a data subject's rights, freedom, and legitimate interests.
Data subjects have a right to request explanations in automated
decision-making and to obtain human intervention. Critical decisions are
made about humans by automated systems (ML) using their personal
data, e.g., in CV preselection or loan applications. The data subject has
the right to know which feature caused, e.g., a loan rejection.
#### Three Key Barriers to Transparency
There are mainly three barriers to transparency. Let us briefly discuss
these.
**Intentional concealment.** For example, a bank might intentionally
conceal their decision procedure for loan rejection. Decision-making
procedures are often kept from public scrutiny.
**Gaps in technical literacy.** For example, even if the bank is
enabling insight into its decision-making procedure, people may not be
able to understand the raw code. For most people, reading code is
insufficient.
**Mismatch between actual inner workings of models and the demands of
human-scale reasoning and styles of interpretation.** This is perhaps
the most technical aspect this book seeks answers to.
Human-comprehensibility was highlighted as a crucial aspect of XAI
methods by several
researchers [@DBLP:journals/corr/Miller17a; @molnar2020interpretable; @belle2021principles].
If we show the weights of a model to a human, it is unlikely that
they will extract any meaning. We need summarization, dimensionality reduction,
and attachment of human-interpretable concepts.
We have answered "Why is an explanation needed?" Let us turn to "When is
an explanation needed?"
### When is an explanation needed?
The following points are inspired
by [@https://doi.org/10.48550/arxiv.1702.08608; @keil2006explanation].
Explanations may highlight an incompleteness/problem. In particular,
explanations are typically required when something does not work as
expected. When everything is working well, we usually do not question
why something is working. When something does not work, we start raising
questions.
### When is an explanation *not* needed?
First, we discuss a list of examples
from [@https://doi.org/10.48550/arxiv.1702.08608]. We also argue why
this list of examples might not be descriptive enough.
- **Ad servers.** Our remark is that it is a request of society in
general to be prompted for consent before being shown targeted ads,
and also to gain insight into how profiling works.
- **Postal code sorting.** Even though in general, society might not
care much about the inner workings of post offices, explanations
might still be needed for debugging such sorting systems or for
unveiling potential security risks.
- **Aircraft collision avoidance systems.** Again, explanations for
such systems are generally not requests of society. Still, the
aviation company must be in control of all situations that might
arise, and for that, explanations are great tools.
The above list lacks *recipients*. Whether an explanation is needed in a
certain situation depends on the *explainee*. A person not using the
internet might, indeed, not care about ad servers. Similarly, a person
who does not work as a developer for a post office might not need
explanations about the sorting algorithm. Still, we can almost always
find target groups for explanations about any topic.
The general reasoning of [@https://doi.org/10.48550/arxiv.1702.08608] is
sound: generally, we might not need explanations when
1. there are no significant consequences for unacceptable results, or
when
2. the problem is sufficiently well-studied and validated in real
applications that we *trust* the system's decision (even if the
system is not perfect).
However, explanations are always great tools for exploratory analysis.
## Human Explanations
### How do humans explain to each other?
We discuss Tim Miller's work, titled "[Explanation in Artificial
Intelligence: Insights from the Social
Sciences](https://arxiv.org/abs/1706.07269)" [@DBLP:journals/corr/Miller17a].[^24]
According to [@malle2006mind], people ask for explanations for two main
reasons:
1. **To find meaning.** To reconcile the contradictions or
inconsistencies between elements of our knowledge structures. We are
trying to figure out where the contradiction in our knowledge lies.
There are often contradictions between our understanding and the
status quo in the outside world.
2. **To manage social interaction.** To create a shared meaning of
something, change others' beliefs and impressions, or influence
their actions. Example questions include "Why am I doing this? Why
are you doing this?" But also: "If I believe what you are doing has
a greater cause, I can also align my action to what you are doing."
Both are important for XAI systems.
- **Finding meaning in XAI.** "Why is this model not doing as I
expect? Where is this inconsistency coming from?"
- **The social aspect of XAI.** We want to be able to share our way of
thinking with the machine, and we expect it also to be able to do
the same.[^25]
#### Human-to-human explanations are ...
**...contrastive.** Explanations are sought in response to particular
counterfactual cases. People usually do not ask, "Why did event $P$
happen?", they ask, "Why did event $P$ happen instead of some event
$Q$?" Even if the apparent format is the former, it usually *implies* a
hidden foil (i.e., the alternative case). **Example**: For the question
"Why did Elizabeth open the door?", there are many possible foils. (1)
"Why did Elizabeth open the door, rather than leave it closed?" -- a
foil against the action. (2) "Why did Elizabeth open the door rather
than the window?" -- a foil against the object. (3) "Why did Elizabeth
open the door, rather than Michael opening it?" -- a foil against the
actor. A brief criticism of XAI is that the questions asked often do not
have any foil in mind in general. We ask questions like, "Why was this
image categorized as $A$?" It would be perhaps less ambiguous to ask,
"Why is this image categorized as $A$, not $B$?" This formulation makes
the foil clear. Another way to extend the question: "Will this image
still be categorized as $A$ even if the image is modified?" We will see
that this kind of question is implied in many XAI systems. In a sense,
input gradients are asking such contrastive questions.
**...selective.** When someone asks for an explanation for some event,
they are usually not asking for a complete list of possible causes but
rather a few important reasons and causes relevant to the discussion at
hand. Humans are adept at selecting one or two relevant key causes from
a sometimes infinite number of causes as the explanation. If we generate
all kinds of causes for explaining a single event, the causal chain can
be too large and hard to handle for the explainee. The principle of
simplicity dictates that the explainer should not overwhelm the
explainee.
**...social and context-dependent.** Philosophy, psychology, and
cognitive studies suggest that we are not explaining the same thing to
everyone -- we change the way we explain based on whom we are talking
to. The way we explain depends greatly on our model of the other person.
People employ cognitive biases and social expectations. Explanations are
a transfer of knowledge, presented as part of a conversation or
interaction. If a person we are talking to does not know something, we
are filling in the gap in their understanding. If they seem to
understand the subject well, we can share less obvious causes for an
event too. Explanations are thus presented *relative* to the explainer's
beliefs about the explainee's beliefs.
**...interactive.** Through the exchange of explanations and
confirmation of understanding, we can continuously stay on the same
page. The explainee can let the explainer know which subset of causes
is relevant for them. The explainer can then select a
subset of that subset based on other criteria. The explainer and the
explainee can interact further and argue about explanations. In XAI,
there have been relatively few works on interactive explanations so far.
Typically, we generate human-agnostic explanations that should work for
everyone. Based on human interactions, we should be able to generate
personalized explanations.
Because of these properties, there is no single correct answer to
"Why?".
## Properties of Good Explanations
### What are good explanations? {#ssec:good}
From now on, we will be using more refined terminologies that are also
used in XAI. The properties of a good explanation we deem most important
are listed and explained below.
**Soundness/faithfulness/correctness.** The explanation should *identify
the true cause for an event*. This is the primary focus of current XAI
evaluation: The attributions should identify the true causes (that the model
used) for predicting a certain label. It is also high on the list of
desiderata from domain experts [@lakkaraju2022rethinking]. ("What do you
need from explainability methods?") However, it is important to
highlight that this is not the only criterion for a good explanation.
**Example**: "Why did you recognize a bird in this image?" If the model
points to a feature that does not contribute to its prediction of
'bird', then its explanation is not sound/faithful. Another example is
the following. A doctor wants to know the actual thought process of the
system rather than just a likely reasoning from a human perspective. By
understanding what is going on inside recognition mechanisms, we might
be able to learn more than what humans are currently capable of
extracting from an image.
**Simplicity/compactness.** The explanation should *cite fewer causes*.
A good balance is needed between soundness and simplicity (such that
humans can handle the explanation).
**Generality/sensitivity/continuity.** The explanation should *explain
many events*. They should not only explain very specific events -- one
usually seeks a general explanation. In XAI, generality means that the
explanation should apply to many (similar) samples in the dataset.
**Relevance.** The explanation should be *aligned with the final goal*.
This criterion asks "What do we need the explanation for?" If we need it
for fixing a system, the explanation should help us fix it. If we need
explanations for understanding, a good explanation should then let us
understand the event in question.
**Socialness/interactivity.** Explaining is a social process that
involves the explainer and the explainee. In XAI, the explainer is the
XAI method, and the explainee is the human. As mentioned before, the
explanation process is dependent on the explainee. A good explanation
could consider the social context and adapt and/or interact with the
explainee. It should not always cite the most likely cause but also
retrieve causes that are interesting for the user. These do not
necessarily coincide.
**Contrastivity.** The foil needs to be clearly specified. Ideally, a
method should be able to tell how the model's response changes when we
change something from $A$ to $B$. An example of contrastivity is shown
in the paper "[Keep CALM and Improve Visual Feature
Attribution](https://openaccess.thecvf.com/content/ICCV2021/html/Kim_Keep_CALM_and_Improve_Visual_Feature_Attribution_ICCV_2021_paper.html)" [@kim2021keep],
which we are going to discuss later in more detail. An illustration of
contrastive explanations is given in
Figure [3.2](#fig:contrastive){reference-type="ref"
reference="fig:contrastive"}.
**Human-comprehensibility/coherence/alignment with prior knowledge of
the human.** The given explanation should fit the understanding and
expectations of the human. It should also be presented in a format that
is natural to humans.
The first property, soundness, is much more often used in XAI for
evaluation. Nevertheless, the others are just as important.
![Example of contrastive explanations in XAI in the last column, taken
from [@kim2021keep].](gfx/03_calm0.pdf){#fig:contrastive
width="0.6\\linewidth"}
### Intrinsically Interpretable Models
Intrinsically interpretable algorithms are generally deemed
interpretable and need no post-hoc
explainability [@https://doi.org/10.48550/arxiv.1702.08608]. A DNN is
not like this. We can generate an explanation for a prediction post-hoc,
making it explainable, but the DNN itself is not interpretable.
#### Sparse Linear Models with Human-Understandable Features
Sparse linear models make a prediction $x$ using the following formula:
$$x = \sum_{i \in S} c_i \phi_i \qquad |S| \ll m.$$ The prediction is a
simple linear combination of features $\phi_i$ with coefficients $c_i$.
We also understand the individual $\phi_i$s very well, as they are
human-understandable. Sparse linear models further contain a small
number of coefficients: There are few enough coefficients for humans to
understand the way the model works. Every feature $\phi_i$ is
contributing to the prediction by giving a factor $c_i \phi_i$ to the
sum. We know the exact contributions. We are citing every $\phi_i$ as
the cause of the outcome $x$. This explanation is very general -- it
works for any $\phi_i$ and any value thereof.
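As a concrete illustration, the sketch below fits a sparse linear model with
scikit-learn's Lasso and reads off the per-feature contributions $c_i \phi_i$;
the feature names and the data are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data with 5 hypothetical human-understandable features.
feature_names = ["age", "income", "debt", "n_accounts", "late_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1).fit(X, y)   # the L1 penalty drives most coefficients c_i to 0

phi = X[0]                           # feature values of one test input
contributions = model.coef_ * phi    # attribution c_i * phi_i per feature
for name, c, contrib in zip(feature_names, model.coef_, contributions):
    if c != 0.0:                     # the small active set S
        print(f"{name}: c_i = {c:+.2f}, contribution c_i*phi_i = {contrib:+.2f}")

print("prediction:", model.predict(phi[None])[0])
print("intercept + sum of contributions:", model.intercept_ + contributions.sum())
```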
::: information
Feature Attribution vs. Feature Importance By construction, when
$\phi_i = 0$, the feature contributes with a factor of zero to the final
prediction. By treating $c_i\phi_i$ as our attribution score for feature
$i$, we cannot give a non-zero attribution score to features whose value
is zero. Depending on what $\phi_i$ is encoding, this might have
surprising consequences. For example, when the individual features are
pixel values ($\phi_i \in [0, 1]$), we cannot attribute to black pixels.
To resolve this, we note that attribution scores are not synonymous with
*feature importance* -- the score we obtain is the contribution of
feature $\phi_i$ to the prediction $x$, which is exactly $c_i \phi_i$. This
does not mean that this feature is unimportant; it just means that this
term contributed the value 0 to the final prediction. The importance of
the feature is better measured by the coefficient $c_i$, but one can
only directly interpret it as the (signed) importance of the feature in
the prediction of the model if (1) the features are uncorrelated and (2)
they are on the same scale (e.g., by standardization). The coefficient
$c_i$ can also be used as an attribution score: while $c_i\phi_i$
measures the *net* contribution of the feature for the prediction of
$x$, the coefficient $c_i$ measures the *relative* contribution of the
feature if we slightly change that feature.
:::
#### Do these sparse linear models explain well according to our criteria?
Let us consider the aforementioned criteria and evaluate sparse linear
models according to them.
**Soundness/faithfulness/correctness.** By definition, every feature is
a sound cause for the outcome, with contribution given by the terms
$c_i \phi_i$.
**Simplicity/compactness.** One can control this aspect with the number
of features. Sparsity enforces the model to use few causes. We could not
understand the model's decision-making process if we had millions of
features.
**Generality/sensitivity/continuity.** By definition, whenever the cited
causes happen, similar outcomes follow.
**Relevance.** This criterion always depends on the final goal. Is it to
debug? Or to understand? For debugging purposes, we measure the quality
of the explanation in terms of whether it is actually helping a human
find features that are not working and whether the human can fix the
system based on the explanation.
**Socialness/interactivity, Contrastivity.** One can simulate
contrastive reasoning from the ground up. Instead of having
$\phi_1, \dots, \phi_{|S|}$, we can leave one out and see what happens
afterward. This is the counterfactual answer for the effect of leaving a
feature out. However, the models are not social and interactive by
default. An additional module is required for that. The explanation is
also not personalized; it does not consider the level of knowledge of
the explainee. We can attach a chatbot or interactive system to make
interactivity possible.
#### Decision Trees with Human-Understandable Criteria
In a decision tree that follows human-understandable criteria, all
deciding features (criteria) follow a human-understandable concept.
Unless the depth of the tree is too large, or we have too many branches,
the entire tree is human-interpretable. **Example**: The task is to
predict the animal breed from a small number of well-known
properties of animals. Humans can then directly understand how the
decision tree makes a particular prediction.
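The following is a minimal sketch of such a tree on made-up animal data (the
features and labels are our own toy example); `export_text` prints the entire
decision process in a human-readable form.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, made-up data: each row is [weight_kg, ear_length_cm, has_spots].
X = [[30, 10, 1], [4, 7, 0], [25, 12, 0], [5, 6, 0], [28, 11, 1], [3, 8, 1]]
y = ["dalmatian", "cat", "labrador", "cat", "dalmatian", "cat"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole decision process is short enough to be read directly by a human.
print(export_text(tree, feature_names=["weight_kg", "ear_length_cm", "has_spots"]))
```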
## Taxonomies of Model Explainability
There are different ways to divide the set of explainability methods.
The correlated "axes" of variation are as follows.
#### Intrinsic vs. Post-hoc
**Intrinsic interpretability** means the model is interpretable by
design (i.e., sound, simple, and general at once). The way the input is
transformed into the output is interpretable. **Examples**: sparse
linear models and decision trees.
**Post-hoc explainability** means the model lacks interpretability, and
one is trying to explain its behavior post-hoc. **Example**: turning a
DNN into a more interpretable system. The intrinsically interpretable
models discussed before are very useful throughout our studies of
explainability: We often turn a part of our network into a linear model
and analyze that (e.g., Grad-CAM [@selvaraju2017grad] in
[3.5.17](#sssec:gradcam){reference-type="ref"
reference="sssec:gradcam"}) or approximate the whole model using a
sparse linear model and explain it in a local input region (e.g.,
LIME [@ribeiro2016should] in [3.5.9](#sssec:lime){reference-type="ref"
reference="sssec:lime"}).
#### Global vs. Local
**Global explainability** means the given explanation is not on a
per-case basis. We do not want to understand one particular event but
rather the entire system, allowing us to understand per-case decisions
as well. An example of global explainability is the SIR epidemic
model [@kermack1927contribution], shown in
Figure [3.3](#fig:sir){reference-type="ref" reference="fig:sir"}. Here,
$\beta, \gamma$ are rates of transitions. The system is based on the
differential equations in Figure [3.3](#fig:sir){reference-type="ref"
reference="fig:sir"} that explain the entire system. We simulate the
future based on our choice of $\beta, \gamma$. This gives an overall
understanding of the mechanism but is often impossible to give for
complex, deep black-box models. It is particularly useful for scientific
understanding and simulation of counterfactuals. ("What would happen if
we changed some parameters?")
![The SIR epidemic model is an example of a global explainability tool.
The differential equations above determine the behavior of the system.
Having chosen the parameters $\beta, \gamma$, we simulate the future.
Figure taken from [@luz2010modeling].](gfx/03_sir.png){#fig:sir
width="0.35\\linewidth"}
**Local explainability** means we want to understand the decision
mechanism behind a particular case/for a particular input. **Example**:
"Why did my loan get rejected?" -- explanations for this do not lead to
a global understanding of the system. Local explainability is the main
focus/interest of the book. This type of analysis is feasible in
somewhat sound ways even for complex models.
#### Attributing to Training Samples vs. Test Samples
A model is a function approximator. It is also an output of a training
algorithm. The input to the training algorithm is the training data and
other ingredients of the setting. We write the model prediction as a
function of two variables:
$$Y = \text{Model}(X; \theta) = \text{Model}(X; \theta(X^{\text{tr}}))$$
where $Y$ is our prediction, $X$ is the test input, $\theta$ are the
model parameters, and $X^\text{tr}$ is the training dataset. The
prediction of our model is implicitly also a function of the training
data.
We can trace back (i.e., attribute) the output $Y$ for $X$ to either (1)
particular features $X_i$ of the test sample $X = [X_1, \dots, X_D]$, or
(2) particular training samples $X^{\text{tr}}_i$ in the training set
$X^\text{tr} = \{X^\text{tr}_1, X^\text{tr}_2, \dots, X^\text{tr}_N\}$.
One may also attribute the prediction to a particular parameter
$\theta_j$ or a layer, but individual parameters are often not very
interpretable to humans. Usually, we "project parameters" onto the input
space by gradient-based optimization.
::: information
Correlations in the axes of variation Models that are intrinsically
interpretable are interpretable on a global scope -- they give an
understanding of the whole model. But they can also be used to explain
particular decisions based on an input. While explaining local decisions
is also possible, the focus is rather on the global scale.
Methods with intrinsic interpretability also do not *have to* directly
attribute their predictions to anything. However, this is still often
possible, e.g., in the case of sparse linear models.
:::
### Soundness-Explainability Trade-off
Explanations try to linearize a model in some way. What humans can
naturally understand is a summation of a few features (i.e., a sparse
linear model). There is an inherent *soundness-explainability
trade-off*. One extreme is the original DNN model by itself: It is by
definition sound but not interpretable. Another extreme is creating a
sparse linear model as the global linearization of a DNN around a
particular point of interest. It cannot be sound as a global explanation
but is very interpretable.
Between the two extremes, explanation methods try to linearize different
bits of the model for either the entire input space (generic input) or
for a small part of it. It is relatively easy to linearize the full
model for a small part of the input space. (For example, LIME, discussed
in Section [3.5.9](#sssec:lime){reference-type="ref"
reference="sssec:lime"}.) It is also relatively easy to linearize a few
layers of the model for generic input. (For example, Grad-CAM, discussed
in Section [3.5.17](#sssec:gradcam){reference-type="ref"
reference="sssec:gradcam"}.) However, it is impossible to faithfully
linearize the full model for generic input. Then, we are back to global
linearization.
### Current Status of XAI Techniques
XAI research is often harshly criticized for being useless.
::: center
"Despite the recent growth spurt in the field of XAI, studies examining
how people actually interact with AI explanations have found popular XAI
techniques to be ineffective, potentially risky, and underused in
real-world contexts." [@ehsan2021expanding]
:::
People working in human-computer interaction are very critical of XAI
techniques in ML conferences, as they often do not take humans into
account appropriately yet. It is essential to condition our mindset to
help those who wish to actually use XAI techniques rather than working
on techniques that look fancy or theoretically beautiful (e.g.,
completeness axioms, [3.5.5](#sssec:ig){reference-type="ref"
reference="sssec:ig"}).
::: center
"The field has been critiqued for its techno-centric view, where
"inmates \[are running\] the asylum", based on the impression that XAI
researchers often develop explanations based on their own intuition
rather than the situated needs of their intended audience (final goal is
not taken into account). Solutionism (always seeking technical
solutions) and Formalism (seeking abstract, mathematical solutions) are
likely to further widen these gaps." [@ehsan2021expanding]
:::
We want to move away from developing such XAI techniques and focus on
the demands of those needing XAI systems.
**Note**: Formalism *is* helpful (both in method descriptions and
evaluation), but the most important aspect should be whether these
methods actually help people. Formalism is not the end goal.
## Methods for Attribution to Test Features
![Simplified high-level overview of the CAM method. The model makes a
prediction ('cat'), which is then used to select the appropriate channel
of the Score Map that describes the attribution scores for class 'cat'.
Finally, an optional thresholding can be employed to make a binary
attribution mask.](gfx/03_cam_simple.pdf){#fig:camsimple
width="\\linewidth"}
So far, we have laid down what we desire from explainable ML. In this
section, we discuss actual methods for extracting explanations from
DNNs. In particular, we will look at methods that attribute their
predictions to test features. Instead of the "What is in the image?"
question, explanation methods seek to answer the "Why does the model
think it is the predicted object?" question.[^26] For example,
CAM [@https://doi.org/10.48550/arxiv.1512.04150] produces a score map
for the predicted label, as illustrated in
Figure [3.4](#fig:camsimple){reference-type="ref"
reference="fig:camsimple"}. We can threshold it to get a
foreground-background mask as an explanation (which is not necessarily a
mask of the GT object location, as the network being explained might have,
e.g., background or texture biases). We can also leave out thresholding
and keep continuous values in the map.
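A minimal sketch of this pipeline is shown below, assuming a recent torchvision
ResNet-18 (a network whose classifier is a single linear layer on top of
globally average-pooled convolutional features); the dummy input, variable
names, and threshold are illustrative choices, not the reference CAM
implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)  # stand-in for a (normally ImageNet-normalized) image

with torch.no_grad():
    # Feature maps before global average pooling: shape (1, 512, 7, 7).
    features = torch.nn.Sequential(*list(model.children())[:-2])(x)
    pred_class = model(x).argmax(dim=1).item()

    # Score map: weight the 512 channels by the fc weights of the predicted class.
    w = model.fc.weight[pred_class]                    # shape (512,)
    cam = torch.einsum("c,bchw->bhw", w, features)     # (1, 7, 7) score map
    cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]

    # Optional thresholding to obtain a binary attribution mask.
    mask = cam > cam.flatten().quantile(0.8)
```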
### What features to consider in attribution methods to test features?
::: definition
Superpixel Superpixels are groupings of pixels respecting color and edge
similarity (that very confidently belong to the same object instance).
It gives us a finer grouping than semantic segmentation (in the sense
that the pixels are not grouped into only a couple of categories, but
rather into many patches of pixels that closely belong together) but a
coarser one than the raw pixels. An illustration is given in
Figure [3.5](#fig:superpixels){reference-type="ref"
reference="fig:superpixels"}. There have been many improvements in
superpixel technology until a few years ago. Nowadays, not many people
are looking into superpixel methods. These are often used as features in
explainability for images. They reduce the number of features we have to
deal with without sacrificing soundness.
:::
![Illustration of superpixels of various granularities, which is a
popular choice of features for attribution maps. Figure taken
from [@superpixels].](gfx/03_superpixels.png){#fig:superpixels
width="0.4\\linewidth"}
![Illustration of several feature representations for the same image.
There is a wide range of features we can attribute
to.](gfx/03_catfeatures.pdf){#fig:catfeatures width="\\linewidth"}
We generalize the notion of a feature to any aggregation or description
of the input to the model. Possible features are listed below for visual
models taking an image as input. These are also illustrated in
Figure [3.6](#fig:catfeatures){reference-type="ref"
reference="fig:catfeatures"}.
1. **Single pixels.**
2. **Image patches.** We can aggregate pixels into image patches,
considering each patch as a feature.
3. **Superpixels.**
4. **Instance mask(s).**
5. **High-level attributes.** For example, attributes for a cat image
input can be Cute, Furry, Yellow eyes, Two ears, Animal, and Pet.
The values for each of these attributes can be percentages
representing how fitting a certain attribute is for the input. For
example, Two ears $\rightarrow$ 100% means the feature is maximized,
i.e., the attribute perfectly fits the input. **Note**: These are
*not* the attribution scores corresponding to the individual
attributes. Attribution scores are values describing how each of the
features influences the network prediction, whereas the attributes
*describe* the input. The attributes can be subjective, can point to
specific regions of the image, and can also describe, e.g., the
general feeling of the image.
For natural language models taking a token sequence as inputs, we often
use individual tokens/words as features. We can think about the
contribution of each token (or word) towards the final prediction (e.g.,
sentiment analysis), as considered in the paper "[A Song of
(Dis)agreement: Evaluating the Evaluation of Explainable Artificial
Intelligence in Natural Language
Processing](https://arxiv.org/abs/2205.04559)" [@neely2022song].
Examples of attributing to individual tokens can be seen in
Figure [3.7](#fig:nlp){reference-type="ref" reference="fig:nlp"}.
**Note**: Explanation methods can give significantly different results
for the same input, as shown in
Figure [3.7](#fig:nlp){reference-type="ref" reference="fig:nlp"}. This
has also been reported in the paper titled "[The Disagreement Problem in
Explainable Machine Learning: A Practitioner's
Perspective](https://arxiv.org/abs/2202.01602)" [@krishna2022disagreement]:
local methods approximate the model at a particular test point $x$ in
local neighborhoods, but there is no guarantee that they use the same
local neighborhood. Indeed, since different methods use different loss
functions (e.g., LIME with squared error vs. gradient maps with gradient
matching), it is likely that different methods produce different
explanations.
![Sentiment analysis example. Explanation methods give significantly
different attributions. "The average Kendall-$\tau$ correlation across
all methods for this example is 0.01." [@neely2022song] Figure taken
from [@neely2022song].](gfx/distill-ss-sample1074.pdf){#fig:nlp
width="0.5\\linewidth"}
::: information
Choice of Features If we gather all the features of an image, do we have
to obtain the original image by definition? The answer is *yes*; we
generally wish to *partition* the image with features.
- For partitioning, one may choose [panoptic
segmentation](https://arxiv.org/abs/1801.00868) [@kirillov2019panoptic],
a combination of instance segmentation and semantic segmentation.
This considers both object and stuff masks (where stuff refers to,
e.g., 'road', 'sky', or 'sidewalk'). Another option is regular
semantic segmentation, which can also handle various stuff
categories. The
[COCO-Stuff](https://github.com/nightrome/cocostuff) [@caesar2018cvpr]
dataset gives many examples of how semantic segmentation can
partition images in a detailed way.
- Considering only image parts corresponding to different instance
masks as features is problematic, as stuff information is thrown
away (we get rid of stuff categories), and we do not have a
partition of the original image anymore.
:::
A feature is thus a general concept. The task for feature attribution
methods is to determine which feature contributes how much to the
model's prediction.
In the last section, we have seen that counterfactual (i.e.,
contrastive) reasoning matters a lot in explaining to humans. The most
basic way to explain a model's decision in a counterfactual way is by
asking a question of the form "Is the input image still predicted as a
cat if this feature is missing?" We remove a particular set of pixels
from the image and see how the model's prediction changes. We have many
possibilities to encode what we mean by a "missing" pixel. For example,
we can fill them with black, gray, or even pink pixels (which are rarely
seen in natural images but do not intuitively encode a baseline image).
We can even choose to inpaint them based on the context. One could also
ask counterfactual questions of the form "Is the input image still
predicted as a cat if this feature is replaced with something else?" In
this case, we can, e.g., insert an image of a dog in the "missing"
patch, illustrated in Figure [3.8](#fig:cat){reference-type="ref"
reference="fig:cat"}. After carrying this out for all pixels, we get an
answer to "Which features contribute most to predicting a cat rather
than a dog?" Even in the simple setting of removing a square patch from
an image, many things must be considered.
![Three possibilities of counterfactual explanations. The left image
encodes the missingness of a patch by orange pixel values. The center
image encodes missingness by gray pixel values. These give answers to
the question "How would the prediction change if we removed a patch of
the image?" The right image asks a slightly different question: "Which
features contribute to predicting a cat rather than a dog?", as the dog
image does not aim to encode missingness, i.e., we cannot talk about
removing the patch.](gfx/03_cat.pdf){#fig:cat width="0.8\\linewidth"}
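A naive sketch of the patch-removal analysis described above is given below
(PyTorch, with a constant gray fill as one possible encoding of "missingness");
the function name, patch size, and fill value are our own illustrative choices.

```python
import torch

@torch.no_grad()
def occlusion_map(model, x, target, patch=16, fill=0.5):
    """Naive patch-removal analysis: slide a patch over the image, overwrite it
    with a constant `fill` value (here: gray), and record the drop in the score
    of the target class. One forward pass per patch position; x is (1, 3, H, W)."""
    base = model(x).softmax(dim=1)[0, target].item()
    _, _, H, W = x.shape
    drops = torch.zeros((H + patch - 1) // patch, (W + patch - 1) // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            x_cf = x.clone()
            x_cf[:, :, i:i + patch, j:j + patch] = fill   # "remove" this patch
            score = model(x_cf).softmax(dim=1)[0, target].item()
            drops[i // patch, j // patch] = base - score  # contribution of the patch
    return drops
```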
### Intrinsically interpretable models support counterfactual evaluation by design.
In a DNN, when we change something in the input, it is highly unclear
how the forward propagation is influenced to obtain the final answer. In
decision trees, we can just change one attribute in any way and check
how the result changes (by selecting the other branch at a corresponding
attribute). We can do a full simulation quickly where we understand each
part of the decision-making process. We can still do the simulation for
a DNN, but we only observe the outputs (before and after the change) in
an interpretable way. We have no good intuition about what changes
*internally*.
A sparse linear model is just a summation. Every feature contributes
linearly to the final output. It is easy to interpret the relationship
between the features and outputs. We know the full effect on the output
of changing (or removing) one or many features. Our implicit aim is to
linearize our complex models in some way for interpretability.[^27] This
is a common mindset of attribution methods. Because of the linear
relationship between inputs and outputs, we do not have to compute
differences between outputs to study counterfactual evaluation. We
already know how the output changes by changing some inputs. This is
highly untrue for DNNs, requiring a forward pass each time. We will see
that under some quite strong assumptions, we can use the input gradient
and derivative quantities for counterfactual evaluations.
### Infinitesimal Counterfactual Evaluation in Neural Networks: Saliency Maps
We can perform the removal analysis for all input features for neural
networks, e.g., using a sliding window of patches as features. This,
however, takes very long for DNNs. For each image, one needs to compute
$N$ forward operations through a DNN, where
$$N = (\text{number of sliding windows per image}) \times (\text{number of ways to alter the window content}).$$
Doing this in real-time during inference on a single sample is
infeasible without sufficient computational resources for
parallelization. Doing it offline for an entire dataset also takes very
long if the dataset is large. One can use batching, but only a small
number of samples fit on the GPU usually.
However, we can consider a special case where our *features are pixels*
and the *perturbation is small* (infinitesimal). In this case, we can
compute counterfactual analysis quickly, at the cost of the huge
restriction of the perturbation size being small.
**Example**: Consider pixel $(56, 25)$ with original pixel value:
$(232, 216, 231)$. Suppose that all pixels are left unchanged except
this one where the new pixel value is set to $(233, 216, 231)$.[^28]
Further, suppose that the original cat score was 96.5%, but after the
change, the cat score for the perturbed image decreases to 96.4%. This
seems familiar: That is exactly how we approximate the gradient
numerically:
$$\frac{\partial f(x)}{\partial x_i} \approx \frac{f(x + \delta e_i) - f(x)}{\delta}, \qquad \delta \text{ small.}$$
In the ordinary sense, the gradient of the score (or probability) of the
predicted class with respect to an RGB pixel is a 3D vector (as the RGB pixel itself is
also a 3D vector). However, we will consider $x \in \nR^d$ as a
flattened version of an image and will also collapse the color channels.
We treat the elements of the resulting vector as pixels. Therefore, the
$i$th pixel direction does *not* correspond to the general definition of
a pixel in the following sections.
::: information
Discrete Representations of Color The 8-bit representation is just a
convention for RGB images. There exist 16- and 32-bit representations
too. The RGB scale is continuous.
:::
A very inefficient way to compute the attribution of each pixel is to
compute the forward pass (number of pixels + 1) times (perturbed images
plus original image, as the latter is shared in all gradient
approximations) to measure pixel-wise infinitesimal contribution. A
large approximate gradient signals a significant contribution of the
corresponding pixel for an infinitesimal perturbation (because of a
significant change in the score of the predicted class).
**Note**: Here, we consider the *relative* contribution of a pixel (as
we equate high contribution to a high *relative change* in the network
output for an infinitesimal perturbation), similarly to the sparse
linear model case where the relative contribution of feature $\phi_i$
was given by the coefficient $c_i$. Of course, this was just a special
case of the gradient for the sparse linear model case: if we
differentiate $x = \sum_{j \in S} c_j\phi_j$ with respect to $\phi_i$, we get back $c_i$
again.
::: information
On the Properties of Gradients The derivative
$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}$$ is a
normalized quantity. It gives the *relative* change in the function
output, given an infinitesimal change in the input.
:::
The smart way to compute changes in the output under infinitesimal
perturbations: compute one forward pass and one backward pass of the score of
the predicted class with respect to the input to measure attributions for this infinitesimal
perturbation. This answers the question "What will be the *relative
change* in the predicted score if we change a particular pixel by an
infinitesimal amount?"
This leads us to the definition of *Saliency/Sensitivity maps*.
::: definition
Saliency/Sensitivity Map The saliency or sensitivity map visualizes a
counterfactual attribution for an input corresponding to infinitesimal
independent per-pixel perturbations. It gives us a local explanation of
the model's prediction. There are two usual ways to compute it.
Denoting the saliency map for input $x \in \nR^{H \times W \times 3}$
and class $c \in \{1, \dotsc, C\}$ by $M_c(x) \in [0, 1]^{H \times W}$,
the
[SmoothGrad](https://arxiv.org/abs/1706.03825) [@https://doi.org/10.48550/arxiv.1706.03825]
paper
[computes](https://github.com/PAIR-code/saliency/blob/master/saliency/core/visualization.py#L17)
it as
$$(\tilde{M}_c(x))_{i, j} = \sum_{k} \left|\frac{\partial S_c(x)}{\partial x}\right|_{i, j, k}$$
(we take the $L_1$ norm of each pixel), and
$$(M_c(x))_{i, j} = \min\left(\frac{(\tilde{M}_c(x))_{i, j} - \min_{k \in \{1, \dotsc, H\}, l \in \{1, \dotsc, W\}}(\tilde{M}_c(x))_{k, l}}{P_{99}(\tilde{M}_c(x)) - \min_{k \in \{1, \dotsc, H\}, l \in \{1, \dotsc, W\}}(\tilde{M}_c(x))_{k, l}}, 1\right)$$
where $S_c(x)$ is the score for class $c$ given input $x$ and $P_{99}$
is the 99th percentile. This post-processing normalizes the saliency map
to the $[0, 1]$ interval and clips outlier pixels by considering the
99th percentile. Not clipping the outlier values could result in a
close-to-one-hot saliency map.
In [Simonyan
(2013)](https://arxiv.org/abs/1312.6034) [@https://doi.org/10.48550/arxiv.1312.6034]
(the original saliency paper), the authors compute it as
$$(\tilde{M}_c(x))_{i, j} = \max_{k} \left|\frac{\partial S_c(x)}{\partial x}\right|_{i, j, k}$$
and the normalization method is not disclosed.
:::
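The following PyTorch sketch follows the SmoothGrad-style recipe from the
definition (a minimal implementation of our own): one backward pass of the
class score, the per-pixel $L_1$ norm over the color channels, and
normalization with the 99th percentile.

```python
import torch

def saliency_map(model, x, target):
    """Minimal saliency map: absolute input gradient of the class score,
    L1-summed over the color channels, rescaled to [0, 1] with the 99th
    percentile as the upper bound. x has shape (1, 3, H, W)."""
    x = x.detach().clone().requires_grad_(True)
    score = model(x)[0, target]             # S_c(x), the class score
    score.backward()
    m = x.grad.abs().sum(dim=1)[0]          # L1 norm over RGB -> (H, W)
    lo, hi = m.min(), torch.quantile(m.flatten(), 0.99)
    return ((m - lo) / (hi - lo)).clamp(max=1.0)
```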
::: definition
First-Order Taylor Approximation Consider a function
$f: \nR^d \rightarrow \nR$. The first-order Taylor approximation of the
function $f$ around $x \in \nR^d$ is
$$f(x + h) \approx f(x) + \left\langle h, \nabla f(x) \right\rangle.$$
:::
**Backpropagation linearizes the whole model around the test sample.**
To see this, observe that the gradient is used to construct the
first-order Taylor approximation of the model around a particular test
sample, which is the tangent plane of the model around the test sample:
$$f(x + \delta e_i) - f(x) \approx \left\langle \delta e_i, \frac{\partial f(x)}{\partial x} \right\rangle = \delta \frac{\partial f(x)}{\partial x_i}$$
where $f$ gives the score for a fixed class $c$ that is omitted from the
notation. This tangent plane guarantees that the function output with
this linearized solution will be as close as possible (in the set of
linear functions) to the original function's output around the test
input of interest in an infinitesimal region. We give a local
(counterfactual) explanation with this linear surrogate model, as we
only consider an explanation for a single test input. With this
surrogate model, one can very cheaply compute input-based
counterfactuals. However, these will only be faithful to the original
model in a tiny region around the test input of interest.
**Note**: Our surrogate model is linear but is not guaranteed to be
sparse! It can still be hard to interpret when the input dimensionality
is huge. This is primarily the reason why, instead of looking at actual
gradient values, we visualize the dense gradient tensors in the form of
saliency maps.
#### Summary of Infinitesimal Counterfactual Attribution
With local gradients, we obtain
$$f(x + \delta e_i) - f(x) \approx \left\langle \delta e_i, \frac{\partial f(x)}{\partial x} \right\rangle = \delta \frac{\partial f(x)}{\partial x_i}$$
which measures contribution of each pixel $i$ with an infinitesimal
($\delta$) counterfactual.
#### Problem with Saliency Maps
![Example saliency map of image $x$ for the class 'gazelle', taken
from [@https://doi.org/10.48550/arxiv.1706.03825]. Saliency maps can be
challenging to interpret.](gfx/03_sensitivity.png){#fig:sensitivity
width="0.5\\linewidth"}
We visualize input gradients using saliency maps. These visualizations
are not particularly helpful, as they are very noisy and hard to
interpret further than a very coarse region of interest. An example is
shown in Figure [3.9](#fig:sensitivity){reference-type="ref"
reference="fig:sensitivity"}. **Note**: saliency maps are always a class
$c$. We almost always compute it the DNN's predicted class. We might ask
ourselves, "What do we actually get out of this?" We do not even see the
object in these input gradient maps. Gradient maps only represent how
much *relative* difference a tiny change in each pixel of $x$ would make
to the classification score for class $c$. It is debatable whether one
should measure attribution values based on such infinitesimal changes.
Negative contributions are counted as contributions here. This varies
from method to method, and no "good" answer exists. The $i$th element of
the gradient measures the relative response of the classification score
of class $c$ to a perturbation of the image in the $i$th pixel
direction. If it is positive, making the pixel more intensive results in
a locally positive classification score change. If it is negative, it
means we reach a higher classification score if we dim the pixel.
Sometimes we only want to attribute to pixels that induce a positive
change in the score when made more intensive. Sometimes we also want to
take negative influences into account.
::: information
Gradients and Soundness It is a *fact* that the gradient gives us the
*true* relative changes in the prediction considering per-pixel,
independent infinitesimal counterfactuals. It is very important to not
confuse this fact with the statement that gradients give perfectly sound
attributions in the sense that they flawlessly enumerate the true causes
for the network making a certain prediction.
Soundness, by definition, measures whether the attribution method
cites the true causes for the model to predict a certain class. As the
true causes are encoded in the model weights and the forward propagation
formula, which is not at all interpretable, it seems clear that *no
attribution method that presents significantly simpler reasoning can be
perfectly sound*. Saliency maps -- that *seek* to give sound
counterfactual explanations an infinitesimal perturbation -- make use of
such simple reasoning: linearizing the network around the input of
interest and taking the rates of change as attribution scores. The
linearization, the independent consideration of inputs (with which we
discard the possible influence of input feature correlations on the
network prediction), and the "arbitrary" normalization and aggregation
techniques of the 3D gradient tensor are all significant simplifications
that make saliency maps *impossible to be completely sound*.[^29] Even
if they *were* sound explanations under infinitesimal perturbations, the
question itself already seems oddly artificial: "Why did the network
make a certain prediction for input $x$ compared to an infinitesimally
perturbed version of it?". There is no reason why society would demand
answers to such questions.
Moreover, feature attribution methods restrict the explanations to the
features in the test input, but the true causes for a network to predict
a certain output can also lie in the training set samples and the
resulting model weights. Feature attribution methods only consider the
test input features as possible causes and make crude assumptions to
compute attribution scores. For the soundness of the *explanation*, the
attribution method has to give the exact causes of why the network made
a certain prediction for an input $x$. We argue that these exact causes
cannot be encoded in general into a map measuring infinitesimal
perturbations. Of course, most feature attribution explanations do not
claim to provide sound explanations. Instead, they aim to highlight
that, given an input, some features were more important in a certain
decision than others.
To summarize, the saliency map does not give a perfectly sound
attribution map for the predictions of the model on the input of
interest because it uses abstractions and simplifications to make the
explanation human-understandable.
**Note**: The soundness of an attribution method and the counterfactual
or non-counterfactual nature of explanations it gives are completely
independent. For non-counterfactual explanations, a sound attribution
method simply aims to give the true influence of each feature of the
original input on the network prediction *without* comparing to other
predictions.
:::
### SmoothGrad -- Smoother Input Gradients
The natural question is: Can we get smoother maps of attributions that
are more interpretable? To obtain smoother attribution maps than
saliency maps, *SmoothGrad*, introduced in the paper "[SmoothGrad:
removing noise by adding
noise](https://arxiv.org/abs/1706.03825)" [@https://doi.org/10.48550/arxiv.1706.03825],
computes gradients in the vicinity of the input $x$. It follows three
simple steps:
1. Perturb the input $x$ by additive Gaussian noise.
2. Compute the gradients of the perturbed images.
3. Average the gradients.
This gives us slightly less precise local attributions than the vanilla
gradient (which is as local as possible). It also results in much
clearer attribution maps because the added Gaussian noise and the
gradient noise cancel out by averaging while the main signal remains in
place. Examples are shown in
Figure [3.10](#fig:smoothgrad){reference-type="ref"
reference="fig:smoothgrad"}. Combining gradients of different
perturbations can reduce the noise and perhaps allow us to see more
relevant attribution scores. Formally,
$$\hat{M}_c(x) = \frac{1}{n}\sum_{i = 1}^n M_c(x + \epsilon_i)\qquad \epsilon_i \sim \cN(0, \sigma^2I)$$
where care is also taken for each perturbed image $x + \epsilon_i$ to
stay in the $[0, 1]^{H \times W \times 3}$ space, as we are averaging
across normalized saliency maps.
![Qualitative comparison of SmoothGrad and saliency maps, taken
from [@https://doi.org/10.48550/arxiv.1810.03292]. SmoothGrad gives
attribution maps that are more aligned with human expectations and more
interpretable. One has to be careful with confirmation bias, though
(Section [3.7.3](#sssec:eval){reference-type="ref"
reference="sssec:eval"}).](gfx/03_smoothgrad.pdf){#fig:smoothgrad
width="0.6\\linewidth"}
#### Summary of SmoothGrad
With "less local" gradients, we obtain $$\begin{aligned}
\nE_z\left[f(x + z + \delta e_i) - f(x + z)\right] &\overset{\text{Taylor}}{\approx} \nE_z\left[\delta \left\langle e_i, \nabla_x f(x + z) \right\rangle\right]\\
&= \delta \left\langle e_i, \nE_z\left[\nabla_x f(x + z)\right]\right\rangle\\
&= \delta \left(\nE_z\left[\nabla_x f(x + z)\right]\right)_i\\
&= \delta \nE_z\left[\frac{\partial}{\partial x_i}f(x + z)\right]
\end{aligned}$$ which measures the contribution of each pixel $i$ with
an infinitesimal[^30] counterfactual at multiple points $x + z$ around
$x$. This expands the originally very local computation of the gradient
to a slightly more global region around $x$.
### Integrated Gradients {#sssec:ig}
We will now go from local changes to the inputs (simple gradients) to
more and more global changes, in the hope that we obtain more sound
attribution scores this way. *Integrated gradients* is the middle ground
between local and global perturbations. It averages over local *and*
global perturbations instead of perturbing only around a single point.
We are linearly interpolating between two points in the input space.
In *Integrated Gradients*, introduced in the paper "[Axiomatic
Attribution for Deep
Networks](https://arxiv.org/abs/1703.01365)" [@sundararajan2017axiomatic],
we choose a base image that contains no information, $x^0$, and consider
our input image, $x$. We linearly interpolate between $x^0$ and $x$ in
the pixel space by slowly going from an image with no information
($x^0$, the *baseline image*) to the original image ($x$). We do the
gradient computation at every intermediate point along the line, then
average them (without weights, as the expectation is over a uniform
distribution). This *nearly* gives us the integrated gradients method:
$$\begin{aligned}
\nE_{\alpha \sim \mathrm{Unif}[0, 1]}\left[f(x^0 + \alpha(x - x^0) + \delta e_i) - f(x^0 + \alpha(x - x^0))\right] &\overset{\text{Taylor}}{\approx} \nE_{\alpha}\left[\left\langle \delta e_i, \nabla_x f(x^0 + \alpha(x - x^0)) \right\rangle\right]\\
&= \delta \left\langle e_i, \nE_\alpha\left[\nabla_x f(x^0 + \alpha(x - x^0))\right] \right\rangle\\
&= \delta \left\langle e_i, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle.
\end{aligned}$$ This estimates the pixel-wise contribution with an
infinitesimal counterfactual ($\delta$), averaged over an entire line
between the original input and the baseline image containing "no
information".[^31]
However, in the integrated gradients method, the contribution of pixel
$i$ is computed as
$$(x_i - x^0_i) \left\langle e_i, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle,$$
and we derived
$$\left\langle e_i, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle.$$
We seemingly multiply a nicely motivated formula by pixel differences.
However, the integrated gradients formulation is actually the "prettier"
formula, as it satisfies the *completeness axiom*. If we sum over the
contribution of all pixels $i$, we obtain $$\begin{aligned}
&\sum_i (x_i - x^0_i) \left\langle e_i, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle\\
&= \left\langle \sum_i (x_i - x^0_i)e_i, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle\\
&= \left\langle x - x^0, \int_0^1 \nabla_x f(x^0 + \alpha(x - x^0))\ d\alpha \right\rangle\\
&= \int_0^1 \left\langle \nabla_x f(x^0 + \alpha (x - x^0)), x - x^0 \right\rangle\ d\alpha\\
&= f(x) - f(x^0),
\end{aligned}$$ using the fundamental theorem of line integrals
(Definition [\[def:lineintegrals\]](#def:lineintegrals){reference-type="ref"
reference="def:lineintegrals"}) with
$r(\alpha) = x^0 + \alpha(x - x^0)$. In words: if we sum the pixel-wise
contributions of all pixels (integrated gradients in the $i$th
direction, multiplied by pixel differences), we get the difference
between the original prediction and the baseline prediction.
The authors of [@sundararajan2017axiomatic] argue that the completeness
axiom is a necessary condition for a sound attribution. This axiom
states that pixel-wise contributions for input $x$ must sum up to the
difference between the current model output $f(x)$ and baseline output
$f(x^0)$. Here, the baseline image is an image without "any
information". It represents the complete absence of signal. We measure
what kind of additional information we add per pixel on top of this
baseline image. The baseline image can be, e.g., an image consisting of
noise or a completely black image.[^32]
**Important downside of a black image baseline.** If we choose our
baseline to be a black image, black pixels (e.g., pixels of a black
camera) cannot be attributed at all, as $x_i - x^0_i = 0$. This does not
seem right. The black pixels of the camera are very likely also
contributing to the model prediction of the class camera. This is
different from the sparse linear model case:
$x = \sum_{i \in S}c_i\phi_i$. There, whenever an input feature $\phi_i$
was 0 (e.g., a black pixel), it contributed to the prediction with a
factor of 0, and this was the *GT contribution* of this feature to the
prediction. This was also a sound attribution. DNNs, however, are much
more complex, and we no longer have this GT correspondence. Here, it is
almost surely the case that the black pixels also contributed to the
model prediction of a black camera. This problem is known as the
"missingness bias" which we will further detail in later sections.
Generally, the choice of the baseline value can be quite important. In
many cases, random noise seems to be a better option. For the interested
reader, the [following
resource](https://distill.pub/2020/attribution-baselines/) describes
other options for the choice of the baseline.
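A minimal sketch of the integrated gradients computation (a simple Riemann-sum
approximation of the line integral, with a black baseline by default) could
look as follows; the step count and function signature are illustrative choices
of ours.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Integrated gradients sketch: average the input gradients along the
    straight line from a baseline x0 (black image by default) to x, then
    multiply by (x - x0). x has shape (1, 3, H, W)."""
    x0 = torch.zeros_like(x) if baseline is None else baseline
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (x0 + alpha * (x - x0)).detach().requires_grad_(True)
        model(xi)[0, target].backward()
        grads += xi.grad / steps            # Riemann-sum estimate of the integral
    return (x - x0) * grads                 # per-pixel attribution scores
```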
#### Results of Integrated Gradients
The paper [@sundararajan2017axiomatic] only provides an empirical
evaluation of the method's soundness. Example attribution maps are shown
in Figure [\[fig:integrated\]](#fig:integrated){reference-type="ref"
reference="fig:integrated"}. According to the results, the integrated
gradients method nicely attributes (focuses) to the actual object
regions, whereas gradients alone do not give us the "focus" we would
expect.
We as humans deem the results sensible (which coincides with the
'coherence with human expectations' property of a good explanation), as
we would also focus on the regions that the method highlights. This is,
however, a severe case of confirmation bias. We will discuss such biases
in Section [3.7.3](#sssec:eval){reference-type="ref"
reference="sssec:eval"}.
The attribution maps of the integrated gradients method are certainly
more *interpretable* than saliency maps. These show more continuous
regions; thus, the explanations are more selective. However, this is
just one of the evaluation criteria for a good explanation. The
soundness of the explanations is only measured qualitatively, even
though quantitative analysis would have been critical.
![image](gfx/img0.jpg){width="0.7\\columnwidth"}
![image](gfx/img1.jpg){width="0.7\\columnwidth"}
![image](gfx/img2.jpg){width="0.7\\columnwidth"}
![image](gfx/img3.jpg){width="0.7\\columnwidth"}
### Comparing Local and Global Perturbations -- Two Ways of Measuring Contribution
We consider two extremes in the domain of local explanation methods that
aim to give counterfactual explanations: those that make *local
perturbations* to the input $x$ and those that perturb the input
*globally*. We also consider an entire *spectrum* between these two
extremes. This spectrum is depicted in
Figure [3.11](#fig:spectrum){reference-type="ref"
reference="fig:spectrum"}.
![Spectrum of local explanation methods according to the nature of perturbations they
employ.](gfx/03_localglobal.pdf){#fig:spectrum width="0.8\\linewidth"}
Local perturbations make very local changes to the input and measure the
network's response.
- **Pro**: It has well-understood properties. (The concept of a
gradient.) It has no dependence on reference values.
- **Contra**: We only employ infinitesimal counterfactuals.
Global perturbations measure counterfactual responses by turning off
features entirely in various ways.[^33]
- **Pro**: This can lead to meaningful counterfactual analysis. This
is also a much more natural question to seek explanations for.
- **Contra**: Setting the reference values is hard. Such methods are
computationally heavy and need further assumptions/approximations to
make them efficient.
The method of integrated gradients gives a smooth interpolation between
local changes and turning off features completely. **Note**: These are
all still *local explainability methods*. Whether the perturbation is
local or global is an independent axis of variation.
### Local $=$ Global for (Sparse) Linear Models
Consider a linear model
$$x = \sum_{i \in S} c_i \phi_i \qquad |S| \ll m.$$ When responding to
local perturbations, the gradient of the output with respect to feature $i$ is
$c_i$. When responding to global perturbations, the effect of turning
off feature $i$ is $c_i \phi_i$. (Here, we actually set the feature to
zero.)
Therefore, the spectrum in
Figure [3.11](#fig:spectrum){reference-type="ref"
reference="fig:spectrum"} collapses into a single point for linear
models: we do not have any distinction between the two methods. We often
try to turn some complex non-linear models into linear ones locally.
Therefore, it is of crucial importance to understand linear models.
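A tiny numeric check of this collapse (all names are our own): for a linear model, extrapolating the gradient over the full change of a feature gives exactly the effect of turning that feature off.

``` python
import numpy as np

c = np.array([2.0, -1.0, 0.5])        # coefficients c_i of the sparse linear model
phi = np.array([1.0, 3.0, -2.0])      # input features phi_i

f = lambda z: c @ z                    # model output

for i in range(len(phi)):
    z = phi.copy()
    z[i] = 0.0                         # turn feature i off (global perturbation)
    # The global effect equals the gradient times the feature change: c_i * phi_i.
    print(c[i] * phi[i], f(phi) - f(z))
```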
### Zintgraf et al.: Inpainting + Black-Box Computation
The [Zintgraf et al.
(2017)](https://arxiv.org/abs/1702.04595) [@https://doi.org/10.48550/arxiv.1702.04595]
attribution method employs global perturbations -- they measure
missingness by imputation. It uses the "naive way" of computing the
forward pass several times for computing counterfactual attributions.
The proposed *prediction difference analysis* reflects the fundamental
notion of a counterfactual explanation very well. We want to obtain
$$\begin{aligned}
P(c \mid x_{\setminus i}) &= \sum_{x_i} P(x_i \mid x_{\setminus i})\underbrace{P(c \mid x_{\setminus i}, x_i)}_{\text{trained network}}\\
&= \nE_{P(x_i \mid x_{\setminus i})}\left[P(c \mid x_{\setminus i}, x_i)\right],
\end{aligned}$$ which is the probability of class $c$ according to the
network after removing feature $i$. As we do not know the true posterior
$P(x_i \mid x_{\setminus i})$[^34] over the missing feature, we
approximate it using an inpainting model
$$Q_{\mathrm{inpainter}}(x_i \mid x_{\setminus i}).$$ Therefore,
$$\begin{aligned}
P(c \mid x_{\setminus i}) &\approx \nE_{Q_{\mathrm{inpainter}}(x_i \mid x_{\setminus i})}\left[P(c \mid x_{\setminus i}, x_i)\right]\\
&\approx \frac{1}{M} \sum_{m = 1}^M P(c \mid x_{\setminus i}, x^{(m)}_i)
\end{aligned}$$ where
$x^{(m)}_i \sim Q_{\mathrm{inpainter}}(x_i \mid x_{\setminus i})$.
Finally, we calculate the counterfactual before and after removing
feature $i$ using the *weight of evidence* value:
$$\operatorname{WE}_i(c \mid x) = \log_2(\operatorname{odds}(c \mid x)) - \log_2(\operatorname{odds}(c \mid x_{\setminus i})),$$
where
$$\operatorname{odds}(c \mid x) = \frac{P(c \mid x)}{1 - P(c \mid x)}.$$
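A minimal sketch of this estimate for a single patch $i$ follows; the `model` and `inpainter` interfaces are hypothetical placeholders of ours, not the authors' code:

``` python
import numpy as np

def weight_of_evidence(model, inpainter, x, patch_idx, target_class, n_samples=10):
    """Estimate WE_i(c | x) by Monte Carlo over inpainted patch values."""
    p_full = model(x)[target_class]

    # Monte Carlo estimate of P(c | x \ i), with the inpainter as the conditional.
    probs = []
    for _ in range(n_samples):
        x_imputed = x.copy()
        x_imputed[patch_idx] = inpainter.sample(x, patch_idx)
        probs.append(model(x_imputed)[target_class])
    p_without = np.mean(probs)

    odds = lambda p: p / (1.0 - p)
    return np.log2(odds(p_full)) - np.log2(odds(p_without))
```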
![Illustration of the conditional independence assumptions used by
Zintgraf et al. to make the conditioning tractable. A patch of size
$k \times k$ only depends on the surrounding pixels from an $l \times l$
patch that contains the $k \times k$ patch. Figure taken
from [@https://doi.org/10.48550/arxiv.1702.04595].](gfx/box_visualization.png){#fig:zintgraf2
width="0.6\\linewidth"}
::: definition
Mixture of Gaussians A Mixture of Gaussians (MoG) distribution with $M$
components is of the form
$$P(x) = \frac{1}{M}\sum_{m = 1}^M \cN(x; \mu_m, \Sigma_m)$$ where
$\mu_m$ and $\Sigma_m$ are the mean vector and covariance matrix of the
$m$th component, respectively. The MoG distribution is one of the
simplest *multimodal* distributions.
:::
#### Remarks for Zintgraf et al.
In Zintgraf et al., the features do not have to correspond to pixels. They
correspond to image patches in this work.
The weight of evidence is a signed value, as we consider evidence *for*
and *against* the prediction. When $\mathrm{WE}_i$ is negative for
sliding window (image patch) $i$, it is evidence *against* the model's
prediction. It is also often evidence *for* the second-highest scoring
class.
To compute the attribution scores, we could use any
difference/comparison between $P(c \mid x)$ and
$P(c \mid x_{\setminus i})$. The authors argue that using log odds is
well-founded.
It is costly to do this procedure for all features $i$. For each image,
one needs to compute $N$ forward operations for the main model + $N$
inpainting computations, where
$$N = \text{number of sliding windows} \times \text{number of samples for inpainting}.$$
The authors propose two methods for estimating the true inpainting
distributions $P(x_i \mid x_{\setminus i})$. The first one is to assume
*independence* of feature $x_i$ on other features $x_{\setminus i}$. If
we make such an assumption, we can consider the empirical distribution
of feature $x_i$ from the dataset, i.e., we replace the feature value
with a different one sampled from the dataset at random. By sampling
more possible feature values from the dataset (at the same image
location), we Monte Carlo estimate the expectation. As the authors also
state, this is a crude approximation. The second proposal of the paper
is to not assume independence but to suppose that an image patch $x_i$
of size $k \times k$ *only depends on the surrounding pixels*
$\hat{x}_i \setminus x_i$, where $\hat{x}_i$ is an image patch of size
$l \times l$ that contains $x_i$. An illustration is given in
Figure [3.12](#fig:zintgraf2){reference-type="ref"
reference="fig:zintgraf2"}. To speed things up, the authors used a
straightforward method for inpainting: a multivariate Gaussian
inpainting distribution in pixel space, [fit on dataset
samples](https://github.com/lmzintgraf/DeepVis-PredDiff/blob/02649f2d8847fc23c58f9f2e5bcd97542673293d/utils_sampling.py#L146).
In particular, the authors calculate the empirical mean $\mu_i$ and
empirical covariance $\Sigma_i$ of the large patch $\hat{x}_i$ on the
entire dataset, using the simplifying assumption that the distribution
of the large patch $\hat{x}_i$ (i.e., the *joint* distribution of the
window we want to sample from and the surrounding pixels) is a Gaussian:
$P(\hat{x}_i) = \cN(\hat{x}_i; \mu_i, \Sigma_i)$. Finally, the authors
use the well-known conditioning formula for Gaussians to obtain
$P(x_i \mid \hat{x}_i \setminus x_i)$. Under their assumptions, we have
$$P(x_i \mid \hat{x}_i \setminus x_i) = P(x_i \mid x_{\setminus i}).$$
This is probably the simplest form of inpainting one could think of.
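A sketch of this conditional Gaussian sampling step, using the standard conditioning formula for jointly Gaussian variables (the index bookkeeping and names are our own assumptions):

``` python
import numpy as np

def conditional_gaussian_sample(mu, Sigma, idx_in, idx_out, x_out, rng):
    """Sample the inner k x k patch (indices idx_in) given the surrounding
    pixels (indices idx_out), under a joint Gaussian N(mu, Sigma) over the
    flattened l x l patch."""
    mu_in, mu_out = mu[idx_in], mu[idx_out]
    S_ii = Sigma[np.ix_(idx_in, idx_in)]
    S_io = Sigma[np.ix_(idx_in, idx_out)]
    S_oo = Sigma[np.ix_(idx_out, idx_out)]

    # Conditional mean and covariance of the inner patch given the outer ring.
    cond_mu = mu_in + S_io @ np.linalg.solve(S_oo, x_out - mu_out)
    cond_Sigma = S_ii - S_io @ np.linalg.solve(S_oo, S_io.T)

    return rng.multivariate_normal(cond_mu, cond_Sigma)
```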
Other possibilities for the inpainting distribution: One could use a
Mixture of Gaussians (MoG) or diffusion
models [@https://doi.org/10.48550/arxiv.2006.11239] for inpainting.
However, then it would take even longer to compute the explanation for a
single image. There is always a trade-off between complexity and
quality.
The method of Zintgraf et al. is a *local* explanation method (as it only gives
an explanation for a single image) but a *global* counterfactual method
(because the inpainter is allowed to predict anything, not just very
small perturbations compared to the original image features). Note,
however, that the inpainter is only used to replace small patches -- it
is still spatially local.
#### Results of Zintgraf et al.
![Results of Zintgraf et al.
(2017) [@https://doi.org/10.48550/arxiv.1702.04595], taken from the
paper. The attribution maps look surprisingly hard to interpret.
Different architectures seem to look at notably different parts of the
input image. Still, maybe it *is* the genuine contribution of each
feature to the network's prediction. We should not rely too much on
human intuition, as that might harm our belief about the soundness of
the method. It is hard to say whether this is right or wrong without a
quantitative soundness evaluation.](gfx/03_zintgraf.png){#fig:zintgraf
width="\\linewidth"}
We show a few attribution maps in
Figure [3.13](#fig:zintgraf){reference-type="ref"
reference="fig:zintgraf"}. We argue that this is the most promising
solution for counterfactual attribution, but it is also the most
computationally heavy. Let us now give some pros and cons of the method.
- **Pro**: The method performs a global counterfactual analysis
because of inpainting. It is also one of the few datatype-agnostic
methods -- it can be applied to image, text, and tabular data inputs
as well, given that an inpainter is available.
- **Contra**: The method is way too complex to be practical. It also
depends on the inpainter, which opens a new can of worms.
### LIME: Fitting a Sparse Linear Model {#sssec:lime}
LIME, introduced in the paper "["Why Should I Trust You?": Explaining
the Predictions of Any
Classifier](https://arxiv.org/abs/1602.04938)" [@https://doi.org/10.48550/arxiv.1602.04938],
has been a popular method for more than five years and is a bit more
practical than the method of Zintgraf et al. It builds a
surrogate model that is explainable by definition. The general
formulation is $$\xi(x) = \argmin_{g \in G} \cL(f, g, \pi_x) + \Omega(g)$$
where $f$ is the original model, $g$ is the surrogate model, $G$ is the
set of possible surrogate models, $\pi_x$ is a measure of distance from
$x$ used to weight loss terms, and $\Omega$ is a measure of complexity.
The authors make the following choices: $G$ should be a set of sparse
linear models, and $\Omega$ should be a sparsity regularizer for the
linear model $g$.
By optimizing the objective function, we try to make $g$ as close to $f$
as possible *in the vicinity* of $x$, the test input of interest,
weighted by $\pi_x$, while also keeping it sparse.
In LIME for images, we define
- $x$ as the original image,
- $x'$ as the interpretable version of the original image: a binary
indicator vector whether superpixel $i$ is turned on or off (grayed
out). Here, all entries are ones.
- $z'$ as a sample around $x'$ by drawing non-zero elements of $x'$
uniformly at random. The number of draws is also uniformly sampled.
- $z$ as $z'$ transformed back to an actual image,
- $f(z)$ as the probability that $z$ belongs to the class being
explained, and
- $\cZ$ as the dataset of $(z, z')$ pairs.
We specify the sparse linear function $g$ formally by
$$g(z') = w_g^\top z'$$ and the sparsity constraint by
$$\Omega(g) = \infty\bone\left(\Vert w_g \Vert_0 > K\right),$$ i.e., $g$
should have at most $K$ non-zero weights. The function fitting takes
place around input $x$. We let $g$ follow $f$ via the $L_2$ loss on the
function outputs
$$\cL(f, g, \pi_x) = \sum_{(z, z') \in \cZ} \pi_x(z) \left(f(z) - g(z')\right)^2,$$
with $\pi_x$ making sure that we focus on fitting $g$ to $f$ only in the
vicinity of $x$ (we only aim for local faithfulness):
$$\pi_x(z) = \exp\left(-D(x, z)^2 / \sigma^2\right).$$ Here $D$ is the
cosine distance from $x$ to $z$ if the input is text, or the $L_2$
distance for images.
![Toy example of LIME being fit to the bold red plus data point. The
brown plus and blue circle samples are the sampled instances in the
vicinity of the input being explained, $(z, z') \in \cZ$. Their size
encodes their similarity with the original input, as given by
$\pi_x(z)$. The background contours encode the decision boundary of the
complex model $f$, whereas the dashed line encodes the decision boundary
of $g$. The surrogate model is locally faithful to the complex model.
Figure taken
from [@https://doi.org/10.48550/arxiv.1602.04938].](gfx/03_lime.png){#fig:lime
width="0.4\\linewidth"}
An example of the fitting procedure is given in
Figure [3.14](#fig:lime){reference-type="ref" reference="fig:lime"}. The
linear model learns to respect local changes of $f$. This is close to
taking the gradient, but we get a sparser linearization than that, which
is more interpretable.
The workflow with LIME for images can be explained as follows.
1. We pick an input $x$ and the class to explain.
2. We train a linear model on top of the superpixel features.
3. We extract the surrogate model weights and check each superpixel's
contribution.
4. The superpixel corresponding to the largest weight contributes most
to the class prediction in question.
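A minimal sketch of this workflow for images follows. The superpixel segmentation via `skimage.segmentation.slic`, the ridge-regression surrogate (instead of the paper's K-sparse fit), and the crude distance proxy are simplifications on our part:

``` python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def lime_image(model, x, target_class, n_samples=1000, sigma=0.25, gray=0.5):
    """x: image with values in [0, 1], shape (H, W, 3); model returns class probabilities."""
    segments = slic(x, n_segments=50)            # superpixel label per pixel
    labels = np.unique(segments)

    Z_prime = np.random.randint(0, 2, size=(n_samples, len(labels)))  # binary samples z'
    preds, weights = [], []
    for z_prime in Z_prime:
        z = x.copy()
        for j, lab in enumerate(labels):
            if z_prime[j] == 0:
                z[segments == lab] = gray        # "turn off" superpixel: replace with gray
        preds.append(model(z)[target_class])
        d = 1.0 - z_prime.mean()                 # crude distance proxy (fraction turned off)
        weights.append(np.exp(-d ** 2 / sigma ** 2))

    # Weighted linear surrogate g(z') = w^T z'.
    g = Ridge(alpha=1.0).fit(Z_prime, preds, sample_weight=weights)
    return g.coef_                               # one attribution score per superpixel
```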
The authors do not only test the method on the actual prediction of the
network. They deliberately come up with confusing images with multiple
possible classes and try to explain the prediction of the network for
the top $k = 3$ predictions. This is shown in
Figure [\[fig:inception\]](#fig:inception){reference-type="ref"
reference="fig:inception"}.
::: figure*
:::
Let us discuss the pros and cons of the method.
- **Pro**: The results are interpretable by design.
- **Contra**: (1) We only have a local sparse linear approximation
that can be very different from the DNN. (2) The method is
expensive, as a sparse linear model has to be fit for all images we
want to be explained. (3) The reference image is assumed to be a
gray image, an often-used representation of missingness. We discuss
in [3.5.11](#sssec:missing){reference-type="ref"
reference="sssec:missing"} that this might be suboptimal. (4) The
method is not stable. The given explanations (coefficients) are not
continuous in the input and are, therefore, not general. In
particular, @alvarezmelis2018robustness show in the paper "[On the
Robustness of Interpretability
Methods](https://arxiv.org/abs/1806.08049)" [@alvarezmelis2018robustness]
that even explaining test instances that are very close/similar to
each other leads to notably different results.
**Note 1**: The reference cannot be seen in the formulation but rather
in how we construct the $z$ samples:
$$z'_i = 0 \iff \text{superpixel \(i\) is gray}.$$ Thus, we still have
an actual 0 value in "interpretable space"; the related term does not
contribute to the sum in the linear model.
**Note 2**: When we give an input to a DNN, we typically subtract the
mean of the training set to center the input. So an original image that
becomes a 0 input for a DNN is usually gray (the mean of the training
set samples, close to constant gray for a versatile dataset). For
ImageNet (and many other vision datasets), the standard practice is to
[subtract the
mean](https://github.com/huggingface/pytorch-image-models/blob/da6644b6ba1a9a41f2815990111056bbf0b05c8e/timm/data/loader.py#L132).
::: information
Surrogate Model The LIME paper uses $f(x)$ to denote the probability
that $x$ belongs to the class being explained. The surrogate $g$ is,
however, defined to be a *linear* model that can, in principle, predict
any real number and not just probabilities. We could have two other
options for defining the surrogate model.
1. Use the *logit* values of the classifier as the targets for the
surrogate model. This way, we are matching a real number to another
(unconstrained) real number, which seems more natural. However, the
coefficients of the surrogate model do not correspond to the changes
in the model *output* anymore, but rather to the changes in the
logits that are more disconnected from the model's final decision
than its predicted probabilities.
2. Constrain the surrogate model's outputs to the $(0, 1)$ range, e.g.,
by using a logistic sigmoid activation function. This way, we could
use any classification loss to train the surrogate model -- we are
matching probabilities to probabilities. The downside is that the
surrogate model outputs are not *linearly* related to the outputs
anymore, and the attribution scores become less interpretable.
:::
### SHAP (SHapley Additive exPlanations)
The setup of the SHAP method, introduced in the paper "[A Unified
Approach to Interpreting Model
Predictions](https://arxiv.org/abs/1705.07874)" [@https://doi.org/10.48550/arxiv.1705.07874],
is very similar to that of the LIME method in terms of the knowledge
about the system and the input/output format. In particular, we assume a
black-box system with a binary input vector $x \in \{0, 1\}^N$ that
gives us scores $f(x) \in \nR$ for a particular class $c$. We want to
assign the contribution of each feature $i$ to the prediction.
The input is represented by a given set of features. The binary
membership indicator $x$ is a constant one vector: in the original
input, all features are present. For perturbed inputs $z \subseteq x$,
zeros and ones indicate whether the corresponding feature is present or
turned off in the perturbed image. As we have binary input features, we
have a clear interpretation of turning on (1) and turning off (0)
features. For images, this is usually a superpixel representation, where
the constant one vector is the full image, and the subsets specify which
superpixels we switch off (i.e., replace with some base value) and on.
::: definition
Combination The number of possible ways to choose $k$ objects from $n$
objects is $$\binom{n}{k} = \frac{n!}{k!(n - k)!}.$$
:::
SHAP determines the individual contribution of each feature $i$ to the
prediction $f(x)$ using the notion of Shapley
values [@shapley1953value]. The value is defined as
$$\phi_{f, x}(i) = \nE_{z \subseteq x: i \in z}\left[f(z) - f(z - i)\right].$$
This value gives the average contribution of feature $i$ in all subset
cases to the output of network $f$. $z$ is a subset of $x$ that must
include $i$. For every subset, we analyze the effect of discarding
feature $i$. This can be thought of as a set function version of the
gradient of $f$ at $x$ with respect to feature $i$. The original input $x$ is always
treated as $[1, 1, \dots, 1]$ (all features are turned on), and an
example of a valid sample $z$ is $[0, 1, 0, 0, 1, 0]$ for index $i = 2$
if $x \in \{0, 1\}^6$. (The indexing starts from $1$.) The possible
subsets $z$ are thus any binary vector of the same dimensionality as
$x$. It also follows from the formulation that Shapley values are
*signed*, unlike, e.g., saliency maps. Similarly to LIME, we give an
attribution score to each feature (e.g., superpixel) turning them on/off
(global counterfactual explanation).
**Note**: The expectation in the SHAP attribution values is *not*
uniform across all possible $z$s that are subsets of $x$. The
expectation follows the procedure below:
1. Sample subset size $m$ from $\text{Unif}\{1, \dots, |x|\}$.[^35]
2. Sample a subset $z$ of size $m$ containing feature $i$ with equal
probabilities.
Not every subset has the same probability of being picked, because the
number of subsets differs across sizes. If $|x| = 10$, then
$\binom{9}{4} \gg \binom{9}{9}$, meaning particular small or large
subsets are much more likely than particular medium-sized ones.
**Example**: Let us consider features as image patches. Suppose that
feature $i$ indicates the face region of the cat. To calculate the
Shapley value corresponding to feature $i$, we average the function
output for all possible inputs with $i$ switched on (other parts are
free to vary), then we *subtract* the average function output for all
possible inputs with $i$ switched off. The example is illustrated in
Figure [3.15](#fig:shapcat){reference-type="ref"
reference="fig:shapcat"}.
![Illustration of the computation of Shapley values. This is equivalent
to the formulation above because the expectation is
linear.](gfx/03_shapcat.pdf){#fig:shapcat width="\\linewidth"}
We rewrite the expectation as $$\begin{aligned}
\phi_{f, x}(i) &= \nE_{z \subseteq x: i \in z}\left[f(z) - f(z - i)\right]\\
&= \frac{1}{|x|} \sum_{z \subseteq x: i \in z} \binom{|x| - 1}{|z| - 1}^{-1}\left[f(z) - f(z - i)\right]\\
&= \sum_{z \subseteq x: i \in z} \frac{(|z| - 1)!(|x| - |z|)!}{|x|!}\left[f(z) - f(z - i)\right]
\end{aligned}$$ by leveraging that the probability of sampling $z$ is
equal to the probability of subset size $|z|$ times the probability of
choosing a particular subset of size $|z|$.
#### SHAP also satisfies the completeness axiom.
::: claim
If we sum over the Shapley values for all features $i$, then we get the
difference of the function value for the input of interest $x$ and the
prediction for the baseline $0$:
$$\sum_{i} \phi_{f, x}(i) = f(x) - f(0).$$
:::
::: proof
*Proof.*
$$\sum_{i} \phi_{f, x}(i) = \sum_i \sum_{z: i \in z \subseteq x} \frac{(|z| - 1)!(|x| - |z|)!}{|x|!}\left[f(z) - f(z - i)\right].$$
Here, $f(z)$ appears $|z|$ times ($|z| \in \{1, \dots, |x|\}$)
with a *positive* sign, once for each feature $i$ in $z$. Its
coefficient is always $$\frac{(|z| - 1)!(|x| - |z|)!}{|x|!},$$ thus
$|z|$ times the coefficient gives $$\binom{|x|}{|z|}^{-1}.$$
Similarly, $f(z)$ appears $|x| - |z|$ times
($|z| \in \{0, \dots, |x| - 1\}$) with a *negative* sign, once for each
feature $i$ *not* in $z$. Its coefficient is always
$$\frac{|z|!(|x| - |z| - 1)!}{|x|!}$$ as we consider $|z| \gets |z| + 1$
in the formula, thus $|x| - |z|$ times the coefficient gives
$$\binom{|x|}{|z|}^{-1}.$$
The terms of the previous two paragraphs obviously cancel whenever
$z \notin \{0, x\}$.
For $z = 0$, $f(z)$ appears $|x|$ times with a *negative* sign. Its
coefficient is always $$\frac{0!(|x| - 1)!}{|x|!} = \frac{1}{|x|},$$
thus, $|x|$ times the coefficient gives $1$. Therefore, the term gives
$-f(0)$ in the sum.
For $z = x$, $f(z)$ appears $|x|$ times with a *positive* sign. Its
coefficient is always $$\frac{(|x| - 1)!0!}{|x|!} = \frac{1}{|x|},$$
thus, $|x|$ times the coefficient gives $1$. Therefore, the term gives
$+f(x)$ in the sum.
Finally, by summing all terms up, we indeed obtain
$\sum_i \phi_{f, x}(i) = f(x) - f(0)$. ◻
:::
**Note**: The $0$ vector can mean arbitrary missingness in the pixel
space, just like in LIME. For integrated gradients, we had a very
similar result: When we sum over all contributions from every pixel, we
obtain $f(x) - f(x^0)$. The difference is that we are not in the pixel
space with SHAP.
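For small $|x|$, one can compute exact Shapley values by brute force and verify the completeness property numerically. A self-contained sketch with a toy set function of our own choosing:

``` python
import itertools
import math

def exact_shapley(f, n):
    """Exact Shapley values of the set function f over features {0, ..., n - 1}."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                z = set(subset) | {i}
                w = math.factorial(len(z) - 1) * math.factorial(n - len(z)) / math.factorial(n)
                phi[i] += w * (f(z) - f(z - {i}))
    return phi

# Toy, non-additive set function on three features.
f = lambda s: len(s) ** 2 + (2.0 if {0, 1} <= s else 0.0)

phi = exact_shapley(f, 3)
print(sum(phi), f({0, 1, 2}) - f(set()))   # the two values agree (completeness)
```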
#### SHAP satisfies the strong monotonicity property.
::: definition
Strong Monotonicity Attribution values $\phi$ satisfy the strong
monotonicity property if, for every function $f$ and $f'$, binary input
$x$ and feature $i$, the following holds:
$$f(z) - f(z - i) \le f'(z) - f'(z - i)\quad \forall z \subseteq x \text{ s.t. } i \in z \quad \implies \quad \phi_{f, x}(i) \le \phi_{f', x}(i).$$
In words, if the impact of deleting feature $i$ is more significant for
$f'$ for all subsets of $x$ containing $i$, then the attribution value
for $f'$ on feature $i$ must be greater than that for $f$.
:::
The fact that SHAP satisfies the strong monotonicity property follows
trivially from its formulation. This seems to be a very reasonable
property[^36] but should not be deemed crucial. Below, we will see that
Shapley values are special for measuring contribution.
**Uniqueness**: The attribution values $\phi$ of SHAP are the only ones
that satisfy both the strong monotonicity and the completeness
axiom [@young1985monotonic]. The theorem is well-known in the game
theory literature. This roughly translates to: "If we want these nice
properties, we must use SHAP." Thus, *SHAP is sufficient and necessary
for these two properties to hold jointly.* It is therefore significant
that the coefficients of the Shapley values are exactly these.
#### Why do we want these properties?
Why are strong monotonicity and completeness useful from an
applicability point of view? We do not have a strong argument for why
this should be the "holy grail" for attribution. The paper also does not
give a strong reason why these properties should be strongly connected
to any real-world properties. Such works that are built upon axiomatic
foundations that introduce some intuitive requirements (e.g., strong
monotonicity or completeness axioms) usually conclude that the only
method that satisfies all the axioms is theirs. But they usually take
*different axioms*, which results in different formulations. The
integrated gradients method is also a unique formulation that satisfies
a different set of axioms [@sundararajan2017axiomatic]. Everything
depends on how we choose these axioms. We do not think that any of the
axioms are *absolute necessities*. They are just one way to connect
possible real-world needs to an actual explanation method we wish to
have.
#### Using SHAP in practice
We approximate the Shapley values by sampling the expectation at random,
according to the coefficients (choose size uniformly, choose a set of
that size uniformly). This avoids traversing through the combinatorial
number of subsets but introduces large variance in the Monte Carlo
approximation, leading to a decreased trustworthiness of the attribution
scores.
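A sketch of this sampling procedure for a single feature $i$ (here $f$ is a set function, e.g., the model evaluated on the image with the switched-off superpixels replaced by the base value; all names are ours):

``` python
import random

def shapley_mc(f, n, i, n_samples=1000):
    """Monte Carlo estimate of the Shapley value of feature i for set function f."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for _ in range(n_samples):
        size = random.randint(1, n)                       # subset size, uniform over {1, ..., n}
        z = set(random.sample(others, size - 1)) | {i}    # random subset of that size containing i
        total += f(z) - f(z - {i})
    return total / n_samples
```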
Let us consider the pros and cons of SHAP.
- **Pro**: Similarly to LIME, the results are interpretable by design.
The method also gives global counterfactual analysis.
- **Contra**: (1) We have to use efficient approximations of the
Shapley values to keep tractability. Depending on the variance of
our approximations, the results we obtain this way might not be
faithful to the true Shapley values. (2)
@alvarezmelis2018robustness [@alvarezmelis2018robustness] show also
for SHAP that the attribution scores can change significantly in
small input neighborhoods. (3) Just like in LIME, the reference
image is assumed to be a gray image (the mean of the training
distribution) in the paper. This might have unfavorable
implications, which we will further discuss in
Section [3.5.11](#sssec:missing){reference-type="ref"
reference="sssec:missing"}.
### Defining a Missing Feature {#sssec:missing}
We needed a good definition of "no information" for the methods
discussed previously.
In *integrated gradients*, we use black pixels as missing features,
which is empirically justified in [@sundararajan2017axiomatic]. This
gradually kills information by dimming and considers the effect for each
pixel integrated through the procedure.
*Zintgraf et al.* use inpainted values for missing features, which is
perhaps a more sensible way to encode missingness than any fixed color.
*LIME* takes the mean pixel values to indicate missingness (which
corresponds to gray pixels for most datasets of natural images).
In *SHAP*, missingness is indicated the same way as in LIME. **Note**:
Completeness holds when we consider a 0 *vector*. It can correspond to
*any* image. The authors equate that to a gray image, but one could make
different choices, such as black/white images or Gaussian noise. The
choice of what the 0 vector encodes could also be made arbitrarily for
LIME. The integrated gradients method also gives a freedom of choice in
designing the baseline image. Usually, the choice is made based on
results from cross-validation or qualitative analysis (the latter often
being flawed). It is also important to remark that neither of the
methods is restricted to images, and we have to reason about the
definition of "missingness" for other kinds of data in the same way. For
example, for tabular data, both a zero value and the mean value of the
dataset make intuitive sense, but they might give different results.
As discussed previously, we consider inpainting to be the most promising
approach to defining missingness. The problem with fixed missing feature
values is that pixels with exactly these values can also carry
information (e.g., black pixels on a car or gray pixels on a house,
illustrated in
Figure [3.16](#fig:missingness2){reference-type="ref"
reference="fig:missingness2"}): such pixels might matter a lot for the
prediction but might not be attributed at all. Such pixel values can
appear in natural images, yet they will automatically have a zero
attribution value in integrated gradients. This is, of course,
problematic. The problem can even arise in LIME or SHAP, though perhaps
not as severely as in integrated gradients: If a particular superpixel
has the same constant value as the mean pixel, turning it on or off does
not have any effect, so the attribution value is necessarily zero.
Using constant pixel values to encode missingness also causes problems
when considering soundness evaluation methods such as
remove-and-classify, introduced in
Section [3.7.7](#sssec:rac){reference-type="ref" reference="sssec:rac"},
as it can introduce missingness bias, discussed in
Section [3.7.8](#sssec:missingness_bias){reference-type="ref"
reference="sssec:missingness_bias"}.
![Example of black and gray colors -- popular choices for encoding
missingness -- conveying information in images. Choosing *any* fixed
color to encode missingness is questionable. The images were generated
by Stable
Diffusion [@https://doi.org/10.48550/arxiv.2112.10752].](gfx/03_missingness2.png){#fig:missingness2
width="0.7\\linewidth"}
::: information
Inpainting Models Language models are also often inpainting models
(context prediction self-supervised learning (SSL) objective). To get
performant solutions, one needs a huge model. The same goes for
diffusion-based inpainting models. They are also huge pre-trained models
that can synthesize more realistic images. Inpainting is not as easy as
it sounds.
:::
### Meaningful Perturbations
Now, we discuss the "[Interpretable Explanations of Black Boxes by
Meaningful Perturbation](https://arxiv.org/abs/1704.03296)" [@Fong_2017]
paper that introduces *meaningful perturbations*. Instead of different
colors encoding missing features, one can also use *learned blurring*.
Image blurring can erase information without inadvertently introducing
new information. (However, for humans, it might not be enough. Considering an image
of a person playing the flute, even if we blur the flute out, a human
still knows what is in their hands. However, in this paper, the authors
demonstrated that DNNs do not work like this, as shown in
Figure [3.17](#fig:learnedblur){reference-type="ref"
reference="fig:learnedblur"}.)
![Example of a learned blur that results in diminished predictive
performance, taken
from [@Fong_2017].](gfx/03_learnedblur.pdf){#fig:learnedblur
width="0.8\\linewidth"}
The authors are optimizing for the *blur mask*. After the optimization,
the final blurred region is ideally the most important region for
predicting the corresponding label. The optimization problem is
$$m^* = \argmin_{m \in [0, 1]^\Lambda}\lambda \Vert \bone - m \Vert_1 + f_c(\Phi(x_0; m))$$
where
- $m$: A continuous relaxation of a binary mask that associates each
pixel $u \in \Lambda$ with a scalar value $m(u) \in [0, 1]$.
- $m(u) = 1$: We do not perturb the pixel at all.
- $m(u) = 0$: We perturb the pixel (region) as much as possible.
- $m^*$: Mask that erases most information from the image while also
being sparse.
- $\Vert \bone - m \Vert_1$: Measures the area of the erased region.
As $m$ is continuous (smooth), the magnitude matters. $L_1$
regularization encourages the mask to be sparse. This can be
considered as a relaxation of the NP-hard problem using
$\lambda \Vert \bone - m \Vert_0$ plus $m \in \{0, 1\}^\Lambda$.
- $f_c$: Classifier score for class $c$. We want to minimize this in a
regularized fashion.
- $\Phi(x_0; m)$: The perturbation operator, e.g., blurring of
original image $x_0$ according to the mask $m$:
$$\left[\Phi(x_0; m)\right](u) = \int g_{\sigma_0 \cdot (1 - m(u))}(v - u) \cdot x_0(v)\ dv$$
where $\sigma_0 = 10$ is the maximum isotropic standard deviation of
the Gaussian blur kernel.
The objective is fully differentiable in $m$; one can optimize it end-to-end with
Gradient Descent (GD).
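A simplified sketch of this optimization in PyTorch follows. Instead of the spatially varying Gaussian kernel of the paper, we blend the original image with a single heavily blurred copy according to the mask; this simplification, the hyperparameters, and the `model` interface are our assumptions:

``` python
import torch
import torchvision.transforms.functional as TF

def learn_blur_mask(model, x, target_class, lam=0.05, steps=300, lr=0.1):
    """x: image tensor of shape (1, 3, H, W); model returns class scores."""
    blurred = TF.gaussian_blur(x, kernel_size=21, sigma=10.0)  # maximally perturbed image

    m_logit = torch.zeros(1, 1, x.shape[2], x.shape[3], requires_grad=True)
    opt = torch.optim.Adam([m_logit], lr=lr)

    for _ in range(steps):
        m = torch.sigmoid(m_logit)                 # continuous mask in [0, 1]
        x_pert = m * x + (1.0 - m) * blurred       # m(u) = 1 keeps, m(u) = 0 blurs
        score = model(x_pert)[0, target_class]
        loss = lam * (1.0 - m).abs().sum() + score # sparsity term + class score
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(m_logit).detach()
```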
#### Use cases of meaningful perturbations
After optimization, we can look at the learned continuous mask to see
what region(s) have a large effect. This can unveil very interesting
properties of our model. For example, to determine whether chocolate
sauce is in the image, our model might be looking more at the spoon than
the actual sauce (meaning the score decreases more for blurring this
region), as depicted in
Figure [\[fig:chocolate_sauce\]](#fig:chocolate_sauce){reference-type="ref"
reference="fig:chocolate_sauce"}. Thus, we can even detect spurious
correlations with the method. ("Did my model learn the wrong
association?") After detection, we can fix them. This is much more
direct than the counterfactual evaluation introduced in
Section [2.10](#sssec:identify){reference-type="ref"
reference="sssec:identify"}.
![image](gfx/chocolate_masked_example.pdf){width="0.9\\linewidth"}\
![image](gfx/truck_masked_example.pdf){width="0.9\\linewidth"}
Considering the inherent linearity of various XAI methods
([3.6](#ssec:linearization){reference-type="ref"
reference="ssec:linearization"}), this method does not explicitly give
rise to a linear approximation of $f(x)$, but it might be possible to
obtain a linear formula in the *transformed* attributions $T(m)$ by
embedding them in a non-linear fashion and still keeping them
interpretable. Another possibility is that the method linearizes the
model's prediction, just not in the attributions but in another
property.
### Testing with Concept Activation Vectors (TCAV) {#sssec:tcav}
Let us go beyond the previous low-level features. We look into
higher-level and human-understandable ones because interpretable
features are more relevant for most real-life applications. Saliency
maps use the gradient directly to attribute to individual pixels. If we
look at saliency maps, we usually gain no information about where the
important object/region is for a particular label. They are simply too
noisy to read, to trust, and to use for understanding a network's prediction. Even
if we choose other pixel attribution methods, these are not
interpretable features and do not allow us to relate to more abstract
*concepts*. What we really want to
ask [@https://doi.org/10.48550/arxiv.1711.11279]:
- "Was the model looking at the cash machine or the person to make the
prediction?"
- "Did the 'human' concept matter?"
- "Did the 'glass' or 'paper' concept matter?"
- "Which concept mattered more?"
- "Is this true for all other predictions of the same class?"
These are much more semantic questions than the previous methods can
handle. This is because while most concepts can be expressed through
examples/natural language, they are often impossible to explain in terms
of input gradients or more sophisticated scores at the pixel/pixel
aggregation level.
*TCAV*, introduced in the paper "[Interpretability Beyond Feature
Attribution: Quantitative Testing with Concept Activation Vectors
(TCAV)](https://arxiv.org/abs/1711.11279)" [@https://doi.org/10.48550/arxiv.1711.11279],
is a method that allows us to ask whether an abstract concept mattered
in the prediction. Figure [3.18](#fig:tcav){reference-type="ref"
reference="fig:tcav"} gives an overview of the method through an
intuitive example. We have a classifier with one of the classes being
"doctor". We want to know whether some abstract concept was important in
predicting $P(z)$, the "doctor-ness". A concept does not have to be an
explicit part of training: It can be implicitly globally encoded into
the whole image. Instead of relying on gradients/pixel-wise or
superpixel-wise attributions, we directly attribute to the
human-understandable concept, e.g., woman/not woman.
![Overview of the TCAV method that attributes to human-interpretable
concepts. Figure taken from the [ICML presentation
slides](https://beenkim.github.io/slides/TCAV_ICML_pdf.pdf)
of [@https://doi.org/10.48550/arxiv.1711.11279].
](gfx/03_tcav1.pdf){#fig:tcav width="0.8\\linewidth"}
#### Attributing to high-level concepts
Let us first introduce the notation used in the paper:
- $C$: concept;
- $l$: layer index;
- $k$: class index;
- $X_k$: all inputs with label $k$ (e.g., in the training set).
![Individual stages of the TCAV pipeline, taken from the [ICML
presentation slides](https://beenkim.github.io/slides/TCAV_ICML_pdf.pdf)
of [@https://doi.org/10.48550/arxiv.1711.11279]. Quantitative CAV
validation can be performed using statistical testing against a set of random
samples, by validating that the distribution of the obtained TCAV scores
is statistically different from that of random TCAV scores. For example,
one can use a t-test.](gfx/03_tcavzebra.pdf){#fig:tcavzebra
width="0.8\\linewidth"}
Consider the (already trained) sub-network
$f_l: \nR^n \rightarrow \nR^m$ whose output is an intermediate
representation of dimension $m$, corresponding to layer $l$. We denote
the "remaining net" that gives the score to class $k$ by
$h_{l, k}: \nR^m \rightarrow \nR$. The method can be summarized as
follows (Figure [3.19](#fig:tcavzebra){reference-type="ref"
reference="fig:tcavzebra"}). We prepare a set of positive and negative
samples for the concept (e.g., images containing stripes and other
random images). We also prepare images for the studied class (e.g., from
the training set). We train a linear classifier to separate the
activations of the intermediate layer $l$ between the positive and
negative samples for the concept. The Concept Activation Vector (CAV)
$v_C^l$ is the vector *orthogonal to the decision boundary of the linear
classifier*. This is cheap to obtain: the normal of the decision
boundary is the weight vector that points into the positive class. For a
particular input $x$, we consider the *directional derivative of the
prediction $h_{l, k}(f_l(x))$ with respect to the intermediate feature representation of
$x$, $f_l(x)$, in the direction of the CAV*: $$\begin{aligned}
S_{C, k, l}(x) &= \lim_{\epsilon \rightarrow 0} \frac{h_{l, k}(f_l(x) + \epsilon v^l_C) - h_{l, k}(f_l(x))}{\epsilon}\\
&= \nabla_{f_l(x)} h_{l, k}(f_l(x))^\top v^l_C.
\end{aligned}$$ We treat this as the *score* of how much the concept
contributed to the class prediction for this particular example. (How
would it influence our predictions if we moved a tiny bit in the
direction of the concept vector in the feature space?) If the
directional derivative is positive, the concept positively impacts
classifying the input as the class. Otherwise, the concept has a
negative impact.
Finally, the TCAV score for a set of inputs with label $k$, $X_k$, is
calculated as
$$\text{TCAV}_{Q_{C, k, l}} := \frac{\left|\left\{x \in X_k: S_{C, k, l}(x) > 0\right\}\right|}{\left| X_k \right|} \in [0, 1].$$
In words: $\text{TCAV}_{Q_{C, k, l}}$ is the fraction of samples in the
dataset with label $k$ where the contribution of the concept was
positive for the prediction of the class. This metric only depends on
the sign of the scores $S_{C, k, l}$; one could also consider the
magnitude of conceptual sensitivities. The TCAV score turns the
*instance-specific* analysis ($S_{C, k, l}$, local explanation method)
into a more *global* one, for a particular class in general
($\text{TCAV}_{Q_{C, k, l}}$, more global explanation method). It tells
us whether the *presence* of the concept is important for a class in
general.
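A sketch of the two stages (CAV fitting and TCAV scoring), assuming `f_l` maps inputs to the layer-$l$ representation, `h_lk` maps that representation to the class-$k$ score, and that concept/random activations have already been collected (all names are ours):

``` python
import torch
from sklearn.linear_model import LogisticRegression

def cav(acts_concept, acts_random):
    """Train a linear probe and return its normal vector as the CAV v_C^l."""
    X = torch.cat([acts_concept, acts_random]).numpy()
    y = [1] * len(acts_concept) + [0] * len(acts_random)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return torch.tensor(clf.coef_[0], dtype=torch.float32)

def tcav_score(f_l, h_lk, inputs_k, v):
    """Fraction of class-k inputs with a positive directional derivative."""
    positives = 0
    for x in inputs_k:
        feats = f_l(x).detach().requires_grad_(True)
        score = h_lk(feats)
        grad, = torch.autograd.grad(score, feats)
        if (grad * v).sum() > 0:                  # S_{C,k,l}(x) > 0
            positives += 1
    return positives / len(inputs_k)
```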
#### TCAV Results
![Qualitative results of the TCAV method on GoogLeNet and Inception-v3,
taken from the [ICML presentation
slides](https://beenkim.github.io/slides/TCAV_ICML_pdf.pdf)
of [@https://doi.org/10.48550/arxiv.1711.11279]. Stars mark CAVs omitted
after statistical testing against different sets of random images. One can see the
concepts the model looks at to make predictions. TCAV can measure how
important the presence of {red, yellow, blue, green} color is for the
prediction of 'fire engine'. The experiment results show that the red
and green colors are important. This signals a strong geographical bias
towards countries in the dataset with red and green fire engines. TCAV
can also measure how important the presence of different ethnicities is
for the prediction of 'ping-pong ball'. The result of the experiments is
that the East Asian and African concepts are important. This signals a
strong bias towards the ethnicity of players. Agreeing with human
intuition, the 'arms' concept is more important for the prediction of
'dumbbell' than the 'bolo tie' or 'lamp shape'
ones.](gfx/03_tcavres.pdf){#fig:tcavres width="0.5\\linewidth"}
![Results of using the TCAV method for Diabetic Retinopathy, taken from
the [ICML presentation
slides](https://beenkim.github.io/slides/TCAV_ICML_pdf.pdf)
of [@https://doi.org/10.48550/arxiv.1711.11279]. When the model is
accurate, TCAV also shows that it is consistent with the doctor's
knowledge: It gives high scores to features deemed by doctors as a
precise cause for the prediction. When the model is less accurate, TCAV
shows that the model is inconsistent with the doctor's knowledge: It
gives a high score to a concept that the doctors deem not helpful to
look at.](gfx/03_tcavdiab.pdf){#fig:tcavdiab width="0.9\\linewidth"}
Qualitative results of TCAV are shown in
Figure [3.20](#fig:tcavres){reference-type="ref"
reference="fig:tcavres"}. TCAV can also shine in medical image analysis,
as shown in Figure [3.21](#fig:tcavdiab){reference-type="ref"
reference="fig:tcavdiab"}. TCAV can streamline the interaction between
humans and computers for making predictions.
Let us discuss some pros and cons of the
method [@molnar2020interpretable].
- **Pro:** TCAV produces global explanations and can therefore provide
insights into how the model works as a whole. It allows users to
investigate any concept they define and is, therefore, flexible.
- **Contra:** While the flexibility to investigate user-defined
concepts is an advantage, it also has its downside: TCAV may require
additional annotation/efforts to construct a concept dataset.
Depending on the user's needs, TCAV may not easily scale to many
concepts. Furthermore, TCAV requires a good separation of concepts
in the latent space. If a model does not learn such a latent space,
TCAV struggles and may not be applicable, as e.g. in shallow
networks.
### Class Activation Maps (CAM)
*CAM*, introduced in the paper "[Learning Deep Features for
Discriminative
Localization](https://arxiv.org/abs/1512.04150)" [@https://doi.org/10.48550/arxiv.1512.04150],
is a method that attributes to interpretable intermediate features. A
high-level overview of the method is shown in
Figure [3.4](#fig:camsimple){reference-type="ref"
reference="fig:camsimple"}. CAM employs a typical CNN-based architecture
with only a linear operation after calculating the intermediate score
map. Up to the score map, the network is very complicated. Afterward, it
is just a linear model using Global Average Pooling (GAP) and an
intrinsically interpretable linear layer. The key assumption of CAM is
that the attribution to pixels in the score map "kind of" corresponds to
the attribution to original pixels. This is a huge leap of trust, but
CNNs preserve localized information throughout the network (as given by
the receptive field of individual neurons). Thus, the explanation on the
score map also roughly corresponds to one on the original image. Because of
this, we do not have to do linearization for the earlier part of the
network to attribute to pixels. We can easily find the pixel in the
score map that contributes most to the final prediction. We can also
threshold the score map for the label of choice to obtain a
foreground/background mask as an explanation.
#### Original CAM Formulation
Our training likelihood (or prediction) is
$$P(y \mid x) = \operatorname{softmax}\left(\sum_l W_{yl} \left(\frac{1}{HW} \sum_{hw} \bar{f}_{lhw}(x)\right)\right).$$
(We use NLL to train the model.) We obtain our explanation score map at
test time for label $\hat{y}$ by using the formula
$$f_{\hat{y}hw} = \sum_l W_{\hat{y}l}\bar{f}_{lhw}(x).$$ That is, we
weight each channel $l$ of our convolutional feature map $\bar{f}$ by the
weight between channel $l$ and class $\hat{y}$.
The used shapes of the tensors in the above formulation are
$\bar{f}(x) \in \nR^{L \times H \times W} = \nR^{2048 \times 7 \times 7}$
for the ResNet-50 CAM uses[^37] and
$W \in \nR^{C \times L} = \nR^{1000 \times 2048}$ where $C = 1000$ is
the number of classes (using ImageNet-1K).
#### Simplified CAM Formulation
We rewrite our training likelihood as $$\begin{aligned}
P(y \mid x) &= \operatorname{softmax}\left(\sum_l W_{yl} \left(\frac{1}{HW} \sum_{hw} \bar{f}_{lhw}(x)\right)\right)\\
&= \operatorname{softmax}\left(\frac{1}{HW} \sum_{hw} \underbrace{\sum_l W_{yl}\bar{f}_{lhw}(x)}_{f_{yhw}(x) :=}\right)\\
&= \operatorname{softmax}\left(\frac{1}{HW} \sum_{hw} f_{yhw}(x)\right)
\end{aligned}$$ where
$f(x) \in \nR^{C \times H \times W} = \nR^{1000 \times 7 \times 7}$.
After this, we trivially simplify our explanation algorithm by indexing
into our last-layer feature map $f$ that was already calculated in the
forward propagation:
$$f_{\hat{y}hw} = \sum_l W_{\hat{y}l}\bar{f}_{lhw}(x).$$
We do not have to do an additional matrix multiplication to generate the
score map, i.e., we do not have to do the linear computation twice. We
calculate it once for the forward propagation and then reuse the
intermediate result for the class of interest by taking the last-layer
feature map with channel index $=$ class of interest. This gives us the
score map directly. In the original formulation, we first perform GAP,
and then use the FC layer during forward propagation. In the new
formulation, we have to modify our original model a bit: We exchange the
GAP and FC layers and turn the FC layer into a $1 \times 1$
convolutional layer.
The FC operation is identical to the $1 \times 1$ convolution operation,
except that FC operates on non-spatial 1-dimensional features, but
$1 \times 1$ convolution operates on spatial 3-dimensional features. We
apply the same matrix multiplication for every "pixel" ($\in \nR^L$) in
the spatial dimensions of the feature map. Shape of weights:
$\nR^{1000 \times 2048 \times 1 \times 1}$. This
ResNet [@https://doi.org/10.48550/arxiv.1512.03385] variant is fully
convolutional. These have been used extensively in the era of CNNs for
semantic segmentation. In this case, we are training for a pixel-wise
prediction of the class; thus, we need a tensor output. Usually, the
exact spatial dimensionality is not retained; we have an hourglass
architecture and upscaling at the
end [@https://doi.org/10.48550/arxiv.1505.04597]. Mask
R-CNN [@https://doi.org/10.48550/arxiv.1703.06870] also predicts a
binary *instance* mask for each detection, and it also has to upscale to
the original window size.[^38]
We compare the implementation of both approaches. With the simplified
formulation, extracting the score map becomes much more straightforward.
The two approaches are visualized in
Figure [3.22](#fig:cam2){reference-type="ref" reference="fig:cam2"}.
Python code for the original CAM formulation using PyTorch is shown in
Listing [\[lst:original\]](#lst:original){reference-type="ref"
reference="lst:original"}. Similarly, Python code for the simplified CAM
formulation is shown in
Listing [\[lst:new\]](#lst:new){reference-type="ref"
reference="lst:new"}.
![Comparison of the two formulations of the CAM method. Extracting the
score map corresponding to any of the classes becomes significantly
easier when using the bottom formulation. The overhead of having to
store the tensor of shape $1000 \times 7 \times 7$ in memory is
negligible.](gfx/03_cam2.pdf){#fig:cam2 width="\\linewidth"}
::: booklst
lst:original

``` python
import torch
import torch.nn as nn

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x

    def compute_explanation(self, x, y):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)  # (1, 512 * block.expansion, w, h)
        # FC weights for class y, reshaped for broadcasting over the feature map.
        weights = self.fc.weight.data[y, :].unsqueeze(0).unsqueeze(2).unsqueeze(3)  # (1, 512 * block.expansion, 1, 1)
        return torch.nansum(weights * x, dim=1)  # (1, w, h)
```
:::
::: booklst
lst:new

``` python
import torch.nn as nn

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        # FC layer replaced by an equivalent 1 x 1 convolution, applied before GAP.
        self.conv_last = nn.Conv2d(512 * block.expansion, num_classes, kernel_size=1)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.conv_last(x)
        x = self.avgpool(x)
        return x

    def compute_explanation(self, x, y):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)  # (1, 512 * block.expansion, w, h)
        x = self.conv_last(x)  # (1, num_classes, w, h)
        return x[:, y]  # (1, w, h)
```
:::
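As a usage sketch (our own, not part of the original listings), the extracted score map is typically upscaled to the input resolution and max-normalized before being overlaid on the image; the class index is arbitrary here:

``` python
import torch
import torch.nn.functional as F

# model: an instance of the (simplified) ResNet above; x: input of shape (1, 3, 224, 224).
cam = model.compute_explanation(x, y=243)           # (1, 7, 7) score map for class 243
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)  # (1, 1, 224, 224)
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)                       # max-normalize to [0, 1] for overlay
```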
### Comparison of the two CAM implementations
Let us consider the pros and cons of the simplified CAM implementation.
- **Pro**: Simpler implementation (especially for CAM computation)
without changing the model performance (confirmed through numerous
experiments).
- **Contra**: More memory usage (but negligible). In the original
formulation, after we perform GAP, we are left with a 2048D vector.
In the simplified formulation we first perform the $1 \times 1$
convolution, which results in a tensor of shape
$\nR^{1000 \times 7 \times 7}$. We need to store more floating point
values for backprop (and for the CAM computation), but this is
negligible compared to the total memory usage of a deep net.
### Assumptions to Make CAM Work
As we have seen, CAM assumes an architecture in which we only perform
linear operations after computing the score map. There should be a
linear mapping from the feature map to the final score. (This does not
hold if we also consider the softmax activation, but that is generally
considered an interpretable operation, and we usually attribute to the
logits.) The GAP operation is just a linear sum of $7 \times 7$ features
channel-wise, which is very interpretable. (Final prediction can be
split into predictions from each of the features of the feature map. For
sums, people have an excellent intuition about what is contributing by
how much.)
As discussed previously, another crucial assumption of CAM is that the
feature map pixels contain information specific to the corresponding
input pixels. We treat each "pixel" of the score map as the feature
corresponding to the input pixels at the same spatial location. This was
empirically found to be true for CNNs because of translational
equivariance. We can trace every feature pixel in the score map back to
the possible range of pixels in the input that influenced that feature
pixel, called the *receptive field*. This tends to be huge, but the
corresponding pixels "move" with the feature pixels via translational
equivariance.
Notably, the assumptions only work to some extent. We get *coarse*
attribution scores upscaled to fit the input shape. This upscaling (or
overlaying) is not theoretically justified, but we still get pretty
sound attributions, as measured by soundness evaluation techniques. We
can find worst-case examples where (because the receptive field can be
much larger than the region one upscaled attribution "pixel" covers) the
attribution map does not attribute the pixel responsible for the
prediction at all. However, these are pretty artificial examples.
A spatial location of the score map usually has a very large receptive
field, especially for a deep architecture like the ResNet-50. When we
make use of CAM, we simply upscale it to match the input dimensions. By
doing so, the input pixels that correspond to the score map "pixels"
according to CAM can be much fewer than the number of pixels in the
receptive field of a particular score map "pixel", i.e., the number of
pixels that actually influence this score map value. CAM might be
localizing more than it should. There is no guarantee that there is a
nice straight mapping between the feature map pixels and the raw pixels.
Researchers might have gained such insight through semantic segmentation
models using a fully convolutional architecture (e.g.,
DeepLab [@https://doi.org/10.48550/arxiv.1606.00915]) where the
pixel-wise prediction is directly generated from the feature map (after
upsampling). However, such models are trained with pixel-wise
supervision, and so they are explicitly instructed that each feature
pixel should mostly encode the content of the input area around that
feature pixel (not the entire receptive field). CAM is not trained with
such a signal -- only the aggregation of the pixel-wise predictions is
supervised -- so there is even less guarantee for the correspondence. In
fact, the CAM activation pattern tends to reflect the shape of the
receptive field. There exist architectures with non-unimodal receptive
fields; for example, DeepLab has hierarchical, checkerboard-like
receptive field patterns due to the dilated convolutions and increased
convolution strides. As a result, CAM applied to DeepLab shows
checkerboard-like activation maps even if the object is at one location
of the given image.
The upsampling also introduces various possible problems with
interpretability. The upsampling method also influences the attributions
(e.g., nearest vs. bilinear vs. bicubic) and detaches the explanation
from the extracted feature map values. This is similar to how the
normalization of the CAM attributions is detached from the intuitive
feature map values and how they relate to the output of the network.
We will see that more sophisticated methods (e.g., CALM [@kim2021keep]
in Section [3.5.19](#sssec:calm){reference-type="ref"
reference="sssec:calm"}) still suffer from the "handwaviness" of the
upscaling.[^39] In summary, the foundation of CAM is questionable, and
the attribution maps should be taken with a grain of salt.
The validity of the assumption that the feature map pixels correspond to
the respective input pixels is unclear for Transformer-based models, as
they do not have translational equivariance. It is unclear if the
token-wise features after all self-attention layers actually convey the
same semantic content as the corresponding tokens at the beginning. Most
likely, CAM would not work well for Vision Transformers
(ViTs) [@https://doi.org/10.48550/arxiv.2010.11929].
::: information
Attribution Methods for Transformers For Transformers, various
attribution methods are discussed in the paper "[XAI for Transformers:
Better Explanations through Conservative
Propagation](https://arxiv.org/abs/2202.07304)" [@https://doi.org/10.48550/arxiv.2202.07304].
It discusses Generic Attention Explainability (GAE), input $\times$
gradient methods, and several other methods for Transformers.
:::
### Grad-CAM -- Generalizing CAM to non-linear $h$ {#sssec:gradcam}
![High-level motivation of the Grad-CAM method. In Grad-CAM,
irrespective of the particular task-specific network we have on top of
the convolutional feature representation, as long as it is
differentiable, we can linearize it around the feature space point of
interest $g(x)$.](gfx/03_gradcam1.pdf){#fig:gradcam1
width="0.9\\linewidth"}
![Detailed overview of the Grad-CAM method, a generalization of the CAM
method that employs linearization. CNN denotes a fully convolutional
network like the one we have seen in CAM. Until the *Rectified Conv
Feature Maps*, the architecture considered is the same as the one in CAM
-- $A \in \nR^{2048 \times 7 \times 7}$ is the 3D convolutional feature
map we previously denoted by $\bar{f}$. Note: $A$ can have other shapes,
too -- Grad-CAM is not restricted to fixed shapes, and neither is CAM.
However, the later layers require some modifications to the CAM method.
First, we consider image classification. In VGG, after the convolutional
layers, the authors employ three more FC layers with activations. This
results in a non-linear second part -- vanilla CAM no longer works. In
2017, LSTMs were the gold standard for NLP tasks (including image
captioning). (A lot has changed since; Transformers have taken over the
field.) An LSTM also contains many non-linearities -- vanilla CAM does
not work here, either. To solve this, we linearize the second part of
the network architecture for each input $x$. In particular, we calculate
the 3D gradient map of the logit for class $c$ with respect to the
intermediate representation,
$\frac{\partial y^c}{\partial A} \in \nR^{2048 \times 7 \times 7}$
(meaning that $\frac{\partial y^c}{\partial A^k_{ij}} \in \nR$). Base
Figure taken
from [@selvaraju2017grad].](gfx/03_gradcam2.pdf){#fig:gradcam2
width="\\linewidth"}
What happens if the part of our network between the feature map and the
final scores is not linear? This assumption was one of the reasons why
we could take an intermediate layer as a feature attribution map in CAM
-- the remaining layers were linear and, therefore, intrinsically
interpretable. Had they not been linear, we could not have applied CAM
directly.
Conveniently, we can extend CAM using linearization techniques we have
seen before.
[Grad-CAM](https://arxiv.org/abs/1610.02391) [@selvaraju2017grad] is a
follow-up method on CAM that extends it to non-linear parts between the
feature map and the final scores. A high-level overview and description
of the method is given in
Figure [3.23](#fig:gradcam1){reference-type="ref"
reference="fig:gradcam1"}. A detailed overview of Grad-CAM is shown in
Figure [3.24](#fig:gradcam2){reference-type="ref"
reference="fig:gradcam2"}.
**Remarks**: It is unclear why the authors apply a ReLU to the weighted
sum: CAM already performs max-normalization, i.e., it drops all
negative values and normalizes by the maximum value anyway. Using the
notation of Figure [3.23](#fig:gradcam1){reference-type="ref"
reference="fig:gradcam1"}, instead of having a linear $h$ part, the
authors linearize $h$ locally around $g(x)$ and then compute CAM on this
linearized network.
**Note**: People often refer to CAM as Grad-CAM because the latter is a
generalization of the former. When Grad-CAM is mentioned in a paper, it may
thus actually refer to CAM; it depends on the architecture it is used on.
::: information
Grad-CAM Generalizes CAM In CAM, we had
$$f_{\hat{y}hw} = \sum_l W_{\hat{y}l}\bar{f}_{lhw}(x).$$ Here, we have a
*generalization* of CAM (considering $A = \bar{f}$). Using the notation
from Figure [3.24](#fig:gradcam2){reference-type="ref"
reference="fig:gradcam2"}, $$\begin{aligned}
s^c_{hw} &= \frac{1}{HW}\sum_{ijk}\frac{\partial y^c}{\partial A_{kij}}A_{khw}\\
&= \sum_k \underbrace{\frac{1}{HW}\sum_{ij}\frac{\partial y^c}{\partial A_{kij}}}_{\alpha^c_k}A_{khw}
\end{aligned}$$ To see that this is indeed a generalization of CAM (with
a twist), observe that when we have a linear second part, i.e.,
$$y^c = \frac{1}{HW}\sum_{hw}\sum_k W_{y^ck} A_{khw},$$ then
$$\begin{aligned}
\frac{\partial y^c}{\partial A_{lij}} &= \frac{\partial}{\partial A_{lij}} \frac{1}{HW}\sum_{hw}\sum_k W_{y^ck} A_{khw}\\
&= \frac{1}{HW}\sum_{hw}\sum_k W_{y^ck} \frac{\partial}{\partial A_{lij}} A_{khw}\\
&= \frac{1}{HW}\sum_{hw}\sum_k W_{y^ck} \delta_{lk}\delta_{ih}\delta_{jw}\\
&= \frac{1}{HW} W_{y^cl},
\end{aligned}$$ thus,
$$\alpha^c_k = \frac{1}{HW}\sum_{ij} \frac{1}{HW} W_{y^ck} = \frac{1}{HW} W_{y^ck},$$
which nearly gives us back the CAM formulation but has an additional
scaling term $\frac{1}{HW}$. This, however, does not matter for the
final activation map because it is normalized. This scaling factor just
makes the computation a bit more stable by averaging. This shows that
this method is a natural extension of CAM.
:::
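To make the $\alpha^c_k$ computation concrete, here is a minimal Grad-CAM sketch in PyTorch. The split of the model into a fully convolutional `features` part (producing $A$) and a possibly non-linear `head` part (producing the logits), as well as the function name, are our own illustrative assumptions rather than the authors' reference implementation.

```python
import torch

def grad_cam(features, head, x, target_class):
    """Minimal Grad-CAM sketch: linearize the head around A = features(x).

    `features` maps an image batch to a feature map A of shape (1, K, H, W);
    `head` maps A to class logits of shape (1, C).
    """
    A = features(x)                      # (1, K, H, W), the rectified conv feature map
    A.retain_grad()                      # we need dy^c/dA, and A is a non-leaf tensor
    logits = head(A)                     # (1, C)
    logits[0, target_class].backward()   # gradient of the class-c logit w.r.t. A

    alpha = A.grad.mean(dim=(2, 3))      # (1, K): spatial average = (1/HW) sum_ij dy^c/dA_kij
    cam = torch.einsum("nk,nkhw->nhw", alpha, A)   # weighted sum over feature channels
    cam = torch.relu(cam)                # ReLU on the weighted sum, as in the Grad-CAM paper
    return cam / (cam.max() + 1e-8)      # max-normalized map; upsample to input size for display
```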
### Remaining Weakness of CAM {#sssec:cam}
CAM is not as interpretable as we would want. While the function on top
of the feature map is linear (GAP + $1 \times 1$ convolution), that is
not the end of the story. This is because when we compute CAM, we have
*an additional step of normalization* of the map to be in the image
value range. The unnormalized score map is taken from the pre-softmax
values; thus, we have no guarantee of normalization. We do not only need
normalization to be in the image value range -- a fixed range for the
score map is needed anyway, as otherwise, there would be no way to
compare score maps consistently across many images. The model we train
in CAM is
$$P(y \mid x) = \operatorname{softmax}\left(\frac{1}{HW}\sum_{hw}f_{yhw}(x)\right),$$
which is very interpretable. However, the final score map is calculated
in two ways:
$$s = \begin{cases} \frac{\max(0, f^{\hat{y}})}{f^{\hat{y}}_\text{max}} & \text{ if max} \\ \frac{f^{\hat{y}} - f^{\hat{y}}_{\text{min}}}{f^{\hat{y}}_\text{max} - f^{\hat{y}}_{\text{min}}} & \text{ if min-max}\end{cases} \in [0, 1]^{H \times W}.$$
These non-linear transformations of our feature map are hard to
interpret. In English, the max-version could be explained as
::: center
"The pixel-wise pre-GAP, pre-softmax feature value at $(h, w)$, measured
in relative scale within the range of values $[0, A]$ where $A$ is the
maximum of the feature values in the entire image."
:::
It is clear that we could not explain this to an end user who has no
knowledge of ML. They would not understand what is being shown in the
score map, yet such understanding is a necessary condition for the
attributions to be deemed human-understandable.
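For concreteness, the two post-hoc normalization variants from the equation above can be written as the following NumPy sketch (the function name and the toy values are ours, purely for illustration):

```python
import numpy as np

def normalize_cam(score_map: np.ndarray, mode: str = "max") -> np.ndarray:
    """Test-time normalization of an unnormalized CAM score map into [0, 1].

    Note that this step is not part of the training graph -- exactly one of
    the criticisms discussed above.
    """
    if mode == "max":        # drop negative values, divide by the maximum
        return np.maximum(score_map, 0.0) / (score_map.max() + 1e-8)
    if mode == "min-max":    # shift by the minimum, divide by the value range
        return (score_map - score_map.min()) / (score_map.max() - score_map.min() + 1e-8)
    raise ValueError(mode)

# Two maps with identical structure but a different offset receive very different
# normalized values -- the normalization is detached from the raw feature values.
f = np.array([[1.0, 2.0], [3.0, 4.0]])
print(normalize_cam(f, "max"))         # ~[[0.25, 0.50], [0.75, 1.00]]
print(normalize_cam(f + 10.0, "max"))  # ~[[0.79, 0.86], [0.93, 1.00]]
```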
A summary of problems with CAM as an attribution method is given below:
- *The test computational graph is not a part of the training graph.*
In a sense, we are making up values later, at test time, for the
score map.
- *We only have an unintuitive description of the score map values in
English.* It is difficult to explain the attribution values to
clients. Another problem with the normalization method is that
min-max or max normalization suffers from outliers without clipping.
If one is not careful, whenever the pre-normalized scores contain
outliers, the normalized score maps can become uninformative: when
visualized, everything seems roughly equally important and the
displayed map is not faithful anymore to the actual attribution
scores.
- *CAM also violates widely accepted "axioms" for attribution
methods.* Details are given in the
[CALM](https://arxiv.org/abs/2106.07861) paper.
### Class Activation Latent Mapping (CALM) {#sssec:calm}
To fix the problems introduced in
Section [3.5.18](#sssec:cam){reference-type="ref"
reference="sssec:cam"}, we discuss the paper "[Keep CALM and Improve
Visual Feature
Attribution](https://arxiv.org/abs/2106.07861)" [@kim2021keep]. In CALM,
we approach the problem with a fully probabilistic treatment of the last
layers of CNNs.
**Notation**:
- $X$: Input image.
- $Y$: Class label $\in \{1, \dotsc, C\}$.
- $Z$: Pixel index (location) $\in \{1, \dotsc, M\}$ in the feature
map. For example, possible values for $Z$ are $1, \dots, 49$ for a
$7 \times 7$ feature map. $Z$ is a discrete random variable in the
spatial feature map dimensions.
**Task**: "Predict $Y$ from $X$ by looking at pixel $Z$." Our prediction
is based on the observations at feature location $Z$. $Z$ is a latent
variable not observed during training. Only $X$ and $Y$ are observed;
the training set is the same as always. In particular, we do not have GT
values for $Z$.
We use the following decomposition of the joint distribution:
$$\begin{aligned}
P(x, y, z) &= P(y, z \mid x)P(x)\\
&= P(y \mid x, z)P(z \mid x)P(x).
\end{aligned}$$
![Detailed overview of CALM, a fully probabilistic approach to feature
attribution. The CNN used is an FCN, just like in CAM and Grad-CAM.
Figure taken from [@kim2021keep].](gfx/03_calm2.pdf){#fig:calm2
width="\\linewidth"}
This corresponds to the probabilistic graph (directed graphical model)
illustrated in [\[fig:calm1\]](#fig:calm1){reference-type="ref"
reference="fig:calm1"}. A detailed overview of CALM is provided in
Figure [3.25](#fig:calm2){reference-type="ref" reference="fig:calm2"}.
We discuss the individual parts of the model here.
#### Part (a) of Figure [3.25](#fig:calm2){reference-type="ref" reference="fig:calm2"} {#part-a-of-figure-figcalm2}
We obtain the conditional joint distribution of $Y$ and $Z$ given $X$.
Both $Y$ and $Z$ are discrete random variables; thus, we can fully
represent their joint distribution by a 3D tensor, where $z$ is a 2D
spatial index and $y$ is a 1D index. Before spatial $L_1$ normalization,
we apply softplus. $g(x)$ and $h(x)$ are network predictions. The only
requirement for the joint distribution is that the values are between
$0$ and $1$, and they sum up to $1$. This is enforced by the
normalization before the element-wise multiplication. We could also just
apply global softmax on the entire $g(x)$ pre-activation tensor that
normalizes both the class and spatial dimensions, but it did not perform
well in the early experiments, according to the authors. Softmax and
softplus + $L_1$ norm are very similar: both are eventually $L_1$
normalization, but softmax exponentiates before normalizing and
softplus + $L_1$ uses softplus before normalization. Exponentiation can
sometimes be too harsh because it can blow up high values to infinity or
push low values down to virtually 0. Softplus, on the other hand, is
much better behaved -- the transformation is approximately linear on the
positive side. For this reason, one should always consider using
softplus + $L_1$ norm when softmax blows up neural network training. It
would be interesting to observe how turning softmax in Transformers into
softplus + $L_1$ norm influences the behavior of these networks.
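A small PyTorch sketch contrasting the two normalizations on made-up logits illustrates the point:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([8.0, 1.0, 0.5, -3.0])

# Softmax: exponentiate, then L1-normalize. The largest logit dominates harshly.
p_softmax = F.softmax(logits, dim=0)        # ~[0.9985, 0.0009, 0.0006, 0.0000]

# Softplus + L1: approximately linear on the positive side, then L1-normalize.
sp = F.softplus(logits)
p_softplus_l1 = sp / sp.sum()               # ~[0.77, 0.13, 0.09, 0.005] -- much less peaked
```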
#### Part (b) of Figure [3.25](#fig:calm2){reference-type="ref" reference="fig:calm2"} {#part-b-of-figure-figcalm2}
We obtain the test-time prediction from the network. Similarly to CAM,
we perform global pooling, but here it is global *sum* pooling: we sum instead
of averaging because the elements are probabilities, and summing over $z$
corresponds to marginalization.
#### Part (c) of Figure [3.25](#fig:calm2){reference-type="ref" reference="fig:calm2"} {#part-c-of-figure-figcalm2}
We obtain the attribution map from the network for a particular input
$x$. $\hat{y}$ is the ground truth label. For a particular location $z$,
the *attribution score* $s_z$ is $$s_z := P(\hat{y}, z \mid x).$$ In
English, the map is significantly simpler to explain than CAM:
::: center
"The probability that the cue for recognition was at $z$ and the ground
truth class $\hat{y}$ was correctly predicted for image $x$."
:::
A nice property of this formulation is that the attribution map is
well-calibrated: it lies between $0$ and $1$ and has a probabilistic
interpretation. One can also normalize over $z$ and calculate the attribution
map for the predicted class to obtain a formulation similar to that of CAM.
Overall, we have a simpler way to compute a calibrated explanation score map.
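Putting parts (a)-(c) together, a minimal sketch of the CALM forward pass could look as follows. The shapes, the variable names, and the exact normalization choices (softmax over classes per location, softplus + $L_1$ over locations) are our reading of the description above, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def calm_forward(g_logits, h_logits):
    """Sketch of CALM's joint distribution P(y, z | x) for a single image.

    g_logits: (C, H, W) per-location class evidence, h_logits: (H, W) location evidence.
    """
    g = F.softmax(g_logits, dim=0)      # P(y | x, z): normalized over classes per location
    h = F.softplus(h_logits)
    h = h / h.sum()                     # P(z | x): softplus + spatial L1 normalization
    return g * h.unsqueeze(0)           # P(y, z | x): non-negative, sums to 1 over (y, z)

joint = calm_forward(torch.randn(10, 7, 7), torch.randn(7, 7))
p_y = joint.sum(dim=(1, 2))             # part (b): global *sum* pooling marginalizes z
y_hat = 3                               # pretend this is the ground-truth class
s = joint[y_hat]                        # part (c): s_z = P(y_hat, z | x), values already in [0, 1]
```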
#### Part (d) of Figure [3.25](#fig:calm2){reference-type="ref" reference="fig:calm2"} {#part-d-of-figure-figcalm2}
::: information
DeepLab
[DeepLab](https://arxiv.org/abs/1606.00915) [@https://doi.org/10.48550/arxiv.1606.00915]
is a semantic segmentation network from 2016. It was SotA on the PASCAL
VOC-2012 semantic segmentation task at the time of its publication.
:::
We consider two ways of training CALM: Marginal Likelihood (ML) and
Expectation maximization (EM). These are typical methods to train a
latent variable model. Let us discuss them in this order.
**Marginal likelihood.** This method directly minimizes the negative
log-marginal likelihood. This is the usual way to train when obtaining
$P(y \mid x)$ is tractable. The NLL is simply the CE loss
$$-\log P(\hat{y} \mid x) = -\log \sum_z P(\hat{y} \mid x, z) P(z \mid x) = - \log \sum_z \tilde{g}_{\hat{y}z}\cdot \tilde{h}_z.$$
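Continuing the toy `calm_forward` sketch from above (variable names are ours), the marginal-likelihood loss is one line:

```python
import torch

# Fresh forward pass with gradients enabled (in practice the logits come from the CNN).
g_logits = torch.randn(10, 7, 7, requires_grad=True)
h_logits = torch.randn(7, 7, requires_grad=True)
joint = calm_forward(g_logits, h_logits)    # sketch from above
y_hat = 3

# NLL = -log P(y_hat | x) = -log sum_z P(y_hat, z | x); ordinary backprop suffices.
nll = -torch.log(joint[y_hat].sum() + 1e-12)
nll.backward()
```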
**Expectation maximization.** EM-style training is common for CNN-based
segmentation methods; it is exactly how a
DeepLab [@https://doi.org/10.48550/arxiv.1606.00915] model is trained using
pixel-wise GT masks. Here, we optimize the joint tensor. We take the GT slice
of the joint distribution: it is a likelihood because we plug in our knowledge
of the true $y$, and it is unnormalized in $z$. We $L_1$-normalize this slice
(i.e., divide by the sum of its values) to obtain the pseudo-target -- this is
what we want $P(\hat{y}, z \mid x)$ to become, as only the $\hat{y}$ slice
should carry positive probability, and the joint then becomes properly
normalized in $z$ when considering $\hat{y}$. A `detach()` is needed so that no
gradient flows through the *target*. Since the joint contains an entire
prediction vector for every pixel, we (conceptually) expand the pseudo-target
into a one-hot vector along the class dimension (the $\hat{y}$ dimension holds
the pseudo-target, all other class dimensions are zero). Then we apply a CE loss.
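A sketch of the corresponding EM-style update, following our reading of the description above (not the reference implementation):

```python
import torch

joint = calm_forward(g_logits, h_logits)    # recompute the joint from the sketch above

# E-step: pseudo-target = GT slice of the joint, detached and L1-normalized over z.
with torch.no_grad():
    t = joint[y_hat].detach()
    t = t / (t.sum() + 1e-12)               # q(z); no gradient flows through the target

# M-step: cross-entropy between the (one-hot-in-y) pseudo-target and the joint.
# Only the y_hat slice contributes, since all other class slices of the target are zero.
em_loss = -(t * torch.log(joint[y_hat] + 1e-12)).sum()
em_loss.backward()
```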
#### CALM Addresses the Limitations of CAM
CALM addresses the limitations of CAM detailed previously:
- *The test computational graph is a part of the training graph.* The
training, test, and interpretation phases are all probabilistic.
- *We have an intuitive description of the score map values.*
- *CALM respects all widely accepted "axioms" for attribution
methods.* Exact details are discussed in the
[CALM](https://arxiv.org/abs/2106.07861) paper. Being probabilistic,
CALM has many linear components.
While this method solves many problems with CAM, it still lacks
reasoning about upscaling the score map instead of taking receptive
fields into account more rigorously.
#### Windfall features for CALM attributions
![Examples of different windfall attributions we can obtain from the
joint $P(y, z \mid x)$ in CALM, taken from the
paper [@kim2021keep].](gfx/03_calm3.pdf){#fig:calm3
width="0.6\\linewidth"}
CALM comes with numerous windfall gains.[^40] When the attribution map
is well-calibrated and probabilistic, we can compute a lot of
derivative[^41] attributions on top of it, as illustrated in
Figure [3.26](#fig:calm3){reference-type="ref" reference="fig:calm3"}.
Score maps can be given, e.g., for
- the GT class (first row, second image);
- the likelihood of the GT class (first row, third image -- the
difference is the normalization factor);
- the predicted class (not shown);
- a generic class (not shown);
- all classes (second row, first image);
- multiple classes (second row, second image);
- and counterfactuals (second row, third image).
We discuss some of the options in detail below.
Marginalizing out all classes allows us to gain an overview. "Where is
any object that belongs to the 1k classes in ImageNet-1K?" (Only a
somewhat valid interpretation when the network is devoid of any spurious
correlation, but even then, its prediction might only depend on small
object parts that are very predictive.) "What image regions does the
network attribute to any of the classes?" (Valid interpretation for any
network.)
We can also sum scores for a subset of classes (e.g., dog, living thing,
equipment, object, edible fruit, or food). Here, we sum up score maps
for all dog classes (118 in ImageNet-1K). We get better-delineated
boundaries for the dog meta-class.
Subtracting different score maps gives a counterfactual explanation of
why we chose a class over another. The score map still makes sense; we
just use different colors for the two classes.
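Because all of these maps are simple reductions of the same joint tensor, each is essentially one line. Continuing the toy `joint` from the CALM sketch above (the class indices are made up):

```python
s_gt             = joint[y_hat]                       # map for the GT class: P(y_hat, z | x)
s_gt_likelihood  = joint[y_hat] / joint[y_hat].sum()  # likelihood map P(z | x, y_hat): only the normalization differs
s_any_class      = joint.sum(dim=0)                   # marginal over all classes: P(z | x)
dog_classes      = [1, 4, 7]                          # hypothetical subset of class indices (e.g., all dog classes)
s_dog_metaclass  = joint[dog_classes].sum(dim=0)      # summed map for a meta-class
s_counterfactual = joint[y_hat] - joint[5]            # counterfactual: "why y_hat rather than class 5?"
```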
![Qualitative comparison of CALM and other attribution methods against
the GT CUB annotations, taken from the paper [@kim2021keep]. In detail,
the authors select the ground truth class and one that is easily
confused with it (i.e., the differences appear only on a few body parts
of the bird species). They want the model to give the same attributions
to the body parts where the classes' (birds') attributes are the
same.](gfx/03_calm4.pdf){#fig:calm4 width="\\linewidth"}
Let us now turn to Figure [3.27](#fig:calm4){reference-type="ref"
reference="fig:calm4"}. One can evaluate the quality of the attribution
maps on the CUB dataset as follows.
::: center
"We compare the counterfactual attributions from CALM and baseline
methods against the GT attribution mask \[on CUB\]. The GT mask
indicates the bird parts where the attributes for the class pair
$(A, B)$ differ. The counterfactual attributions denote the difference
between the maps for classes $A$ and $B$: $s^A - s^B$.
\[\...\]" [@kim2021keep]
:::
The corresponding results are shown qualitatively in
Figure [3.27](#fig:calm4){reference-type="ref" reference="fig:calm4"}
and quantitatively in Table [3.1](#tab:calm5){reference-type="ref"
reference="tab:calm5"}. One possible problem with the evaluation in
Figure [3.27](#fig:calm4){reference-type="ref" reference="fig:calm4"} is
that the attribution maps that are compared are $P(z, \hat{y} \mid x)$
and $P(z, \tilde{y} \mid x)$ (where $\hat{y}$ and $\tilde{y}$ are two
similar classes), which are not normalized in $z$. In particular, they
do not even sum to the same value in $z$, as the predicted probabilities
$P(\hat{y} \mid x$ and $P(\tilde{y} \mid x)$ are never exactly equal for
NN predictions. The paper mentions that if a pixel for both classes is
equally important, the difference ideally cancels out so the
counterfactual attribution map ideally focuses on pixels that affect the
two classes differently. But because of the two maps not being on the
same scale, even if proportionally some pixel has the same relative
importance, the values are not going to cancel. Thus, the plot very
likely shows that the individual attributions are already *only*
focusing on parts that are discriminative between the two classes. This
would mean that the given reasoning for the feature maps is slightly
incorrect. Without knowing the individual attribution maps, the
difference is also not very descriptive. For example, for the ground
truth class in the above row, the bird's head seems to be a very
distinctive factor for the prediction. However, the wing might also be a
factor that the network takes into consideration *for* the GT class, but
it is certainly taken with a higher attribution value for the
alternative class because it is pale blue. Still, we do not know the
exact attributions.
CALM gives counterfactual score maps that often coincide with the GT
masks on the CUB task. CALM with EM beats CAM on the CUB benchmark, as
shown in Table [3.1](#tab:calm5){reference-type="ref"
reference="tab:calm5"}. Both CALM variants beat other attribution
methods. (This is not an evaluation of soundness. The model has all the
rights to look elsewhere, e.g., because it suffers from spurious
correlations.)
The authors evaluate the soundness of their method using
remove-and-classify (Figure [3.28](#fig:rac){reference-type="ref"
reference="fig:rac"}; discussed in
Section [3.7.7](#sssec:rac){reference-type="ref"
reference="sssec:rac"}). CALM performs best and seems to be the most
sound. [@kim2021keep]
::: {#tab:calm5}
| Method                    | 1 part diff. (31 class pairs) | 2 part diffs. (64 class pairs) | 3 part diffs. (96 class pairs) | Mean     |
|---------------------------|-------------------------------|--------------------------------|--------------------------------|----------|
| Vanilla Gradient          | 10.0                          | 13.7                           | 15.3                           | 13.9     |
| Integrated Gradient       | 12.0                          | 15.1                           | 17.3                           | 15.7     |
| Smooth Gradient           | 11.8                          | 15.5                           | 18.6                           | 16.5     |
| Variance Gradient         | 16.7                          | 21.1                           | 23.1                           | 21.4     |
| CAM                       | 24.1                          | 28.3                           | 32.2                           | 29.6     |
| $\text{CALM}_\text{ML}$   | 23.6                          | 26.7                           | 28.8                           | 27.3     |
| $\text{CALM}_\text{EM}$   | **30.4**                      | **33.3**                       | **36.3**                       | **34.3** |

: Quantitative comparison of CALM and other attribution methods
against the GT CUB annotations, taken from the paper [@kim2021keep].
:::
!["**Remove-and-classify results.** Classification accuracies of CNNs
when $k$% of pixels are erased according to the attribution values
$s_{hw}$. We show the relative accuracies $\mathcal{R}_k$ against the
random-erasing baseline. Lower is better." [@kim2021keep] CALM performs
well on the remove-and-classify benchmark. Figure taken
from [@kim2021keep].](gfx/03_rac.pdf){#fig:rac width="0.8\\linewidth"}
#### Cost to Pay in CALM
CALM is clearly a better explainability method than CAM but is not
necessarily a better classifier. CALM is changing the network structure,
so it is very different from the reformulation of CAM. There, we had an
equivalent formulation: The ResNet-50 architecture is fully compatible
with CAM. Here, we do not have such an equivalent formulation. Now if we
change the original network structure to the CALM formulation, we are
changing the mathematical structure of the model. We cannot expect the
same accuracy.
As shown in Table [3.2](#tab:calm6){reference-type="ref"
reference="tab:calm6"}, CALM EM sometimes gains a few points of accuracy
and sometimes loses a few against the baseline (which is equivalent to CAM).
CALM ML sometimes becomes much worse than the baseline in accuracy and
sometimes stays close, being behind by only a few percentage points. One has
to be careful about the possible accuracy loss with CALM.
There is an inherent trade-off between interpretability and accuracy. The
existence of this trade-off is very curious: it also means that
depending on our actual needs, we might need to choose a different
model. For example, losing 4% accuracy might not be as important as
gaining confidence and a better picture of how our model works. For such
applications, we probably need CALM-trained models. Why must there be a
trade-off between accuracy and the model's ability to explain itself?
Because we limit ourselves to a smaller fraction of models if we are
confined to interpretable models.
There are diverse requirements for deployment. We need to develop more
diverse types of models. We should not only aim for models that perform
well on a validation set but also develop slightly suboptimal models
that are, e.g., interpretable or generalize very well to unseen
situations. As an attribution method, there is room for improvement for
CAM. CALM improves upon CAM regarding explainability. The better
interpretability of CALM also contributes to better Weakly Supervised
Object Localization (WSOL) [@kim2021keep], even though WSOL is not
precisely aligned with explainability. Better interpretability, however,
comes with a cost to pay (accuracy).
The human-interpretability of an XAI method does *not* mean that we wish
to make the model recognize things as humans do (human alignment).
Instead, we wish to present the behavior of the model in a form that is
*understandable* by humans. No model will "start thinking like humans"
by using human-interpretable XAI methods. There is no human alignment
involved in the above reasoning. These two are also orthogonal axes of
variation: the model might make decisions just like humans do, but the
XAI method might fail to capture this. Vice versa, the XAI method might
*show* that the model makes decisions in a very human-aligned way, but
it might just be because of the poor soundness of the method. However,
there *are* also occasions where we want better human alignment even at
the cost of some loss in accuracy -- e.g. when the model is helping
experts. This is a different trade-off, namely, the alignment-accuracy
trade-off.
::: {#tab:calm6}
| Method                    | CUB  | OpenImages | ImageNet |
|---------------------------|------|------------|----------|
| Baseline (ResNet-50)      | 70.6 | 72.1       | 74.5     |
| $\text{CALM}_\text{EM}$   | 71.8 | 70.1       | 70.4     |
| $\text{CALM}_\text{ML}$   | 59.6 | 70.9       | 70.6     |

: Classification accuracies of the Baseline (ResNet-50), CALM ML and
CALM EM, taken from [@kim2021keep]. Both formulations of CALM result
in decreased accuracy in most situations. These can also be quite
severe: The accuracy of $\text{CALM}_\text{ML}$ is more than 10% less
than that of the baseline on CUB. However, there are also some
situations where CALM can increase accuracy: On CUB,
$\text{CALM}_\text{EM}$ improves upon the baseline.
:::
::: information
Layer Norm [Layer
Norm](https://arxiv.org/abs/1607.06450) [@https://doi.org/10.48550/arxiv.1607.06450]
is a normalization technique that normalizes the mean and variance
calculated across the feature dimension, independently for each element
in the batch. On the other hand, Batch Norm calculates the mean and
variance statistics across all elements in the batch. Layer Norm is
widely used in
Transformer-based [@https://doi.org/10.48550/arxiv.1706.03762]
architectures.
:::
::: information
Modified Backpropagation Variants This book does not mention some
attribution methods: the class of modified backpropagation variants.
There are many such methods:
- LRP -- layer-wise relevance
propagation [@https://doi.org/10.48550/arxiv.1604.00825]
- DeepLIFT [@https://doi.org/10.48550/arxiv.1704.02685]
- DeepSHAP
- GuidedBP [@https://doi.org/10.48550/arxiv.1412.6806]
- ExcitationBP [@https://doi.org/10.48550/arxiv.1608.00507]
- xxBP
- ...
They are all based on some form of modification to backpropagation. They
modify the gradients and do the backpropagation with some broken
gradients. Eventually, we get the attribution in some intermediate
feature layer or the input space. Vanilla backprop propagates gradients
all the way back to the weights. However, gradients are (1) very local
and (2) sometimes the function value is more important than the local
variations. For example, when a function is constant, it is still
contributing something to the next layer.
We do not deal with them in this book for the following reasons:
- It seems complicated to explain what the explanation shows.
- For new types of DNN layers, one needs to develop a new recipe for
modified backpropagation. For example, for Transformers, we
sometimes need to skip the layer norms with a straight-through
estimator (seen in obfuscated gradients). There is no good intuition
yet for how to modify backpropagation correctly across layer norms.
- Results depend on the implementation of the DNN. The attributions we
obtain can differ for mathematically equivalent networks with
different implementations. For example, we can consider two linear
layers without non-linearity in between. For fixed, already trained
weights, we (1) multiply them together beforehand, use the resulting
matrix in a single linear layer, or (2) keep them separate for
modified backprop. The results will differ because these methods
modify backprop, and the separate modules are different in the two
cases. This is a severe issue: we do not have uniqueness of the
attribution score. (A small numerical sketch after this information box
illustrates the effect.)
- Caveat: They still show good soundness results, especially for
Transformer architectures. We should have more understanding of why
or how they work.
:::
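To make the implementation-dependence concrete, here is a small numerical sketch using the LRP $\varepsilon$-rule (one of the listed modified-backpropagation methods) on two mathematically equivalent networks: two stacked bias-free linear layers versus the single pre-multiplied layer. The weights and inputs are made up; the only point is that the two attributions differ.

```python
import numpy as np

def lrp_eps_linear(x, W, relevance_out, eps=0.1):
    """One LRP epsilon-rule step through a bias-free linear layer y = W @ x.

    Output relevance is redistributed to the inputs proportionally to the
    contributions z[j, i] = W[j, i] * x[i], with an epsilon-stabilized denominator.
    """
    z = W * x[None, :]                                        # (out, in) contribution matrix
    denom = z.sum(axis=1) + eps * np.sign(z.sum(axis=1))
    return (z / denom[:, None] * relevance_out[:, None]).sum(axis=0)

x  = np.array([1.0, 2.0])
W1 = np.array([[1.0, 1.0], [1.0, -1.0]])
W2 = np.array([[1.0, 1.0]])
R_out = W2 @ (W1 @ x)                                         # output relevance = the output value

# (1) Keep the two layers separate: propagate through W2, then W1.
R_hidden   = lrp_eps_linear(W1 @ x, W2, R_out)
R_separate = lrp_eps_linear(x, W1, R_hidden)

# (2) Mathematically identical single layer with pre-multiplied weights.
R_merged = lrp_eps_linear(x, W2 @ W1, R_out)

print(R_separate, R_merged)   # different attributions for identical functions
```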
### Summary of Test Input Attribution Methods
Linear models provide nice contrastive explanations. Therefore, we
explored ways to linearize complex models (DNNs).
Local linearization around input $x$ for the entire function $f$ is
employed by, e.g., input gradients, SmoothGrad, Integrated Gradients,
LIME, and SHAP. The caveat is that it is hard to choose the right way to
encode "no information". If we perform global linearization for an
entire function, we just obtain a linear function that is interpretable
by design but not globally sound.
We also discussed the diversity of features for contrastive
explanations. One may use pixels, superpixels, instance segments,
concepts (that are high-level, i.e., they cannot be represented as an
aggregation of pixels), or feature-map pixels (that are aggregations of
receptive field pixels with highly non-linear transformations).
For Transformers, we have no guarantee that each feature location reflects
the influence of the corresponding input token/image patch, so we cannot
expect CAM to work. (Strictly speaking, we do not even have such guarantees
for ResNets because the receptive field does not coincide with simply
upscaling the feature map.)
Attribution methods come with various pros and cons (depending on the
method), and none of them is perfect.
## Explanations Linearize Models in Some Way {#ssec:linearization}
As we have seen, the attributions are often based on some form of
linearization of the original complex function (the DNN). This is
because sparse linear models are already intuitive for humans. Let us
give an overview of previously introduced methods and discuss what
linearizations they employ.
### Input Gradient
Taylor's theorem tells us that for a differentiable function
$f: \nR^d \rightarrow \nR$,
$$f(y) = f(x) + \langle y - x, \nabla_x f(x) \rangle + o(y - x);$$ thus,
for very small perturbations around the input $x$, our function is
approximately linear. Taking the first-order Taylor approximation means
finding the tangent plane of $f$ at input $x$.
**Note**: Out of the methods in this overview, only the input gradient
linearizes the entire model around the input $x$. In the subsequent cases, we
will observe linearization in either the *attributions*, the
*discretized versions* of the input, or linearization of parts of the
network. The only commonality is that the models are analyzed with some
form of a linear model, but not necessarily a linear model *in $x$*. The
point we make here is that, despite clever ways to formulate the
attributions, none of the discussed methods could eventually avoid
borrowing the immediate intuitiveness and interpretability of linear
models. It would be an interesting research objective to try to
formalize this intuition and show that any reasonable XAI method is
inherently linear. (For an example of a weaker result, any method that
satisfies the completeness axiom is inherently linearizing the
predictions in the attributions.)
### Integrated Gradients {#integrated-gradients}
This method also turns our model into a linear model for an input $x$
around the baseline $x^0$. The linearity is not in the input $x$ but rather in
the attributions: $$f(x) = f(x^0) + \sum_i a_i(f, x, x^0),$$ where
$$a_i(f, x, x^0) = (x_i - x^0_i)\left\langle e_i, \int_{0}^1 \nabla_x f(x^0 + \alpha(x - x^0))d\alpha\right\rangle.$$
The prediction for the input $x$ equals the prediction for the baseline
image $x^0$ plus the sum of the contribution of each pixel.
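As a quick numerical check of this completeness property, consider the following sketch with a toy function and a Riemann-sum approximation of the path integral (all names and values are illustrative):

```python
import numpy as np

def integrated_gradients(f_grad, x, x0, steps=256):
    """Riemann-sum approximation of the attributions a_i(f, x, x0)."""
    alphas = (np.arange(steps) + 0.5) / steps                  # midpoints in (0, 1)
    grads = np.stack([f_grad(x0 + a * (x - x0)) for a in alphas])
    return (x - x0) * grads.mean(axis=0)

f      = lambda x: x[0] * x[1]                                 # toy model
f_grad = lambda x: np.array([x[1], x[0]])                      # its analytic gradient

x, x0 = np.array([2.0, 3.0]), np.zeros(2)
a = integrated_gradients(f_grad, x, x0)
print(a, a.sum(), f(x) - f(x0))   # the attributions sum (approximately) to f(x) - f(x0)
```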
### LIME
LIME makes an obvious, explicit linearization by approximating our
possibly highly non-linear model $f: \nR^d \rightarrow \nR$ by a sparse
linear model locally: $$g(z') = w_g^\top z'.$$
### SHAP
SHAP also turns our model into a linear model for a binary input $x$
around the baseline $0$, in terms of the attribution values:
$$f(x) = f(0) + \sum_i \phi_{f, x}(i).$$ The prediction for the input
$x$ equals the prediction for the baseline plus the sum of the
contribution of each feature (e.g., superpixel). By turning on/off our
features, we regulate whether we include a contribution in the final
prediction, so in this sense, this is a linear approximation of our
model.
### TCAV
In TCAV, we have $$f(x) = h(z) = h(g(x)),$$ which is illustrated in
Figure [3.29](#fig:tcavlinear){reference-type="ref"
reference="fig:tcavlinear"}.
![High-level overview of TCAV. We linearize the second part, $h(z)$, around
the intermediate representation $z = g(x)$, making it a *local* (linear)
approximation of the second part of the
network.](gfx/03_tcavlinear.pdf){#fig:tcavlinear width="0.7\\linewidth"}
We take the gradient of the output with respect to the intermediate layer
$$\nabla_z h(z) \big|_{z = g(x)},$$ thereby linearizing the second part
of our network locally, around $g(x)$.
### Different Types of Linearization
We enumerated methods using linearization for explanations. Let us now
see what categories of linearization we can establish.
#### (1) Locally linear around $x$, completely linear for $f$.
This scenario is illustrated in
Figure [3.30](#fig:linearization1){reference-type="ref"
reference="fig:linearization1"}. Examples include Input Gradient, LIME,
Integrated Gradients, and SHAP (*even though some of them employ global
perturbations*). Here, the entire model is linearized, but only locally.
It is quite simple to achieve: Around $x$, we can explain everything
nicely and interpretably.
![Local linearization around $x$ and global linearization in $f$. This
approach is used in the Input Gradient, LIME, Integrated Gradients, and
SHAP methods.](gfx/03_linearization1.pdf){#fig:linearization1
width="0.7\\linewidth"}
#### (2) Globally linear over $g$-space, partially linear for $f$.
This scenario is shown in
Figure [3.31](#fig:linearization2){reference-type="ref"
reference="fig:linearization2"}. Examples include CAM and CALM.
CAM's/CALM's second part is already linear, therefore, there is no need
to linearize it. Instead of explaining everything in terms of the input
features, we are getting help from the interpretable intermediate
features. The second part of the network is naturally interpretable.
Therefore, we explain in terms of interpretable features without
approximations, under some assumptions.
![Global linearization over $g$-space, partial linearization for $f$.
Examples of methods employing this strategy include CAM and
CALM.](gfx/03_linearization2.pdf){#fig:linearization2
width="0.9\\linewidth"}
#### (3) Locally linear around $g(x)$, partially linear for $f$.
This category is illustrated in
Figure [3.32](#fig:linearization3){reference-type="ref"
reference="fig:linearization3"}. Examples include TCAV ($S_{C, k, l}$)
and Grad-CAM. TCAV takes all $x$ into account that correspond to some
label in the TCAV score (quite global explanations), but it linearizes
the second part of the network (partial linearization) for each $x$
separately. Whether we perform partial or total *linearization* does
*not* depend on whether the method gives local or global *explanations*.
In Grad-CAM, we also only linearize the second part of the network (for
a single input $x$).
Instead of explaining everything in terms of the input features, we are
getting help from the interpretable features. We are only approximating
the second part of our network through gradients. Therefore, we explain
in terms of interpretable features but *with approximations*.
![Local linearization around $g(x)$ and partial linearization for $f$.
TCAV and Grad-CAM follow this
approach.](gfx/03_linearization3.pdf){#fig:linearization3
width="0.9\\linewidth"}
## Evaluation of Explainability Methods
First, let us discuss why we even need empirical evaluation.
### Why do we need empirical evaluation?
The fundamental limitation of explainability is the
soundness-explainability trade-off: Our explanation cannot be fully
sound and fully explainable. We consider two extremes. The original
model is too complex for humans to understand. This is the reason why we
needed a separate explanation in the first place. We need
simplifications to make humans understand. A global linear approximation
makes our model interpretable again, but the model is not the same as
before. The soundness of the explanations suffers a lot.
If we look at different XAI methods, they are in the trade-off frontier
between soundness and explainability. One cannot say that some
explanation method is conceptually perfect by design. Eventually, what
matters is *whether the method is serving our need* (the end goal). For
that, we need empirical evaluation. We need ways to quantify different
aspects of explanations in numbers.
### Types of Empirical Evaluation
Doshi-Velez and Kim [@https://doi.org/10.48550/arxiv.1702.08608]
distinguish three types of empirical evaluation:
*functionally-grounded*, *human-grounded*, and *application-grounded*
evaluation (Figure [3.33](#fig:doshivelez){reference-type="ref"
reference="fig:doshivelez"}). Let us briefly discuss each of these
standard evaluation practices below.
![Comparison of three types of evaluation methods. As we go from bottom
to top, the methods become more aligned with human needs but also become
more expensive to carry out. Figure taken
from [@https://doi.org/10.48550/arxiv.1702.08608].](gfx/04_doshivelez.pdf){#fig:doshivelez
width="0.5\\linewidth"}
#### Functionally-Grounded Evaluation
Functionally-grounded evaluation uses *proxy tasks* to evaluate
explanations. Here, no human subjects are required for the evaluation,
making this type of evaluation appealing from a time and cost point of
view. However, as explainability is necessarily human-grounded, such
evaluations should only be considered in addition to human-grounded
studies. **Example**: One linear model might be more sparse than
another, signaling better
human-interpretability [@https://doi.org/10.48550/arxiv.1702.08608].
Sparsity can be evaluated without the involvement of humans.
#### Human-Grounded Evaluation
Human-grounded evaluation considers human subjects but conducts *simple*
experiments. This is desired when one wants to evaluate general aspects
of the explanation that do not require domain expertise. No specific end
goal is considered in such evaluation tasks, but they can still be used
to judge general characteristics of explanations. **Example**: Human
subjects are presented with explanation pairs and are asked to choose
the 'better' one [@https://doi.org/10.48550/arxiv.1702.08608].
#### Application-Grounded Evaluation
In application-grounded evaluation, it is measured how well an
explanation method helps *humans* when considering *real*
applications/problems. The helpfulness of an explanation method can be
quantified by how much it increases human performance on a certain real
task. This is the evaluation type that is most aligned with the *human
aspect* of explanations, but it is also the most expensive to carry out.
**Example**: A computer programmer is evaluated based on how well they
can fix their code after being given an explanation.
### Soundness Evaluation Techniques {#sssec:eval}
As discussed in Section [3.3.1](#ssec:good){reference-type="ref"
reference="ssec:good"}, soundness (also referred to as faithfulness or
correctness of our explanation) is arguably one of the most important
and possibly the most widely used criterion. A sound explanation must
identify the true cause(s) for an event. Currently, this seems to be the
primary focus of XAI evaluation, but it is also *not* the only criterion
for a good explanation. This is crucial to keep in mind.
::: definition
Confirmation Bias Confirmation bias is confirming the performance of our
explanation method against what humans think would be the proper
attribution instead of investigating further whether the model was
actually basing its prediction on these causes.
:::
For measuring soundness, much previous research relied on qualitative
evaluation of (potentially cherry-picked) examples. Consider the
integrated gradients paper referring to the attribution maps shown in
Figure [\[fig:integrated\]](#fig:integrated){reference-type="ref"
reference="fig:integrated"}:
::: center
"Notice that integrated gradients are better \[than input gradients\] at
reflecting distinctive features of the input image \[for the
prediction\]." [@sundararajan2017axiomatic]
:::
Can we really conclude that for the images provided? Maybe the
integrated gradients method *delineates* the objects better than input
gradients, but does that mean they reflect distinctive features for the
model's predictions (i.e., what the model is looking at) better? That is
an entirely different question.[^42] Another claim from the paper:
::: center
"We observed that the results make intuitive sense. E.g., 'und' is
mostly attributed to 'and', and 'morgen' is mostly attributed to
'morning'." [@sundararajan2017axiomatic]
:::
To humans, this makes perfect sense. However, what if the model looked
at a different feature for predicting these words? We argue that this is
a case of confirmation bias. If we keep relying on human intuition to
measure/evaluate explainability, how could we detect models that rely on
new knowledge humans have not learned before? This point of view
prohibits us from *learning* from models. Another example from CAM
(referring to a bunch of visualizations of different methods, shown in
Figure [3.34](#fig:camvis){reference-type="ref"
reference="fig:camvis"}):
::: center
"We observe that our CAM approach significantly outperforms the
backpropagation approach
\[\...\]" [@https://doi.org/10.48550/arxiv.1512.04150]
:::
What do they exactly mean by outperforming? When is an explanation
method doing better? Can we really conclude this?[^43] This is likely
another case of confirmation bias. We can see that qualitative
evaluations of soundness are susceptible to confirmation bias. This is
made even more severe by the fact that *no GT explanation exists in
general*.
!["a) Examples of localization from GoogleNet-GAP. b) Comparison of the
localization from GoogleNet-GAP (upper two) and the backpropagation using
AlexNet (lower two). The ground-truth boxes are in green, and the
predicted bounding boxes from the class activation map are in
red." [@https://doi.org/10.48550/arxiv.1512.04150] The authors conclude
that "our CAM approach significantly outperforms the backpropagation
approach." What they exactly mean by outperforming is not disclosed. In
particular, it is questionable if any form of 'outperforming' can be
concluded by observing these results. Figure taken
from [@https://doi.org/10.48550/arxiv.1512.04150].](gfx/03_imagenet_localization.pdf){#fig:camvis
width="0.9\\linewidth"}
#### Does localization evaluation make sense for soundness?
For the quantitative evaluation of CAM, the authors measure the number
of times their attribution score map corresponds to the object bounding
box. They segment regions whose CAM value is above 20% of the maximum
CAM value. Then they take the tightest bounding box that covers the
largest connected component in the segmentation map. Finally, they
measure the IoU between this box and the GT object box of the class of
choice. When $\text{IoU} \ge 50\%$, they consider it a success. They
measure the success rate on "the" ImageNet validation set. They find
that the CAM variants perform better than the backprop variants.
**Setting**: Explanation method $A$ finds GT object boxes better than
explanation method $B$. Does this mean that explanation method $A$ is
working better than $B$? *We do not think so.* The model may have been
looking at a non-object region to make the prediction. If that were the
case, the explanation method with a lower localization score might
explain the model better.
**Takeaway**: We should not evaluate according to our expectations when
evaluating explanation methods.
#### How to interpret unintuitive explanations?
Suppose we have a case when the provided explanation differs greatly
from our expectations. Does that mean that the explanation method failed
while the model was working fine (it was looking at the right thing), or
did the explanation method correctly expose a bug in our model (or in
the data), like spurious correlation? There is no way to tell these two
scenarios apart from a single visual inspection.
#### Typical pitfalls of soundness evaluation
Soundness aims to evaluate the following: Does the score map $s(f, x)$
represent the true causes for $f$ to predict $f(x)$? The true
explanation depends on both the input $x$ and the model $f$. We can also
calculate the attributions for the GT class $y$ or any other class. This
is usually done less in practice. Explaining something that has already
happened (e.g., $f$ predicted $f(x)$ for $x$) makes more sense than
explaining hypothetical situations. In this case, the true explanation
depends on $x, f,$ and $y$. The problem with qualitative evaluation is
that humans also cannot tell what the cues were that $f$ looked at to
predict a certain class. We are only looking at $x$ and $y$ to make the
evaluation, not $f$. This seems wrong *by design*.
The problem with localization evaluation is that if we compare to GT
localization, we also do not take the model $f$ into account, only $x$
and $y$. This also seems wrong by design. We know already that models do
not always look at foreground cues to predict classes.
The fundamental issue with evaluating soundness is that there is no GT
explanation in general.[^44] Humans cannot provide GT explanations. This
is precisely the reason we are developing an explanation technique in
the first place. If there were a GT explanation for a model, then that
itself would be a good explanation, and there would be no need to study
what a good explanation is and evaluate explanations. We should start
from somewhere, but it is hard. We are facing a chicken-egg problem.
### Evaluation of Soundness of Explanations based on Necessary Conditions
There is a trick that people consider to test the soundness of
explanation methods. We define a few criteria that a successful
explanation method must satisfy.[^45]
**Example**: The explanation $s(f, x)$ must not contain *any*
information if $f$ is not a trained model (i.e., it is randomly
initialized). The intuition is, "How could any explanation contain any
interesting information for an untrained model?" Otherwise, our
explanation is rather trying to please human qualitative evaluations by
producing plausible explanations. Interestingly, a randomly initialized
CNN achieves a better score than random guessing with a trained linear
layer on top because of inductive biases. On ImageNet-1K, one can
achieve $4\%$
accuracy [@https://doi.org/10.48550/arxiv.2106.05963].[^46] It seems
that this is probably too strong a necessary condition. There can be
some information in the score map (Why not? We do not fully know the
behavior of a randomly initialized model.), but the main point is that
the score map should strongly depend on function $f$. The explanation
does not have to be *informationless* when a model is randomly
initialized. If it turns out that the score map is independent of the
model altogether, then something is wrong.
A relaxed version of the above is that when the model changes (becomes
gradually randomly initialized from a trained model), we should also see
notable changes in the attribution map.
### Sanity Checks for Saliency Maps
![Results of various explainability methods on the cascading model
parameter randomization sanity check. This sanity check is passed by
saliency maps, SmoothGrad, and Grad-CAM. Details are discussed in the
text. Figure taken
from [@https://doi.org/10.48550/arxiv.1810.03292].](gfx/03_sanity.pdf){#fig:results
width="\\linewidth"}
Let us discuss the paper "[Sanity Checks for Saliency
Maps](https://arxiv.org/abs/1810.03292)" [@https://doi.org/10.48550/arxiv.1810.03292]
where the authors benchmarked various explainability methods on sanity
check tasks. We highlight two of these:
1. **Cascading randomization.** As we randomize the network's weights
(starting from the latest layers and going toward the input layer),
we should see notable changes in the explanations XAI methods give.
2. **Label randomization.** For models trained with randomized labels
(that should not learn anything meaningful), XAI methods should not
highlight parts of the input that are discriminative for the
*original task* (without label randomization). They should return
irrelevant attribution maps.
#### Cascading randomization
Let us discuss Figure [3.35](#fig:results){reference-type="ref"
reference="fig:results"} showcasing *cascading normalization*. Saliency
maps (called 'gradient' in the Figure) exhibit large changes in the
attribution map. SmoothGrad is also heavily influenced by randomization:
it "passes the check." Curiously, the image does not become complete
noise from the initially clear attribution map, rather, it becomes a
noisy edge detector.) For Gradient $\odot$ Input, the outline of the
bird is always visible: the changes are not so large.
Guided Backpropagation [@https://doi.org/10.48550/arxiv.1412.6806] shows
a similar attribution map all the way, even after a global change of the
model. It only becomes noisier, the edges are clear all the way. It
seems like it does not take model $f$ into account that much. For Guided
Backpropagation, the authors were selling the fact that they get very
nice visualizations of
objects [@https://doi.org/10.48550/arxiv.1412.6806]. This is true, but
it does not reflect well what the model is doing. It is close to being
an edge detector, but at the time of publication, it looked like
ground-breaking technology. No one had done it before, and the results
looked as if the classifier had all the knowledge about where objects
are. Even though it looked promising at the time, people have since
realized that it does not work: it does not contain enough information
about the model. Let us give a rough outline of the Guided
Backpropagation method. With ReLU activations, vanilla backpropagation
blocks the gradient wherever the pre-activation is negative and lets it
through wherever the pre-activation is positive. Guided Backpropagation
additionally blocks the gradient wherever the *gradient itself* is negative
(like applying a ReLU to the gradients). This results in an AND condition:
the gradient passes only if both the incoming gradient and the
pre-activation are positive. There is no principled justification for why
this should work -- and, apparently, it does not.
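In code, the modified backward pass amounts to one extra mask. The following PyTorch sketch is our illustration of the rule described above:

```python
import torch

class GuidedReLU(torch.autograd.Function):
    """ReLU whose backward pass additionally zeroes out negative incoming gradients."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # AND condition: the gradient passes only where both the pre-activation
        # and the incoming gradient are positive.
        return torch.where((x > 0) & (grad_output > 0),
                           grad_output, torch.zeros_like(grad_output))

# Usage: replace torch.relu(x) by GuidedReLU.apply(x) in the model before backpropagating.
```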
Continuing with previously discussed methods, Grad-CAM showcases large
changes in the attribution map as well. Regarding Guided Grad-CAM, we
could give the same remarks as for Guided Backpropagation. This is a
multiplication of the guided backpropagation score map and the CAM score
map. It is natural that the method inherits lots of issues from
guidance. Integrated Gradients gives very similar results to Gradient
$\odot$ Input. Attributions change only slightly -- definitely not as
radically as for, e.g., SmoothGrad. Integrated Gradients-SG is very
similar to Integrated Gradients, maybe even a bit worse.
**Note**: Earlier versions
of [@https://doi.org/10.48550/arxiv.1810.03292] give significantly
different results (even more extreme).
This sanity check was set up as a necessary condition: Any explanation
method (even the simplest, most naive ones) should satisfy it. If they
do not, the method is unusable. It is the bare minimum requirement an
explanation method has to satisfy. Nevertheless, some methods already
fail to pass this simple test. Namely, Guided BP and Guided Grad-CAM are
essentially edge detectors. Gradient $\odot$ Input and Integrated
Gradients are also not so convincing.
![Results of various explainability methods on the data randomization
(randomized labels) sanity check. This check is also passed by saliency
maps, SmoothGrad, and Grad-CAM. Details are discussed in the text.
Figure taken
from [@https://doi.org/10.48550/arxiv.1810.03292].](gfx/03_sanity2.pdf){#fig:sanity2
width="0.8\\linewidth"}
#### Label randomization
We discuss the other aforementioned sanity check the paper considered,
which uses *random labels* and is named the 'data randomization test'. The results
can be seen in Figure [3.36](#fig:sanity2){reference-type="ref"
reference="fig:sanity2"}. One can also compare explanations for two
models trained on MNIST with true (original) labels or random labels
(control group). Random-label-trained models should return explanations
without information. With Guided Backpropagation, we can clearly see the
shape of 0 for random labels; it seems to give edge detection regardless
of the label used for training. Guided Grad-CAM also fails the test
again. Methods depending on pixel values tend to show a "0" shape even
for random label models: Integrated Gradients[^47], Integrated
Gradients-SG, and Gradient $\odot$ Input all showcase the same problem.
Gradient, SmoothGrad, and Grad-CAM look more random: We say they pass
the test.
This shows that any method trying to multiply the input onto the score
map is strange. Even in such cases where we should not attribute to any
meaningful pixels, we see patterns in the map dependent on just the raw
input image. Notice how this seemingly simple sanity check already
conflicts with the theoretically justified completeness axioms.
![Spearman rank correlation barplot (without absolute values) of various
explainability methods for an MLP. Grad-CAM gives convincing results.
Details are discussed in the text. Figure taken
from [@https://doi.org/10.48550/arxiv.1810.03292].](gfx/03_sanity3.pdf){#fig:sanity3
width="0.7\\linewidth"}
#### Quantitative results: rank correlation
We also briefly discuss a correlation plot in the paper, shown in
Figure [3.37](#fig:sanity3){reference-type="ref"
reference="fig:sanity3"}. How much correlation can we see between the
upper and bottom rows for each method in
Figure [3.36](#fig:sanity2){reference-type="ref"
reference="fig:sanity2"}? Grad-CAM has a rank correlation of almost 0
for pixels on average. It satisfies the overall necessary condition the
best -- no correlation in attribution ranking for true/random labels. It
does not seem to show any correlation between the explanation for the
model trained with true labels vs. the model trained with random labels.
Grad-CAM and SmoothGrad are still generally perceived as being among the best
explanation methods.
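The metric itself is straightforward to compute; the following SciPy sketch uses random placeholder maps in place of real attributions:

```python
import numpy as np
from scipy.stats import spearmanr

# Attribution maps for the same image from a model trained with true labels and
# from a model trained with random labels (placeholders here).
s_true_labels   = np.random.rand(28, 28)
s_random_labels = np.random.rand(28, 28)

rho, _ = spearmanr(s_true_labels.ravel(), s_random_labels.ravel())
print(rho)   # a value near 0 means the attributions depend on what the model learned
```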
We have seen that there is no GT explanation in general, and we are
facing a chicken-egg problem. However, if we are a bit creative, we can
simulate samples where the GT explanation actually exists (to some high
extent). We know with very high confidence where the model should be
looking for these images. According to the attribution map, we then
check whether it is actually looking at the "GT part" of the image.
### Simulation of Inputs with GT Explanations
We discuss a possible way to simulate inputs with GT explanations from
the paper "[Interpretability Beyond Feature Attribution: Quantitative
Testing with Concept Activation Vectors (TCAV)
](https://arxiv.org/abs/1711.11279)" [@https://doi.org/10.48550/arxiv.1711.11279].
We define three classes: zebra, cab, and cucumber.
![Samples from the dataset with "GT attributions" introduced
in [@https://doi.org/10.48550/arxiv.1711.11279]. The image label is
included in the image with a noise parameter that controls the
probability that the label is correct. Figure taken
from [@https://doi.org/10.48550/arxiv.1711.11279].](gfx/03_tcavex.jpg){#fig:tcavex
width="0.8\\linewidth"}
We provide potentially noisy captions written in the bottom left corner
of the image. We have a controllable noise parameter $p \in [0, 1]$ to
control the impact of the captions. In detail, $p$ is the probability
that the caption disagrees with the image content. $p = 0$ means there
is no disagreement: an image of a cucumber would always have the caption
'cucumber'. For $p = 0.5$, each image has a 50% chance of the caption
and the image content disagreeing. Examples are given in
Figure [3.38](#fig:tcavex){reference-type="ref" reference="fig:tcavex"}.
We have a feature selection problem (look at caption vs. image), but for
low noise levels, the caption is a very prominent feature the model
cannot resist looking at (refer to *simplicity bias* in
[2.9](#ssec:simplicity){reference-type="ref"
reference="ssec:simplicity"}). We will measure if the attribution
methods are correctly picking that up. When the noise level is high, the
model cannot rely on the captions at all. Thus, we will measure if the
attribution methods are correctly **not** attributing the predictions to
the label.
#### GT Attribution Results
![Results of various XAI methods on the dataset with "GT attributions"
introduced in [@https://doi.org/10.48550/arxiv.1711.11279]. The 'cab'
caption has to be a strong cue for recognition if $p = 0$. Conversely,
it has to be a weak cue for recognition if $p = 1$. The test input image
contains the *correct* caption. For the model trained on images with
captions and 0% noise, we expect the attribution to be wholly focused on
the caption. It seems like (without quantification) SmoothGrad is doing
that most prominently, at least from how they show it. However, the
gradient-based explanations are not well-calibrated (not ideal).
Depending on how we renormalize the map, we may also get such a strong
attribution to the caption for the other gradient maps. We cannot fully
trust these kinds of score maps. Figure taken
from [@https://doi.org/10.48550/arxiv.1711.11279].
](gfx/03_tcavres2.jpg){#fig:tcavres2 width="0.8\\linewidth"}
Results are shown in Figure [3.39](#fig:tcavres2){reference-type="ref"
reference="fig:tcavres2"}. Based on these results, SmoothGrad seems to
be a great explainability method.
### Remove-and-Classify/Remove-and-Predict {#sssec:rac}
::: definition
Remove-and-Classify/Remove-and-Predict The Remove-and-Classify algorithm
is a prevalent soundness evaluation method for feature attribution
scores. Attribution scores define a ranking over features: the feature
attribution explanation $s(f, x) \in [0, 1]^{H \times W}$ ranks each
feature in the input $x$. Remove-and-Classify removes features from the
test input(s) iteratively, according to the attribution ranking of the
explainability method. In the most popular variant, where the feature
with the highest attribution score (most important) is removed first,
the explainability method with the steepest drop in classification
accuracy performs best. One usually calculates the Area under the Curve
(AUC) to compare explanation methods. The features might be removed one
by one or in a batched manner.
There are several variants of the Remove-and-Classify method. Compared
to the variant introduced above, one might...
1. ...remove the features with the lowest attribution score (least
important) first. In this case, the explainability method with the
shallowest drop in accuracy performs best.
2. ...start from the base image and introduce features one by one (or
in a batched way) according to the ranking of the explainability
method -- either increasing or decreasing attribution score.
Sometimes, people take the average performance on these four possible
benchmark combinations.
There is no "correct" choice of encoding missingness. One must be
particularly careful not to introduce *missingness bias*
([3.7.8](#sssec:missingness_bias){reference-type="ref"
reference="sssec:missingness_bias"}). The most prevalent removal
technique for natural images and pixels as features is replacing the
pixel with the mean pixel value(s) in the dataset, which is usually gray
for natural images.
**Note**: Just like in counterfactual explanation methods, grayness can
still convey information -- it can be problematic to consider this the
base value. The quality of this choice also depends on our task -- e.g.,
what if our task is to detect all gray boxes? Even though the signed
distance of all data points to the mean (usually gray) image is zero on
average, and the images are scattered around the mean image, it does not
mean that individual gray pixels cannot contribute to a model's
decision. They can be grouped into arbitrary shapes that have semantic
meaning, even though a completely gray image might not convey much
semantic information. Sticking to *any* color has a potential pitfall.
:::
A feature attribution explanation gives a "heat map" of the given input.
Suppose $s(f, x)$ is sound and correctly cites the causes for the
prediction (in the correct order of importance). In that case, removing
the most critical feature $i^* = \argmax_i s_i(f, x)$ will significantly
decrease the score $f(x)$ for the class in question. We measure the
speed of decrease in classification accuracy as we remove pixels in the
order dictated by $s(f, x)$.
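Below is a minimal sketch of the most popular variant: remove the highest-attributed pixels first, replace them with the dataset mean, track classification accuracy, and summarize the curve with its AUC. The `predict` interface, array shapes, and number of removal steps are assumptions for illustration.

```python
# Minimal sketch of Remove-and-Classify (assumptions: `predict(images)` returns class
# predictions for a batch, `attributions` has shape (N, H, W), and pixels are "removed"
# by overwriting them with the dataset-mean pixel value).
import numpy as np

def remove_and_classify(predict, images, labels, attributions, mean_pixel, steps=10):
    N, H, W = attributions.shape
    order = np.argsort(-attributions.reshape(N, -1), axis=1)   # most important first
    accuracies = []
    for k in np.linspace(0, H * W, steps, dtype=int):
        perturbed = images.copy()
        for n in range(N):
            ys, xs = np.unravel_index(order[n, :k], (H, W))
            perturbed[n, ys, xs] = mean_pixel                   # encode "missingness"
        accuracies.append(np.mean(predict(perturbed) == labels))
    # Lower AUC = faster accuracy drop = (by this metric) a more sound attribution method.
    return np.array(accuracies), np.trapz(accuracies, dx=1.0 / (steps - 1))
```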
We now discuss the result of remove-and-classify shown in
Figure [3.28](#fig:rac){reference-type="ref" reference="fig:rac"} that
was reported in the CALM paper. The baseline is random erasing with
equal probabilities. If we remove pixels, we kill information,[^48] so
we should still see a drop in accuracy, just not as fast. The used
metric is the relative drop in accuracy when erasing according to an
attribution method, compared to random erasing. A method is *better* if
it results in a faster drop in accuracy. Methods corresponding to curves
enveloping others from below are supposed to be more sound explanation
methods. Sometimes, we also measure the AUC for this plot, where lower
is better. One can also consider the unnormalized plot, where we do not
compare against a random baseline.
Our observation is that CALM gives a sound explanation. It gives a huge
drop in accuracy for pixels with high attribution scores. For
SmoothGrad, the pixels attributed to being most important were not the
most important ones, as we see a smaller drop.
One might ponder why most of the methods get worse than random erasing
for larger values of $k$. Filling in gray/black pixels is not the best
way to kill information. It can also *introduce* information. We address
this in Section [3.7.8](#sssec:missingness_bias){reference-type="ref"
reference="sssec:missingness_bias"}.
### Missingness Bias {#sssec:missingness_bias}
A recent phenomenon named *missingness bias* was reported in a recent
paper titled "[Missingness Bias in Model
Debugging](https://arxiv.org/abs/2204.08945)" [@https://doi.org/10.48550/arxiv.2204.08945].
Figure [3.40](#fig:missingness){reference-type="ref"
reference="fig:missingness"} aims to provide some insights. There is no
common understanding of what the SotA for erasing information is. We
argue that inpainting and blurring are good candidates. However, the
choice of the inpainter and the exact blurring method are both
hyperparameters that have important implications and might raise new
problems. It is hard to explain what exactly is happening and what might
be confusing textures for different architectures. It is, however,
important to be aware of missingness bias and encode missing information
in a suitable way.
![Illustration of the missingness bias. "Given an image of a flatworm,
we remove various regions of the original image. Irrespective of what
subregions of the image are removed (least salient, most salient, or
random), a ResNet-50 outputs the wrong class (crossword, jigsaw puzzle,
cliff dwelling). A closer look at the randomly masked image shows that
the predicted class (crossword puzzle) is not totally unreasonable,
given the masking pattern. The model seems to rely on the masking
pattern to make the prediction rather than the image's remaining
(unmasked) portions. Conversely, the ViT-S either maintains its original
prediction or predicts a reasonable label given remaining image
subregions." [@https://doi.org/10.48550/arxiv.2204.08945] Figure taken
from [@https://doi.org/10.48550/arxiv.2204.08945]. Replacing pixels with
mean values (or any fixed value) does not necessarily remove
information. It may add further information (crossword) or kill
unnecessary information. We also see that Transformers suffer a lot less
from this phenomenon. Thus, depending on different models, the
attribution methods might see different success rates.
Remove-and-classify is not the perfect soundness evaluation metric.
However, it is the most popular and one of the best ways to evaluate
soundness.](gfx/03_missingness.pdf){#fig:missingness
width="\\linewidth"}
We now address the curious behavior in
Figure [3.28](#fig:rac){reference-type="ref" reference="fig:rac"}. When
we only remove information by erasing pixels, we should see the random
baseline as the worst-case removing strategy (in expectation). We see
the jump above the baseline in
Figure [3.28](#fig:rac){reference-type="ref" reference="fig:rac"}
because the model can predict based on the "removed" patterns for random
removal, which can introduce greater changes in classification than
removing pixels in an orderly fashion. Thus, random removal might add
information that confuses the model more. If we erase according to CAM,
we will see something like the right of
Figure [3.40](#fig:missingness){reference-type="ref"
reference="fig:missingness"}.
## Soundness is Not The End of the Story
There are many other criteria, like soundness, simplicity, generality,
contrastivity, socialness, and interactivity, but also *utility toward the
end goal*. The latter depends on the final goal: Is it to debug? Is it to
understand? Is it to gain trust? This is an essential criterion, as we do
not look at explanation methods for their own sake, but rather treat them
as an intermediate step towards a final goal.
### Various End Goals for Explainability
**Model debugging as the end goal.** Here, we wish to identify spurious
correlations (why a model has made a mistake) and then fix them (we have
seen methods for both in [2.10](#sssec:identify){reference-type="ref"
reference="sssec:identify"} and
[2.11](#sssec:overview){reference-type="ref"
reference="sssec:overview"}). This can improve generalization to OOD
data. There are no successful/commercialized explanation tools yet that
are specialized in debugging. It is not yet clear how to help an
engineer fix a general problem with the model, and we have yet to see a
*successful* use case of XAI for model debugging. There is still so much
more to be researched for ML explanations. Attribution does not always
guarantee successful debugging.
**Understanding as the end goal.** Do humans understand the
idiosyncrasies ("odd habits") of a model? If a model is doing something
odd, understanding why it is doing so could be an interesting objective.
Can humans predict the behavior of a model based on the provided
explanation? Do humans learn new knowledge based on the explanation? If
the model is doing something new that humans cannot do yet, transferring
that knowledge to humans would be essential.
**Enhancing human confidence, gaining trust as the end goal.** Does the
explanation technique help persuade doctors to use ML models? Many
doctors are still very averse to ML-based advice; they have no trust.
Explanations could help them incorporate ML techniques. Does the
explanation technique convince people to use self-driving cars (even
though safety stays the same -- or, as we have seen, may even get worse
because of the trade-offs)?
The "soundness" criterion does not fully align with the previous end
goals and desiderata. The current evaluation is too focused on soundness
(and qualitative evaluations). Given an explanation, we still have some
end goals:
- ML Engineer: "Now I know how to fix model $f$."
- Scientist: "Now I understand the mechanism behind the recognition of
cats."
- Doctor: "Now I can finally trust this model for diagnosing cancer."
Soundness focuses only on the explanation itself, which is an
intermediate step. We need evaluation with the end goal in mind. There
is no way we do not have to use human-in-the-loop (HITL) evaluation at
some point.
### Human-in-the-Loop (HITL) Evaluation
::: definition
Human-in-the-Loop Evaluation Human-in-the-loop evaluation refers to any
evaluation technique for explainability that incorporates humans and
measures how well the explanations help them achieve their end goals.
:::
Let us now turn to discussing human-in-the-loop (HITL) evaluation. In
particular, we will consider the paper "[What I Cannot Predict, I Do Not
Understand: A Human-Centered Evaluation Framework for Explainability
Methods
](https://arxiv.org/abs/2112.04417)" [@https://doi.org/10.48550/arxiv.2112.04417].
![Overview of different tasks considered
in [@https://doi.org/10.48550/arxiv.2112.04417]. The authors are
evaluating recent explainability methods directly through the end goals
of XAI -- practical usefulness. They are trying to see whether
explanations are actually helping humans in achieving their end goals.
Figure taken
from [@https://doi.org/10.48550/arxiv.2112.04417].](gfx/03_hitl1.jpg){#fig:hitl1
width="\\linewidth"}
An overview of the settings the authors consider is given in
Figure [3.41](#fig:hitl1){reference-type="ref" reference="fig:hitl1"}.
They address three real-world scenarios, each corresponding to different
use cases for XAI.
1. **Husky vs. Wolf**. Here, debugging is the end goal. Can the
explanations help the user identify sources of bias in the model?
Examples include background bias (snow, grass) instead of the
animal.
2. **Real-World Leaf Classification problem**. Here, understanding is
the end goal. Can the explanations help the user (non-expert) learn
what parts of the leaf to look for to distinguish different leaf
types? The humans want to adopt the strategy of the model.
3. **Failure Prediction Problem**. Here, understanding is the end goal
again. This dataset is a subset of ImageNet. It consists of images,
of which half have been misclassified by the model. Can the
explanations help the user understand the failure sources of the
(otherwise high-performing) model?
![Overview of the stages of the method considered
in [@https://doi.org/10.48550/arxiv.2112.04417]. The authors use a
human-centered framework for the evaluation of explainability methods.
The evaluation pipeline consists of (1) the predictor $f$, which is a
black-box model, (2) an explanation method $\Phi$, and (3) the
meta-predictor, a human subject $\psi$ whose task is to understand the
behavior of $f$ based on samples (i.e., the rules that the model uses
for its predictions). First, the meta-predictor is trained using $K$
triplets $(x, \Phi(f, x), f(x))$, where $x$ is an input image, $f(x)$ is
the model's prediction and $\Phi(f, x)$ is the explanation of the
model's prediction. Second, for the Husky vs. Wolf and the Failure
Prediction problems, the meta-predictor is evaluated on how well they
can predict the model's outputs on new samples $\tilde{x}$. This is done
by comparing the meta-prediction $\psi(\tilde{x})$ to the true
prediction $f(\tilde{x})$. For the leaf classification problem, the
meta-predictor is evaluated on how well they can classify the leaves
after observing the explanations. The meta-prediction $\psi(\tilde{x})$
is compared to the GT label $y$. Figure taken
from [@https://doi.org/10.48550/arxiv.2112.04417].
](gfx/03_hitl2.png){#fig:hitl2 width="0.8\\linewidth"}
Figure [3.42](#fig:hitl2){reference-type="ref" reference="fig:hitl2"}
gives a detailed description of how the explainability methods are
evaluated in all three scenarios. If the model makes a mistake on the
evaluation image, the human should be able to pinpoint the mistake the
model will make. Similarly, humans should be able to learn to classify
leaves based on the knowledge encoded by the networks, and they should
also be able to identify biases under the assumption that the
explanation method works well. The paper uses the term *simulatability*.
A model is explainable when its output can be predicted following the
explanations.
First, we train humans on dataset
$\cD = \{(x_i, f(x_i), \Phi(f, x_i))\}_{i = 1}^K$. For new samples, we
let the humans predict the model predictions. The value $\psi^{(K)}(x)$
is the human prediction of the model prediction after training with $K$
samples. The Utility-K score is calculated as follows:
$$\operatorname{Utility-K} = \frac{P(\psi^{(K)}(x) = f(x))}{P(\psi^{(0)}(x) = f(x))}.$$
In words, the utility score is the relative accuracy improvement of the
meta-predictor trained with or without explanations. The baseline
factors out the contribution of explanations for educating humans.
Humans for the baseline predictions are trained on dataset
$\cD = \{(x_i, f(x_i))\}_{i = 1}^K$. To make the evaluation meaningful
for Husky vs. Wolf and Failure Prediction, the authors mixed correct and
incorrect model predictions 50-50% during evaluation.
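A minimal sketch of the Utility-K computation, assuming the meta-predictions of the explanation-trained group and the baseline group have already been collected on the same evaluation samples:

```python
# Minimal sketch of Utility-K (assumption: meta-predictions of both human groups
# on the same evaluation samples are already available as label arrays).
import numpy as np

def utility_k(psi_k, psi_0, f_preds):
    """Utility-K = P(psi^{(K)}(x) = f(x)) / P(psi^{(0)}(x) = f(x)).

    psi_k:   meta-predictions of humans trained on (x, f(x), Phi(f, x)) triplets
    psi_0:   meta-predictions of the baseline group trained without explanations
    f_preds: the model's own predictions f(x) on the evaluation samples
    """
    acc_with = np.mean(np.asarray(psi_k) == np.asarray(f_preds))
    acc_base = np.mean(np.asarray(psi_0) == np.asarray(f_preds))
    return acc_with / acc_base

# Toy usage: values above 1 mean the explanations helped humans simulate the model.
print(utility_k([1, 0, 1, 1], [0, 0, 0, 0], [1, 0, 1, 0]))   # -> 1.5
```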
![Results on the Wolf vs. Husky task (left) and the Leaf Classification
task (right). For the Leaf Classification task, the Utility-K value is
the normalized accuracy of the human predictor on test leaf images after
observing the explanations. For the Husky vs. Wolf task, Grad-CAM,
Occlusion, and SmoothGrad are seemingly useful. For the Leaf
Classification task, Saliency, Smoothgrad, and Integrated Gradients seem
to perform best. Figure taken
from [@https://doi.org/10.48550/arxiv.2112.04417].](gfx/lime_utility.png){#fig:tasks
width="\\textwidth"}
![Results on the Wolf vs. Husky task (left) and the Leaf Classification
task (right). For the Leaf Classification task, the Utility-K value is
the normalized accuracy of the human predictor on test leaf images after
observing the explanations. For the Husky vs. Wolf task, Grad-CAM,
Occlusion, and SmoothGrad are seemingly useful. For the Leaf
Classification task, Saliency, Smoothgrad, and Integrated Gradients seem
to perform best. Figure taken
from [@https://doi.org/10.48550/arxiv.2112.04417].](gfx/pnas_utility.png){#fig:tasks
width="\\textwidth"}
#### Task (i): Husky vs. Wolf
For Husky vs. Wolf, results are shown in the left panel of
Figure [3.44](#fig:tasks){reference-type="ref" reference="fig:tasks"}.
The control group is shown some score map, called bottom-up saliency,
that is not an explanation (it is independent of model $f$). This is
used to rule out the possibility that people try harder to solve the
task if any explanation is provided to them. CAM achieves good results
(same story as before); SmoothGrad and Occlusion are also good. All
attribution methods used are better than the control 'method'. More
training samples mean further knowledge of what the model might be
doing.
#### Task (ii): Leaf Classification
The results for Leaf Classification are shown in the right panel of
Figure [3.44](#fig:tasks){reference-type="ref" reference="fig:tasks"}.
This is an example of using ML to educate humans. In this case,
Utility-K is the normalized accuracy of the meta-predictor after
"training". Humans do not know how to distinguish these leaf types in
the beginning. By showing where the model is looking (that can solve the
task well), humans also learn how to classify the leaves (as they learn
useful cues). SmoothGrad is consistently good at educating humans for
this task. CAM helps somewhat less than SmoothGrad. Saliency
also performs well but does not scale well to more training samples
($K = 15$). Integrated Gradients is on par with CAM and also scales
better.
#### Task (iii): Failure Prediction
On the ImageNet dataset, *none of the methods tested exceeded baseline
accuracy*. These results made the authors suspicious that the
explanation methods might not be sound: If the user observes
explanations from a method that is not sound, it will not gain enough
insight into the model's internals. The authors compared Utility scores
(AUC scores under the (K, Utility-K) curve for various K values) to
corresponding faithfulness scores. The results are shown in
Figure [3.45](#fig:hitl3){reference-type="ref" reference="fig:hitl3"}.
![Correlation of the faithfulness and the utility score
in [@https://doi.org/10.48550/arxiv.2112.04417]. HITL Utility does not
correlate in general (across datasets) with the evaluation's soundness
(faithfulness). (Correlations change across tasks. It seems very
random.) Faithfulness metrics are poor predictors of end-goal utility.
For the Husky vs. Wolf and leaves datasets, a negative correlation can
be observed, meaning *high soundness might even come with the price of
less end-goal utility*. Figure taken
from [@https://doi.org/10.48550/arxiv.2112.04417].](gfx/03_hitl3.png){#fig:hitl3
width="0.6\\linewidth"}
#### Conclusion for HITL Evaluation
SmoothGrad is doing a great job in helping humans with the end goals
considered in the benchmark. If we care about how humans can understand
attributions and learn from attributions (explanations), then soundness
evaluation is not a good proxy for choosing between methods. Thus, HITL
evaluation cannot be replaced with soundness evaluation.
::: information
HITL Out of the three downstream use cases mentioned in the book, HITL
evaluation seems to be more tailored toward understanding. How could it
measure how much trust is given to the model? How could we measure how
much a method helps fix a model (if the answer is not spurious
correlations)? While HITL is a step forward compared to soundness
evaluation in the sense that it measures how humans understand better,
it still does not measure the end-to-end metric of how much more trust
is given or how much better a model is debugged after an explanation in
general. End-goal-tailored explainability is a young field with many
questions to be answered.
:::
## Towards Interactive Explanations
Previously, we have seen that for HITL evaluations, we need to include
human participants to evaluate how useful a method is for human subjects
and their end goals. *Interactive explanations* are also deemed
necessary by decision-makers.
### A Survey on Explanations
::: information
Quant A quant, short for quantitative analyst, is a person who analyzes
a situation or event (e.g., what assets to buy/sell in a hedge fund),
specifically a financial market, through complex mathematical and
statistical modeling.
:::
We consider a survey for decision-makers using ML, titled "[Rethinking
Explainability as a Dialogue: A Practitioner's
Perspective](https://arxiv.org/abs/2202.01875)" [@lakkaraju2022rethinking].
The survey aims to find answers to the question, "What kind of features
do you need from explanations?"
#### Desiderata for Interactive XAI
Let us now discuss the survey for domain experts using ML in detail. In
particular, we consider exact statistics from the survey. **Note**:
There is only a small number of respondents, but as they are experts,
conducting such surveys is expensive. The quotes are imaginary and only
illustrate the discussed desiderata. The list also does not mean that
there already are technologies satisfying these desiderata; we are still
far from fulfilling many of them.
24/26 respondents wish to eliminate the need to learn and write the
commands for generating explanations. "We do not want to care about
writing code. We need a more natural-language-based interaction with the
system."
24/26 respondents prefer methods that describe the accuracy of the
explanation in the dialogues. A notion of *uncertainty* is needed.
23/26 respondents wish to use explanation tools that preserve the
context and enable follow-up questions. "If we do not understand
something in the previous round, we should be able to ask for follow-up
explanations." A key characteristic of a dialog-based system is that the
machine should remember previous topics/conversations.
21/26 respondents would like real-time explanations. "Do not take
several hours to answer our questions. We want an experience as if we
were talking to a human." This is a rather basic requirement for
efficiency.
17/26 respondents would let the algorithm decide which explanations to
run. Users should not have to ask for a specific explainability
algorithm. "We do not wish to decide ourselves, as so many of them
exist. We do not want to build a benchmark, compare all attribution
methods, and decide on an appropriate one for the use case. The system
should determine the best algorithm for our domain."
#### Key Takeaways from the Desiderata
Decision-makers prefer *interactive* explanations. Explanations are
preferred in the form of *natural language*. Experts want to treat
machine learning models as "another colleague" they can talk to. For
example, a hedge fund might find good use of ML: They might wish to have
a virtual human (a quant) sitting next to them who can answer questions
like "Why do you think this trend is happening?" or "Why did you
buy/sell the stocks?" They want to ask the models' *opinion* or what
they had in mind when making a decision. In particular, they want models
that can be held accountable by asking why they made a particular
decision through expressive and accessible natural language
interactions.
### Generating Counterfactual Explanations with Natural Language
![Overview of the counterfactual explanation pipeline
of [@https://doi.org/10.48550/arxiv.1806.09809]. The method allows users
to generate explanations based on high-level concepts. We have a bird
classifier available. There is also an explanation generator. This is
different from image captioning, as image captioning only talks about
what is in the image, but the explanation generator first makes a
prediction (Scarlet Tanager) for the set of counter-class images and
describes in natural language why it thinks it is that class. The
evidence checker checks how many characteristics extracted from the
explanation generator's explanations are present in the current input
(i.e., it makes a list of scores). It is checking for evidence of these
characteristics in the current image. The counterfactual explanation
generator can use the evidence to answer the counterfactual question.
Figure taken
from [@https://doi.org/10.48550/arxiv.1806.09809].](gfx/03_natural.pdf){#fig:natural
width="0.6\\linewidth"}
The "[Generating Counterfactual Explanations with Natural
Language](https://arxiv.org/abs/1806.09809)" [@https://doi.org/10.48550/arxiv.1806.09809]
paper is a work of @https://doi.org/10.48550/arxiv.1806.09809. An
overview of the method is given in
Figure [3.46](#fig:natural){reference-type="ref"
reference="fig:natural"}. This is a step towards interactive
explanations for humans.
### e-ViL
e-ViL is an explainability benchmark introduced in the paper "[e-ViL: A
Dataset and Benchmark for Natural Language Explanations in
Vision-Language
Tasks](https://arxiv.org/abs/2105.03761)" [@https://doi.org/10.48550/arxiv.2105.03761].
A test example and the outputs of various VL models are given in
Figure [3.47](#fig:evil){reference-type="ref" reference="fig:evil"}. An
overview of the architectures benchmarked in this work is shown in
Figure [3.49](#fig:architectures){reference-type="ref"
reference="fig:architectures"}.
![A test example from the e-SNLI-VE
dataset [@https://doi.org/10.48550/arxiv.2105.03761]. *Contradiction*
means the hypothesis contradicts the image content. *RVT, PJ-X, FME, and
e-UG* are explanation methods. They provide natural language
explanations (NLEs). Explanations are not trained on any GT. (If they
were, that would be another predictive model, and there is no guarantee
it would explain the model's way of prediction.) They instead extract
information from a vision-language (VL) model into a human language
format. The *GT Explanation* is a human-generated explanation for the
answer collected by the authors. **Task**: Given an image and a
hypothesis, decide if the hypothesis is aligned with the image. The
machine also has to explain why they might be contradictory. VL-NLE
models predict *and* explain. Figure taken
from [@https://doi.org/10.48550/arxiv.2105.03761].](gfx/03_natural2.pdf){#fig:evil
width="0.6\\linewidth"}
![Overview of the structure of general VL models (left) and detailed
subparts of individual models benchmarked
in [@https://doi.org/10.48550/arxiv.2105.03761]. Figure taken
from [@https://doi.org/10.48550/arxiv.2105.03761].](gfx/arch.pdf){#fig:architectures
width="\\textwidth"}
![Overview of the structure of general VL models (left) and detailed
subparts of individual models benchmarked
in [@https://doi.org/10.48550/arxiv.2105.03761]. Figure taken
from [@https://doi.org/10.48550/arxiv.2105.03761].](gfx/models.pdf){#fig:architectures
width="\\textwidth"}
### Summary of Interactive Explanations
We only touched on interactive explanation techniques. These are very
new, and there are only a few works. However, it has much potential. We
recommend working in this domain. To work forward, we need to take
humans into account -- whether XAI is helpful for the end user (HITL
evaluation) and view XAI systems as socio-technical systems. We also
want users to be able to interact with the explanation algorithm and to
make the interface more natural for humans (e.g., having a
natural-language-based "chat" about the explanation).
## Attribution to Model Parameters
As we have seen, a model is a function that is an output of a training
algorithm (which, in turn, is another function of the training data and
other ingredients). The model takes training data as input implicitly
through the training procedure. This is a hidden part of the model that
is not used for explanations when we only focus on the attribution to
the test sample features. It can very well happen that the model is
making a bizarre decision not because of a specific feature in the test
sample but because of strange (defective) training samples. It is
difficult to rule this possibility out, and it is, therefore, meaningful
to look at training samples.
We write the model prediction as a function of two variables:
$$Y = \operatorname{Model}(X; \theta) = \operatorname{Model}(X; \theta(\{z_1, \dots, z_n\}))$$
where $Y$ is our prediction, $X$ is the test input, $\theta$ are the
model parameters, and $\{z_1, \dots, z_n\}$ is the training dataset. We
use $z$ because these can both correspond to inputs and input-output
pairs. The prediction of our model is implicitly also a function of the
training data.
As we discussed before, explaining our prediction against features of
$x$ is not always sufficient (but is very popular). We might also be
interested in the contribution of
1. individual parameters $\theta_j$ of the model, and
2. individual training samples $z_i$ in the training set
to the final prediction of the model. First, we look at the contribution
of model parameters. Then, we discuss attribution methods to training
samples.
### Explanation of Model Parameters $\theta$
For DNNs, model parameters are simply millions of raw numbers. They are
complicated to understand. Explaining a prediction in terms of these raw
numbers is a tricky problem. This is in contrast with the input-level
features $x$ and labels $y$. Inputs to a DNN are usually sensory data
(image, sound, text), so humans can naturally understand them.
Thus, inputs and outputs to a DNN are often human-interpretable.
However, the parameters are not, at least not directly. To understand
the parameters $\theta$, we "project" them onto the input space; i.e.,
we give visualizations of them (or explain them in text for NLP
methods).
![Various weight visualizations for different target classes
from [@https://doi.org/10.48550/arxiv.1312.6034]. We ask "What is the
most likely image for the class dumbbell?" from the model, or "What
excites a certain neuron most?" One can employ several regularization
techniques (e.g., TV) to make the visualizations more interpretable. For
these samples, the model predicts a very high score for the respective
classes. These are preliminary results from a seminal paper about
turning model parameters into an image in the input space. Figure taken
from [@https://doi.org/10.48550/arxiv.1312.6034].](gfx/03_param1.pdf){#fig:param1
width="0.5\\linewidth"}
Examples for turning parameters into samples from the seminal paper
"[Deep Inside Convolutional Networks: Visualising Image Classification
Models and Saliency
Maps](https://arxiv.org/abs/1312.6034)" [@https://doi.org/10.48550/arxiv.1312.6034]
are given in Figure [3.50](#fig:param1){reference-type="ref"
reference="fig:param1"}. We generate these samples by solving an
optimization problem in the pixel space. We maximize the score for class
$c$ in the input space in a regularized fashion:
$$\argmax_I S_c(I) - \lambda\Vert I \Vert_2^2,$$ where $S_c(I)$ is the
prediction score (logit value, pre-activation of the output layer) for
class $c$ and image $I$ from the network. $L_2$ regularization prevents
a small number of extreme pixel values from dominating the entire image.
It results in smoother and more natural (more interpretable) images. We
can also regularize the discrete image gradient (e.g., with the TV
regularizer), which is also a popular choice. This mitigates the noise
issue even more.[^49] The objective of adversarial attack algorithms is
very similar to this optimization problem. However, attacks try to
minimize the score for a specific class. Here we are trying to maximize
it, e.g., using gradient descent for the loss (less often used) or using
gradient ascent for the logit value (popular).
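Below is a minimal PyTorch sketch of this regularized class-score maximization. The backbone, class index, step size, and $\lambda$ are illustrative assumptions (and input normalization is omitted for brevity); it is a sketch of the technique, not the exact setup of the cited paper.

```python
# Minimal sketch of class-score maximization with L2 regularization (assumptions:
# a recent torchvision ResNet as the classifier; the class index and hyperparameters
# are illustrative choices).
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)                        # we only optimize the input image

target_class, lam, lr = 543, 1e-4, 1.0             # illustrative class index / hyperparameters
image = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from a blank image
optimizer = torch.optim.SGD([image], lr=lr)

for _ in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]          # S_c(I): logit of the target class
    loss = -(score - lam * image.pow(2).sum())     # ascend on S_c(I) - lambda * ||I||_2^2
    loss.backward()
    optimizer.step()

# `image` now approximately maximizes the regularized class score; it can be
# clipped/denormalized and saved as a visualization for the chosen class.
```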
### More examples of turning parameters into samples
Let us discuss two more examples of turning parameters into samples.
![Visualizations of a deeper layer of an AlexNet-like architecture
from [@https://doi.org/10.48550/arxiv.1506.06579]. The synthesized
images resemble a mixture of animals, flowers, and more abstract
objects. The indices correspond to different feature channels of the
conv5 pre-activation tensor. Each grid corresponds to four runs of the
same optimization problem. Figure taken
from [@https://doi.org/10.48550/arxiv.1506.06579].](gfx/03_more.pdf){#fig:more
width="0.8\\linewidth"}
First, we consider "[Understanding Neural Networks Through Deep
Visualization](https://arxiv.org/abs/1506.06579)" [@https://doi.org/10.48550/arxiv.1506.06579].
We can perform the previous optimization procedure on different
intermediate layers as well. Instead of maximizing the score of a
certain class, we maximize an intermediate feature activation for one of
the units of a layer or maximize the entire layer's activation. We then
recognize patterns in the generated images. These are interpreted as the
patterns that the corresponding neurons have learned and respond to. The
optimization problem here is
$$x^* = \argmax_x \left(a_i(x) - R_{\theta}(x)\right),$$ where $a_i(x)$
can be an activation for a particular unit in a particular layer, or we
can also maximize the mean, min, and max activation in a layer. That
leads to similar results. (Not done in this work.) $R_\theta(x)$ is the
regularization term. In this work, the authors use
$$x \gets r_\theta \left(x + \eta \frac{\partial a_i(x)}{\partial x}\right),$$
which is more expressive. For example, for $L_2$ decay one can choose
$r_\theta(x) := (1 - \theta)\cdot x$. An example collage is shown in
Figure [3.51](#fig:more){reference-type="ref" reference="fig:more"}.
Please refer to the full paper for various visualizations across many
layers, which we discuss below.
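A minimal sketch of this operator-style update with $L_2$ decay, $r_\theta(x) = (1 - \theta)\cdot x$, is given below. It assumes a scalar-valued `activation` function for the chosen unit; the step size and decay are illustrative.

```python
# Minimal sketch of the update x <- r_theta(x + eta * da_i/dx) with L2 decay
# r_theta(x) = (1 - theta) * x (assumption: `activation(x)` returns the scalar
# activation a_i(x) of the chosen unit for input x).
import torch

def visualize_unit(activation, x0, eta=1.0, decay=0.01, steps=200):
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        a = activation(x)                          # a_i(x): activation of the chosen unit
        grad = torch.autograd.grad(a, x)[0]        # da_i(x)/dx
        with torch.no_grad():
            x = (1.0 - decay) * (x + eta * grad)   # gradient step followed by L2 decay
        x.requires_grad_(True)
    return x.detach()
```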
When we maximize the output of an early neuron, its receptive field is
usually smaller than the entire input image. Thus, when we visualize the
optimized input, we will see only the small corresponding region
changing in the input. The other input regions are left as we
initialized them.
Typically, we will not see any interpretable pattern for many of the
neurons. In many cases, people cherry-pick to generate these images. The
results are also heavily dependent on the initial image of the
optimization. One should be careful with how they interpret them.
At higher layers, we visualize more semantic content (e.g., cup, garbage
bin, goose). To visualize entire layers, one can take one image for each
channel in the corresponding layer's feature map output. As we go down
the layers, we see more and more generic patterns. These are smaller,
more common patterns that are found in many objects.
When we visualize the optimized inputs for the *first* convolutional
layer's neurons, we roughly see the filters (see [Gabor
filter](https://en.wikipedia.org/wiki/Gabor_filter)) of the
corresponding channel for each neuron in that channel. These contain
single colors or combinations of a few repetitive textures. If we use a
single convolution, we just have a sparse linear network ($S_c$ becomes
linear in $I$). The regularized activation-maximizing inputs are nearly the
same as the filters themselves. If we take a channel of the filter (of
shape $(3, H, W)$) corresponding to the output channel of choice, we can
directly visualize it. When doing so, we will see very similar
visualizations to the visualizations of the regularized
activation-maximizing inputs.
Consider a $3 \times 3$ convolutional layer with a single channel. Then
the filter $K$ is of shape $(1, 3, 3, 3)$. The operation for a single
neuron $a$ in the output is simply
$$a = \sum \left(I^a \odot K\right),$$ where $I^a$ is the receptive
field of the neuron $a$, of shape $(3, 3, 3)$. If we perform
unregularized optimization (with pixel values constrained to, e.g., $[0, 1]$;
otherwise the linear objective is unbounded), we obtain $$I^{a*} = \bone(K > 0).$$ By
using, e.g., $L_2$ regularization, we roughly get $I^{a*} \approx K$,
with the outline of the generated image being the same as the
corresponding filter channel.
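The following tiny numeric check illustrates both claims, assuming pixel values in $[0, 1]$ for the unregularized case; the filter and $\lambda$ are arbitrary.

```python
# Minimal numeric check of the single-convolution case.
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(3, 3, 3))             # one filter channel of shape (3, 3, 3)

# Unregularized, pixels constrained to [0, 1]: the maximizer of sum(I * K)
# saturates at the box boundary, i.e., the indicator 1(K > 0).
I_unreg = (K > 0).astype(float)

# L2-regularized, unconstrained: argmax_I sum(I * K) - lam * sum(I**2) has the
# closed form I* = K / (2 * lam), i.e., the optimal input is proportional to the filter.
lam = 0.5
I_reg = K / (2 * lam)

print(np.allclose(I_reg, K))                # True for lam = 0.5: I* "looks like" the filter
```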
![Visualizations of high-confidence images for different labels using
the novel generation technique
of [@https://doi.org/10.48550/arxiv.1412.1897]. Figure taken
from [@https://doi.org/10.48550/arxiv.1412.1897].](gfx/03_evenmore.pdf){#fig:evenmore
width="0.8\\linewidth"}
As our second example, we look at "[Deep Neural Networks are Easily
Fooled: High Confidence Predictions for Unrecognizable
Images](https://arxiv.org/abs/1412.1897)" [@https://doi.org/10.48550/arxiv.1412.1897].
One can use the previously introduced optimization problem in the pixel
space to generate images corresponding to high activations (e.g.,
maximize prediction score for the class of choice). We get significantly
different images depending on the regularization of the image generation
(effectively, the search space). This work introduces a different
generation technique that has astounding results. A teaser is shown in
Figure [3.52](#fig:evenmore){reference-type="ref"
reference="fig:evenmore"}. Please refer to the paper for an extensive
collection of visualizations.
The provided visualizations allow us to, e.g., look into the texture
bias of the network. For example, the activation-maximizing input (in
the modified search space) for "baseball" contains a very similar
pattern as a baseball. However, we would not say that this is a baseball
as humans. Nevertheless, the model predicts "baseball" with very high
confidence. The technique allows us to see into the model's
decision-making process, which might often be quite surprising.
In the paper's oral talk, the authors also showed that classifying
images through their mobile phones gives the same result (they made an
app for live demonstration). This shows that the visualizations are
stable representations of the classes for the DNN in question.
The authors provide multiple visualization techniques. All
visualizations correspond to highly confidently predicted generated
images for the classes $0-9$. We get astonishing results on even a
simple dataset like MNIST.
The [linked resource](https://distill.pub/2017/feature-visualization/)
is a recommended blog post for feature visualization.
### Criticism of feature visualization
While feature visualization can give impressive results (as can be seen
in the Distill blog post), there is criticism about the utility of such
methods. Even though we can find visualizations for units of a neural
network that correspond to a human concept (cf. baseball example above),
many visualizations are not interpretable [@molnar2020interpretable]. We
are not guaranteed to find something. Moreover, when we look at feature
visualization as explanations to humans about what *causes* a CNN to
activate, Zimmermann et al. found that feature visualizations do not provide
better insight into model behavior than, e.g., looking at data samples
directly [@zimmermann2021well]. The visualizations are interesting but
simpler approaches can provide the human with the same intuitions. A
recent work [@geirhos2023don] shows that feature visualizations are not
reliable and can easily be fooled by an adversary while keeping the
predictive performance of the model. They also prove that feature
visualizations cannot guarantee to deliver an understanding of the
model. Feature visualization is an interesting tool for exploratory
analysis, but it is not necessarily suitable as-is for explaining model
behavior to humans.
## Attribution to Training Samples
This part is much more critical than attributing to individual weights,
for reasons clarified just below.
### Why attribute to training samples?
We saw that we had to eventually map our parameters onto the input space
to visualize what was happening inside the model. It is a natural
question to ask "Why don't we just look at the raw ingredients for the
parameters then?" These are exactly the training samples. Model
parameters are *not interpretable*, so it makes sense to attribute
*directly* to the training samples. Model parameters $\theta$ are also
built on the training samples $\{z_1, \dots, z_n\}$. Therefore,
attributing to training samples is sufficient. Training data
explanations are more likely to give more *actionable* directions to
improve our model. If we can trace the model's error or strange behavior
back to the training samples, we can fix/add/remove training samples to
resolve erratic behavior. If we find out that the model made a mistake
because some of the labels were wrong, we can (1) relabel these samples
that were attributed to the strange behavior, (2) remove them if the GT
labeling $P(Y \mid X = x)$ is too stochastic (faulty sample), or (3) we
can even add new samples to the training set if we think there is no
strong sample supporting the right behavior of the model we wish to see.
### Basic Counterfactual Question for Attribution to Training Data -- Influence Functions
We look at influence functions, first used for deep learning in the
paper "[Understanding Black-box Predictions via Influence
Functions](https://arxiv.org/abs/1703.04730)" [@https://doi.org/10.48550/arxiv.1703.04730].
These find influential training samples for the model prediction on a
particular test sample. The influence here is an answer to the
counterfactual question "What happens to the current model prediction
for a test input $x$ if one training sample $z_j$ was left out of the
training set $\{z_1, \dots, z_n\}$?" This is a minor change in the
training set, as typically, the training set size is in the range of
millions to billions. Leaving out one sample will generally not greatly
affect the overall behavior of the model. However, for a particular test
sample $z$, we can still be interested in the training samples that made
the largest impact on the test sample through the optimization
procedure. Such training samples are likely to be visually similar to
the test sample. We want an algorithm to measure the impact of each
training sample on this particular test sample.
#### Notation
The notation is introduced in
Table [\[tab:notation\]](#tab:notation){reference-type="ref"
reference="tab:notation"}. To find out
$L(z, \hat{\theta}_{\setminus j}) - L(z, \hat{\theta})$, we could
retrain the model on the dataset without $z_j$. However, this is
infeasible for real-life scenarios. To study the impact of every
training sample on every test sample, we would need to train ("number of
training samples" + 1) DNNs and evaluate the differences in the losses
for all test samples we want to consider.
**Note**: We assume that $\hat{\theta}$ and $\hat{\theta}_{\setminus j}$
are *global* minimizers of the respective empirical risks. This is a
strong assumption, but we will see that relaxations of the resulting
method still work well in practice.
#### Lesson from attribution methods: take the gradient!
We only had to do a single backpropagation to determine the contribution
of all pixels modified by an infinitesimal amount (separately) to the
infinitesimal change in the output. Here, we also take the gradient of
the test loss $L(z, \hat{\theta})$ the training sample $z_j$, where the
two values are connected through the entire optimization procedure. In
particular,
$$\hat{\theta} = \argmin_\theta \frac{1}{n}\sum_i L(z_i, \theta).$$
$\hat{\theta}$ is, therefore, a function of $z_j$ through the
optimization we employ, and the dependency between the test loss and
$z_j$ is exactly through $\hat{\theta}$. We are interested in the change
in the test loss when we make a small change in the training sample
$z_j$.
We need a few tricks to compute the gradient
$$\frac{\partial L(z, \hat{\theta})}{\partial z_j}.$$ Taking the
gradient through an optimization procedure has been done in subparts of
ML quite a few times. There are algorithms like "gradient descent by
gradient descent" [@https://doi.org/10.48550/arxiv.1606.04474].[^50]
This is overall a great technique to know.
First, we generalize the notion of "removal" into a continuous
procedure. Removing $z_j$ is a discrete procedure and is thus
non-differentiable. Instead, we take the loss of $z_j$ into a separate
term:
$$\hat{\theta}_{\epsilon, j} := \argmin_{\theta} \frac{1}{n} \sum_i L(z_i, \theta) + \epsilon L(z_j, \theta).$$
Note that the first term still contains a $z_j$ term.
- $\epsilon = 0 \in \nR$: We recover the original minimizer of the
training loss, $\hat{\theta}_{0, j} = \hat{\theta} \in \nR^d$.
- $\epsilon = -1/n$: We obtain our previous notion of "removal".
We further assume that the loss $L$ is twice differentiable and strictly
convex in $\theta$ (so that the Hessian matrix of the model parameters is
PD). For DNNs, there is usually no unique $\hat{\theta}_{\epsilon, j}$
and $\hat{\theta}$ because of weight space symmetries and other
contributing factors that make the loss landscape highly non-convex,
with many equally good minima. So we further enforce strict convexity to
have a unique optimum. This is a rather typical trick in research: We
assume that everything is simple during theoretical derivations. In
practice, we ignore the assumptions and hope our method still works.
One can obtain the following derivative (with annotated shapes for
clarity):
$$\underbrace{\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}}_{\in \nR^d} \approx -\underbrace{H_{\hat{\theta}}^{-1}}_{\in \nR^{d \times d}} \underbrace{\nabla_\theta L(z_j, \hat{\theta})}_{\in \nR^d}.$$
where
$$H_{\hat{\theta}} = \frac{1}{n}\sum_i \nabla^2_\theta L(z_i, \hat{\theta}).$$
In words,
$\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}$
is the derivative of the optimal weights with respect to $\epsilon$,
evaluated at $\epsilon = 0$, where $\hat{\theta}_{\epsilon, j} = \hat{\theta}$. This gives the
relative change in the globally optimal weights under the original
objective if we change the additional weight of $z_j$ by an
infinitesimal amount from 0. The last term is the gradient of the loss at $z_j$
with respect to $\theta$, evaluated at the weights $\hat{\theta}$.
$\nabla^2_\theta L(z_i, \hat{\theta})$ is the Hessian matrix of $L$:
$$\left(\nabla^2_\theta L(z_i, \hat{\theta})\right)_{kl} = \frac{\partial^2 L(z_i, \hat{\theta})}{\partial \theta_k \partial \theta_l}.$$
Why is the derivative formula well-defined, i.e., why is this average
Hessian matrix invertible? It is a well-known fact that the average of
symmetric, positive definite (PD) matrices is symmetric PD. The Hessians
are symmetric because of Schwarz's theorem (the loss has continuous
second partial derivatives in $\theta$ everywhere). The Hessians are also
PD (i.e., they only have positive eigenvalues, and there is strictly
positive curvature in all directions) because the function is strictly
convex by assumption. Therefore, $H_{\hat{\theta}}$ is symmetric PD.
::: information
Interpreting the Hessian If we have $10^6$ parameters, then
$\nabla_\theta L(z_i, \hat{\theta}) \in \nR^{10^6}$ gives us how the
function value changes in each principal axis direction *relative* to an
infinitesimal change. To obtain the relative change in the loss value in
a particular input direction $v$, one can consider
$$\nabla_\theta L(z_i, \hat{\theta})^\top v \in \nR,\quad \Vert v \Vert = 1.$$
Similarly,
$\nabla^2_\theta L(z_i, \hat{\theta}) \in \nR^{10^6 \times 10^6}$ gives
us how the gradient of the loss at $z_i$ changes in the neighborhood of
$\theta$ along all canonical axes. This is why it is a matrix. In each
axis direction, we measure the relative change in the gradient vector
(in each of its entries) along each principal axis. To get the rate of change
of the gradient (curvature) in a particular input direction $v$, one can
consider
$$v^\top \nabla^2_\theta L(z_i, \hat{\theta}) v \in \nR,\quad \Vert v \Vert = 1.$$
When Hessians are symmetric (which is almost always the case in ML
settings), they are orthogonally diagonalizable. In this case, the
diagonal entries of the diagonalized Hessian give the rate of change of
the gradient (curvature) in the eigenvector directions. Let
$$\nabla^2_\theta L(z_i, \hat{\theta}) = Q \Lambda Q^\top$$ where
$\Lambda$ is diagonal and $Q$ is orthogonal. Then, if $v_i$ is the $i$th
eigenvector direction, we have
$$v_i^\top \nabla^2_\theta L(z_i, \hat{\theta})v_i = v_i^\top Q \Lambda Q^\top v_i = v_i^\top Q \Lambda e_i = \lambda_i v_i^\top Q e_i = \lambda_i v_i^\top v_i = \lambda_i.$$
:::
Our story does not end here, as we wish to see the influence of $z_j$ on
the test loss for test sample $z$. Given the previous result, we compute
the influence of sample $z_j$ on the loss for test sample $z$ as
$\text{IF}(z_j, z) \in \nR$, $$\begin{aligned}
\text{IF}(z_j, z) &:= \restr{\frac{\partial L(z, \hat{\theta}_{\epsilon, j})}{\partial \epsilon}}{\epsilon = 0}\\
&= \nabla_\theta L(z, \hat{\theta})^\top \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}\\
&= -\nabla_\theta L(z, \hat{\theta})^\top H_{\hat{\theta}}^{-1} \nabla_\theta L(z_j, \hat{\theta}).
\end{aligned}$$ This is the formulation for IF that the referenced paper
uses. The IF value gives the relative change in the test loss value if
we increase $\epsilon$ by an infinitesimal amount from $\epsilon = 0$.
This "upweighing" represents the removal of $z_i$ from the loss
computation. It is large and positive when upweighting $z_j$ a bit
increases the loss by a lot (harmful) $\iff$ when downweighting $z_j$ a
bit decreases the loss by a lot. It is large and negative when
upweighting $z_j$ a bit decreases the loss significantly (helpful). This
formulation refers to *negative influence*. *As we would intuitively
expect a high influence value for a sample that decreases the loss a
lot, both this book and the Arnoldi
paper [@https://doi.org/10.48550/arxiv.2112.03052] consider the
definition*
$$\text{IF}(z_j, z) = \nabla_\theta L(z, \hat{\theta})^\top H_{\hat{\theta}}^{-1} \nabla_\theta L(z_j, \hat{\theta}).$$
Let us consider some remarks. Using a first-order Taylor approximation,
it is also clear that $$\begin{aligned}
L(z, \hat{\theta}_{\epsilon, j}) &= L(z, \hat{\theta}) + \epsilon \restr{\frac{\partial L(z, \hat{\theta}_{\epsilon, j})}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon).
\end{aligned}$$ One can study the behavior of the test loss when
perturbing the weight of sample $z_j$ by an infinitesimal amount, using
the above formula. The IF formula is also very symmetrical: we are
taking a modified dot product between the gradient of loss of the test
sample and the training sample.
From now on, we will use the latter definition for the influence
function (without the negative sign). We have discussed that
$H_{\hat{\theta}}$ is symmetric and PD. Therefore, it can be
orthogonally diagonalized, i.e., we can find a rotation/mirroring such
that in this new basis, the average Hessian on the training points is a
diagonal matrix: $$H_{\hat{\theta}} = Q \Lambda Q^\top$$ with an
orthogonal matrix $Q$ (with ortho*normal* columns), and its inverse is
given by $$H_{\hat{\theta}}^{-1} = Q \Lambda^{-1} Q^\top.$$ Therefore,
$$\begin{aligned}
\text{IF}(z_j, z) &= \nabla_\theta L(z, \hat{\theta})^\top H_{\hat{\theta}}^{-1} \nabla_\theta L(z_j, \hat{\theta})\\
&= \nabla_\theta L(z, \hat{\theta})^\top Q \Lambda^{-1} Q^\top \nabla_\theta L(z_j, \hat{\theta})\\
&= \left(Q^\top \nabla_\theta L(z, \hat{\theta})\right)^\top \Lambda^{-1} \left(Q^\top \nabla_\theta L(z_j, \hat{\theta})\right)\\
&= \left\langle Q^\top \nabla_\theta L(z, \hat{\theta}), Q^\top \nabla_\theta L(z_j, \hat{\theta}) \right\rangle_{\Lambda^{-1}}.
\end{aligned}$$ To calculate $\text{IF}(z_j, z)$, we rotate/mirror the gradient
vectors to transform them into the eigenbasis of the average Hessian,
then compute a generalized dot product between the gradients expressed
in the eigenbasis, weighted by the corresponding diagonal entries of
$\Lambda^{-1}$ (the inverse curvatures in each direction of the
eigenbasis). The dot product is, therefore, calculated in a *distorted
space*, where directions with the flattest curvature in the loss
landscape are given more weights. To get a high influence value, having
large positive values in these directions in the gradient vectors
expressed in the eigenbasis is more important.
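To make the formula concrete, here is a minimal sketch that computes $\text{IF}(z_j, z)$ exactly for a tiny, strictly convex model (ridge regression, where per-sample gradients and the Hessian have closed forms) and compares it against actual leave-one-out retraining. This exact computation is only feasible at such a small scale and is not the setup of the cited paper; the data and indices are illustrative.

```python
# Minimal sketch: exact influence function for ridge regression, checked against
# leave-one-out retraining (assumption: per-sample loss L(z, theta) = 0.5*(x^T theta - y)^2
# + 0.5*lam*||theta||^2, which is strictly convex so the derivation's assumptions hold).
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(X, y):
    m = len(y)                                     # minimizer of the regularized empirical risk
    return np.linalg.solve(X.T @ X / m + lam * np.eye(d), X.T @ y / m)

def grad(theta, x, y):                             # per-sample gradient of L(z, theta)
    return (x @ theta - y) * x + lam * theta

def loss(theta, x, y):
    return 0.5 * (x @ theta - y) ** 2 + 0.5 * lam * theta @ theta

theta_hat = fit(X, y)
H = X.T @ X / n + lam * np.eye(d)                  # average Hessian (constant for this loss)

x_test, y_test = rng.normal(size=d), 0.3           # an arbitrary test point z
j = 7                                              # training sample whose influence we measure
if_score = grad(theta_hat, x_test, y_test) @ np.linalg.solve(H, grad(theta_hat, X[j], y[j]))

# Removal corresponds to epsilon = -1/n, so (with the sign convention without the minus)
# the predicted change in test loss is IF / n; compare against actual retraining:
theta_loo = fit(np.delete(X, j, axis=0), np.delete(y, j))
actual = loss(theta_loo, x_test, y_test) - loss(theta_hat, x_test, y_test)
print(if_score / n, actual)                        # the two values should roughly agree
```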
The caveat is that *we might have millions or billions of parameters*.
Let $p := \text{number of parameters} = \cO(\text{millions-billions})$
and
$n := \text{number of training samples} = \cO(\text{millions-billions})$.
Then, the naive $H_{\hat{\theta}}^{-1}$ computation is $\cO(np^2 + p^3)$
where the $np^2$ part corresponds to computing $H_{\hat{\theta}}$ and
$p^3$ corresponds to computing its inverse. Computing
$H_{\hat{\theta}}^{-1}$ dominates the IF computation when $n$ is not
significantly larger than $p$. In practice, naive computation is
prohibitive and infeasible.
::: information
Proof of the Derivative Formula We start with the definitions
$$\hat{\theta} = \argmin_\theta \frac{1}{n}\sum_i L(z_i, \theta)$$ and
$$\hat{\theta}_{\epsilon, j} = \argmin_\theta \frac{1}{n} \sum_i L(z_i, \theta) + \epsilon L(z_j, \theta).$$
Following the strict convexity assumption, both of these values are
unique (we consider $\epsilon > -1/n$ s.t. all terms in the sum are
strictly convex). Fermat's theorem tells us that every extremum of a
differentiable function is a stationary point. Thus, a necessary
condition for the optimality of a differentiable function is that the
gradient at the optimum must be 0. (This is not sufficient, however:
stationary points can also be maxima and saddle points.) Therefore, the
previous optimality assumptions imply
$$\nabla_\theta \left(\frac{1}{n}\sum_i L(z_i, \hat{\theta})\right) = \frac{1}{n} \sum_i \nabla_\theta L(z_i, \hat{\theta}) = 0$$
(because the gradient is a linear operator) and
$$\nabla_\theta \left(\frac{1}{n}\sum_i L(z_i, \hat{\theta}_{\epsilon, j}) + \epsilon L(z_j, \hat{\theta}_{\epsilon, j})\right) = \frac{1}{n}\sum_i \nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j}) + \epsilon \nabla_\theta L(z_j, \hat{\theta}_{\epsilon, j}) = 0.$$
These are ingredients (1) and (2).
We also make use of the Implicit Function Theorem.
$\hat{\theta}_{\epsilon, j}$ is differentiable in $\epsilon$ at
$\epsilon = 0$. (The optimum of the modified loss is a differentiable
function of the perturbation parameter $\epsilon$.) Therefore, one can
consider the first-order Taylor expansion (by linearizing
$\hat{\theta}_{\epsilon, j}$ in $\epsilon$ around $\epsilon = 0$):
$$\hat{\theta}_{\epsilon, j} = \underbrace{\hat{\theta}}_{\restr{\hat{\theta}_{\epsilon, j}}{\epsilon = 0}} + \epsilon \underbrace{\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}}_{\in \nR^d} + o(\epsilon).$$
This is ingredient (3).
- We use linearization often, just like when attributing to test input
features.
- $f(\epsilon) = f(0) + (\epsilon - 0) \cdot \frac{\partial f(0)}{\partial \epsilon} + o(\epsilon)$
is the Taylor expansion of
$f(\epsilon) := \hat{\theta}_{\epsilon, j}$ around $\epsilon = 0$.
- $o(\epsilon)$ specifies
  $\lim_{\epsilon \rightarrow 0} \frac{R_1(\epsilon)}{\epsilon} = 0$. The
remainder term converges to $0$ faster than $\epsilon$ itself.
We compute $\nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j})$ in terms
of $\nabla_\theta L(z_i, \hat{\theta})$ as follows (by plugging in
ingredient (3)). We calculate the Taylor expansion of
$\nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j})$ in $\theta$, around
$\hat{\theta}$. $$\begin{aligned}
\nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j}) &= \nabla_\theta L \left(z_i, \hat{\theta} + \epsilon \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon)\right)\\
&= \nabla_\theta L(z_i, \hat{\theta}) + \nabla^2_\theta L(z_i, \hat{\theta}) \left(\epsilon \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon)\right) + o(\epsilon)\\
&= \nabla_\theta L(z_i, \hat{\theta}) + \epsilon \underbrace{\nabla^2_\theta L(z_i, \hat{\theta})}_{\in \nR^{d \times d}} \underbrace{\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}}_{\in \nR^d} + o(\epsilon)
\end{aligned}$$
- This formulation is, of course, given to later get rid of the
$o(\epsilon)$ terms and provide an approximation. The approximation
is justified because $\hat{\theta}_{\epsilon, j}$ is very similar to
$\hat{\theta}$ anyways for small $\epsilon$. The difference is
$$\epsilon \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon).$$
We plug
$$\nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j}) = \nabla_\theta L(z_i, \hat{\theta}) + \epsilon \nabla^2_\theta L(z_i, \hat{\theta}) \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon)$$
into the second ingredient
$$\frac{1}{n}\sum_i \nabla_\theta L(z_i, \hat{\theta}_{\epsilon, j}) + \epsilon \nabla_\theta L(z_j, \hat{\theta}_{\epsilon, j}) = 0.$$
This results in $$\begin{aligned}
&\frac{1}{n}\sum_i \left( \nabla_\theta L(z_i, \hat{\theta}) + \epsilon \nabla^2_\theta L(z_i, \hat{\theta}) \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon) \right)\\
&\hspace{2.8em}+ \epsilon \left(\nabla_\theta L(z_j, \hat{\theta}) + \epsilon \nabla^2_\theta L(z_j, \hat{\theta}) \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + o(\epsilon) \right) = 0\\
&\iff \frac{1}{n} \sum_i \nabla_\theta L(z_i, \hat{\theta}) + \epsilon \frac{1}{n}\sum_i\nabla^2_\theta L(z_i, \hat{\theta})\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0}\\
&\hspace{2.8em}+ \epsilon \nabla_\theta L(z_j, \hat{\theta}) + \epsilon^2 \nabla^2_\theta L(z_j, \hat{\theta})\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + (\epsilon + 1) o(\epsilon) = 0\\
&\iff \frac{1}{n} \sum_i \nabla_\theta L(z_i, \hat{\theta}) + \epsilon \frac{1}{n}\sum_i\nabla^2_\theta L(z_i, \hat{\theta})\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \epsilon \nabla_\theta L(z_j, \hat{\theta}) + o(\epsilon) = 0\\
&\overset{(1)}{\iff} \epsilon \frac{1}{n}\sum_i\nabla^2_\theta L(z_i, \hat{\theta})\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \epsilon \nabla_\theta L(z_j, \hat{\theta}) + o(\epsilon) = 0\\
&\iff \epsilon H_{\hat{\theta}}\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \epsilon \nabla_\theta L(z_j, \hat{\theta}) + \underbrace{o(\epsilon)}_{\lim_{\epsilon \rightarrow 0} \frac{R_1(\epsilon)}{\epsilon} = 0} = 0\\
&\iff H_{\hat{\theta}}\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \nabla_\theta L(z_j, \hat{\theta}) + \underbrace{\frac{o(\epsilon)}{\epsilon}}_{\lim_{\epsilon \rightarrow 0} \frac{R_1(\epsilon)}{\epsilon} = 0} = 0\\
&\iff H_{\hat{\theta}}\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \nabla_\theta L(z_j, \hat{\theta}) + o(1) = 0\\
&\overset{\epsilon \text{ small}}{\implies} H_{\hat{\theta}}\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} + \nabla_\theta L(z_j, \hat{\theta}) \approx 0\\
&\iff \restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} \approx -H_{\hat{\theta}}^{-1}\nabla_\theta L(z_j, \hat{\theta})
\end{aligned}$$
:::
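The result $\restr{\frac{\partial \hat{\theta}_{\epsilon, j}}{\partial \epsilon}}{\epsilon = 0} \approx -H_{\hat{\theta}}^{-1}\nabla_\theta L(z_j, \hat{\theta})$ can be turned into a concrete computation when the model is small. The following is a minimal, self-contained sketch (not taken from the referenced papers) for L2-regularized logistic regression, where the exact average Hessian is small enough to form and solve against directly; the synthetic data, the regularization strength, and the choice of which points play the roles of $z$ and $z_j$ are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n, d = 200, 5
X = torch.randn(n, d)
y = (X @ torch.randn(d) > 0).float()
reg = 1e-2  # L2 regularization strength (keeps the objective strictly convex)

def nll(theta, Xb, yb):
    # (Mean) logistic loss L(z, theta) on a batch of samples z = (Xb, yb).
    return torch.nn.functional.binary_cross_entropy_with_logits(Xb @ theta, yb)

def objective(theta):
    # Empirical risk: average training loss plus the L2 regularizer.
    return nll(theta, X, y) + 0.5 * reg * theta.dot(theta)

# Train to (near) optimality so the stationarity assumption roughly holds.
theta = torch.zeros(d, requires_grad=True)
opt = torch.optim.LBFGS([theta], max_iter=200)
def closure():
    opt.zero_grad()
    loss = objective(theta)
    loss.backward()
    return loss
opt.step(closure)

def grad(Xb, yb):
    t = theta.detach().requires_grad_(True)
    return torch.autograd.grad(nll(t, Xb, yb), t)[0]

# Average Hessian of the training objective at the trained parameters.
H = torch.autograd.functional.hessian(objective, theta.detach())

g_test = grad(X[:1], y[:1])      # pretend the first point is a held-out test z
g_train = grad(X[10:11], y[10:11])

# IF(z_j, z) = grad_z^T H^{-1} grad_{z_j}; positive means z_j is helpful for z.
influence = g_test @ torch.linalg.solve(H, g_train)
print(float(influence))
```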
Generally, the focus of research in influence function computation is
how to speed things up while keeping approximations accurate.
Interestingly, one can speed things up a lot.
### LISSA
LISSA [@agarwal2017secondorder] is a method the authors of
"[Understanding Black-box Predictions via Influence
Functions](https://arxiv.org/abs/1703.04730)" [@https://doi.org/10.48550/arxiv.1703.04730]
use to keep the inverse average Hessian calculation tractable. The LISSA
algorithm uses an iterative approximation to approximate the inverse
Hessian vector product (iHVP):
$$H_{\hat{\theta}}^{-1}\nabla_\theta L(z, \hat{\theta}).$$ (Note that the
average Hessian matrix, and hence its inverse, is symmetric.) For each test point of
interest, they can precompute the above vector, and then they can
efficiently compute the dot product between it and
$$\nabla_\theta L(z_i, \hat{\theta})$$ for each training sample $z_i$.
This also helps with the quadratic scaling of the size of the average
Hessian, as instead of computing the inverse average Hessian directly,
they approximate the Matrix-Vector (MV) product through the iterative
procedure.
The iterative approximation uses the fact that
$$A^{-1} = \sum_{k = 0}^\infty (I - A)^k$$ for an invertible matrix $A$
whose eigenvalues all lie in $(0, 1)$, so that the Neumann series
converges. At the small cost of some inaccuracy, this yields a large
speedup. The authors obtain a further speedup by subsampling the training
data in the summation (similar to how we do SGD-based
optimization):
$$H_{\hat{\theta}} \approx \frac{1}{|I|} \sum_{i \in I} \nabla^2_\theta L(z_i, \hat{\theta}).$$
Averaging Hessians over all training samples is infeasible. If we
have a good representation of our training samples, we do not need to do
a complete pass over them. Random subsets very likely
give a good representation.
The final procedure estimates the inverse Hessian-vector product (iHVP)
as $$H_i^{-1}v = v + (I - H_{\hat{\theta}})H_{i - 1}^{-1}v,$$ where
$H_{\hat{\theta}}$ is approximated on random batches (of size one or a
small enough size), $v = \nabla_\theta L(z, \hat{\theta})$, and
$i \in [t]$ is a particular iteration of the method ($H_0^{-1}v = v$).
Using this technique, the authors reduce the time complexity of
computing IF($z_j, z$) for all training points and a single test point
to $\cO(np + rtp)$ where $r$ is the number of independent repeats of the
iterative HVP calculation (where they average the results from the $r$
runs) and $t$ is the number of iterations.
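The following is a minimal sketch of this iterative iHVP estimate, not the authors' implementation. It assumes that `params` is a list of model parameter tensors (with gradients enabled), that `loss_fn(params, batch)` returns the mini-batch loss, that `batches` iterates over training batches, and that `damping` and `scale` are stabilization constants chosen so that the (scaled, damped) Hessian satisfies the eigenvalue condition above.

```python
import torch

def hvp(loss, params, vec):
    # Hessian-vector product via double backpropagation.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    prod = torch.autograd.grad(flat @ vec, params)
    return torch.cat([p.reshape(-1) for p in prod])

def lissa_ihvp(v, params, loss_fn, batches, num_iter=100, damping=0.01, scale=10.0):
    estimate = v.clone()
    for _, batch in zip(range(num_iter), batches):
        loss = loss_fn(params, batch)
        hv = hvp(loss, params, estimate)
        # Recursion H_i^{-1} v = v + (I - H) H_{i-1}^{-1} v, with damping and
        # scaling applied to keep the iteration stable.
        estimate = (v + (1 - damping) * estimate - hv / scale).detach()
    return estimate / scale
```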
**Note**: LISSA already existed before the seminal IF paper -- the
authors adapted it to their method.
### Arnoldi
Arnoldi, introduced in the paper "[Scaling Up Influence
Functions](https://arxiv.org/abs/2112.03052)" [@https://doi.org/10.48550/arxiv.2112.03052],
is a method for speeding up influence function calculations and reducing
their memory requirements. Calculating and keeping the vector
$H_{\hat{\theta}}^{-1}\nabla_\theta L(z, \hat{\theta})$, which has as many
dimensions as the model has parameters (possibly billions), in memory is
still very restrictive, and very coarse approximations (e.g.,
considering only a subset of parameters) are needed. If we consider the
diagonalized formula for IF, $H_{\hat{\theta}}$ is written as
$$H_{\hat{\theta}} = Q \Lambda Q^\top$$ and the formula is
$$\operatorname{IF}(z_j, z) = \left\langle Q^\top \nabla_\theta L(z, \hat{\theta}), Q^\top \nabla_\theta L(z_j, \hat{\theta}) \right\rangle_{\Lambda^{-1}}.$$
Here, $Q \in \nR^{p \times p}$, which is infeasibly large for efficient
use. Setting $G \in \nR^{k \times p}$ to contain, as rows, the $k$
eigenvectors of $H_{\hat{\theta}}$ that correspond to its largest
eigenvalues (i.e., $G$ is the truncation of $Q^\top$ that projects onto
the span of the "top $k$ eigenvectors"), we obtain
$$\operatorname{IF}(z_j, z) = \left\langle G \nabla_\theta L(z, \hat{\theta}), G \nabla_\theta L(z_j, \hat{\theta}) \right\rangle_{\Lambda_k^{-1}}$$
where $$H_{\hat{\theta}} \approx G^\top \Lambda_k G.$$ By using this
formulation, we map the gradients to a much lower-dimensional ($k$)
space, and computations (dot product) become notably faster. $G$ and
$\Lambda_k$ are calculated once and then cached.
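A minimal sketch of how an influence score is evaluated once the low-rank factors are available; `G` (a $k \times p$ matrix of approximate top eigenvectors as rows) and `lam` (the $k$ corresponding eigenvalues) are assumed to come from the Arnoldi-based eigendecomposition, which is not shown here.

```python
import torch

def projected_influence(grad_test, grad_train, G, lam):
    # Map the p-dimensional gradients into the k-dimensional eigenspace.
    q_test = G @ grad_test     # shape (k,)
    q_train = G @ grad_train   # shape (k,)
    # Weighted dot product <u, v>_{Lambda_k^{-1}} = sum_i u_i v_i / lambda_i.
    return torch.sum(q_test * q_train / lam)
```

In practice, the projected training gradients $G \nabla_\theta L(z_j, \hat{\theta})$ are precomputed and cached for all training samples, so each query only costs a $k$-dimensional weighted dot product.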
The top $k$ eigenvalues of $H_{\hat{\theta}}$ correspond to the smallest
$k$ eigenvalues of its inverse; thus, we keep the directions that carry
the least weight in calculating IF. Very curiously, the authors report
that selecting the top $k$ eigenvalues of the inverse (corresponding to
the dominant terms of the dot product) performs worse. DNN loss
landscapes are highly non-convex, and the Hessian can have negative
eigenvalues. The authors select the top $k$ eigenvalues *in absolute
value*.
The actual Arnoldi method is much more detailed and sophisticated in
obtaining the referenced matrices, but the main idea is the same as was
introduced here.
::: information
Using a Subset of the Parameters Instead of the entire model, one can
also use only the final or initial layers for $\theta$. This
dramatically reduces $p$ by orders of magnitude. It has two drawbacks:
the choice of layers becomes a hyperparameter, and the viable values of
the number of parameters kept will depend on the model architecture.
Using just one layer can result in different influence estimates than
those based on the whole model. This is deemed suboptimal and is not
used in Arnoldi but was used in earlier work,
e.g., [@https://doi.org/10.48550/arxiv.1703.04730].
:::
::: information
Reducing the Search Space The following speedup is also compatible with
Arnoldi, although the authors do not use it. (They use Arnoldi for the
retrieval of wrong labels via self-influence, a task this speedup is not
aligned with.) It is used
in [FastIF](https://aclanthology.org/2021.emnlp-main.808/).
$\text{IF}(z_j, z)$ is already quite expensive. We should not calculate
it for all $j$. The end goal is usually to retrieve influential training
samples $z_j$ for the test sample $z$. Instead of computing
$\text{IF}(z_j, z)$ for all training samples $z_j$, we first reduce the
search space (for candidate training samples that are likely to
influence our test samples) via cheap, approximate search. This greatly
reduces computational load. For example, we can perform $L_2$-distance
$k$-NN using the last layer features to retrieve candidates. This is a
typical trick for deep metric learning: We take the last layer
activations from a network as a good representation of our sample in a
lower-dimensional space. We compute the Euclidean distance in this space
and retrieve the top $k$ semantically similar samples from the training
set to the test sample. Instead of taking the top $k$ samples, we could
also threshold by the $L_2$ radius. Both ways reduce the search space by
a lot; thus, they reduce computational time.
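A minimal sketch of this pre-filtering step, assuming `feats_train` ($n \times d$) and `feat_test` ($d$-dimensional) are last-layer features already extracted from the network; only the returned candidate indices would then receive exact IF scores.

```python
import torch

def knn_candidates(feats_train, feat_test, k=100):
    # L2 distances between the test feature and all training features.
    dists = torch.cdist(feat_test.unsqueeze(0), feats_train).squeeze(0)
    # Indices of the k nearest (most semantically similar) training samples.
    return torch.topk(dists, k, largest=False).indices
```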
:::
### LISSA vs. Arnoldi
::: {#tab:results2}
Method                            $\tilde{p}$   $T$, secs   AUC        AP
--------------------------------- ------------- ----------- ---------- ----------
LISSA, $r=10$                     \-            4900        98.9       95.0
LISSA, $r=100$ (10% $\Theta$)     \-            32300       98.8       94.8
TracIn\[1\]                       \-            5           98.7       94.0
TracIn\[10\]                      \-            42          **99.7**   **98.7**
RandProj                          10            0.2         97.2       87.7
RandProj                          100           1.9         98.6       93.9
RandSelect                        10            0.1         54.9       31.2
RandSelect                        100           1.8         91.8       72.6
Arnoldi                           10            0.2         95.0       84.0
Arnoldi                           100           1.9         98.2       92.9
: "Retrieval of mislabeled MNIST examples using self-influence for
larger CNN. For TracIn the $C$ value is in brackets (last or all). All
methods use full models (except the LISSA run on 10% of parameters
$\Theta$)." [@https://doi.org/10.48550/arxiv.2112.03052] TracIn\[10\]
gives the best results while staying feasible to compute. RandProj is
also a surprisingly strong method. Table is adapted
from [@https://doi.org/10.48550/arxiv.2112.03052].
:::
Results [@https://doi.org/10.48550/arxiv.2112.03052] of Arnoldi and
various other methods are given in
Table [3.3](#tab:results2){reference-type="ref"
reference="tab:results2"}. The Arnoldi authors compute AUC and AP for
the retrieval of wrong labels. They try to retrieve the wrongly put
labels in the training set using self-influence. The task is not exactly
aligned with removing training samples and retraining (precise
estimation of IF) -- this is why RandProj can also perform quite well.
In fact, it performs better than Arnoldi. (It does not need to give
precise IF estimates!) In RandProj, $G$ is a random Gaussian matrix: It
does not correspond to the eigenvectors of the top $k$ eigenvalues.
Eigenvalues are all considered to be one. In RandSelect, the eigenvalues
are also all considered to be one, and we randomly select the (same)
subset of coordinates of the two gradient vectors. It needs a much larger
$k$ than RandProj. Arnoldi is $10^3$--$10^5$ times faster than LISSA while
being only a couple of percent worse on AUC and AP.
TracIn [@https://doi.org/10.48550/arxiv.2002.08484] performs best on AUC
and AP, but Arnoldi and RandProj are an order of magnitude faster.
### TracIn
![Loss value of a 'zucchini' sample over the course of training. The
initial loss value for the test sample at the beginning of training is
shown on the left. Test losses are evaluated *after* being presented
with the shown images. Different training samples have a different
impact on the test loss of interest. When we have a similar image but
the label is different, the loss goes up for the test sample. Samples
that increase the loss are called opponents to the test sample of
interest. Samples that decrease the loss are the proponents of a test
sample. We can find proponents and opponents for each training sample
during training by considering the changes in the loss value. The
'Zucchini' training image results in a lower test loss, as the model
learns to detect zucchinis better. The 'Sunglasses' training image is
similar to the seatbelt images (car interior shown) but has a
non-car-related label. The model has to focus on a small part of the
image to predict correctly. This implicitly helps the prediction of
'zucchini'. **Note**: During training, the general trend of the loss
should be downwards, but because of the noisy behavior of SGD (e.g.,
update after every training sample) and the possibility of overfitting,
the test loss of sample $z$ does not have to decrease at every gradient
step.](gfx/03_tracin.png){#fig:tracin width="0.8\\linewidth"}
Are we asking the right question in the previous set of methods to
attribute to training samples? We only measure the change in the loss of
the optimal model, given an infinitesimal change in the weight of one of
the training samples. This sounds super naive and irrelevant in
practice. Who would want to introduce an infinitesimal change in the
weight of one of the samples?
TracIn, presented in the paper "[Estimating Training Data Influence by
Tracing Gradient
Descent](https://arxiv.org/abs/2002.08484)" [@https://doi.org/10.48550/arxiv.2002.08484]
is another approach from 2020: We decompose the final test loss
$L(z, \theta_T)$ of the trained model $\theta_T$ minus the baseline loss
$L(z, \theta_0)$ of the randomly initialized model $\theta_0$ into
contributions from individual training samples. This is *global
linearization* of the final test loss over the update steps.[^51]
Figure [3.53](#fig:tracin){reference-type="ref" reference="fig:tracin"}
gives an intuitive introduction to TracIn -- it considers *useful* and
*harmful* examples. *TracIn is the Integrated Gradients for training
sample attribution.*[^52]
The loss for the test sample at final iteration $T$ can be written as
the telescopic sum (i.e., everything cancels):
$$L(z, \theta_T) = L(z, \theta_0) + (L(z, \theta_1) - L(z, \theta_0)) + \dots + (L(z, \theta_T) - L(z, \theta_{T - 1})).$$
It is the sum of the original loss value and the loss differences
between consecutive parameter updates.
Let us first consider the case when $\theta$ is updated for every single
training sample $z_j$ (batch size $= 1$). There is a clear, unique
assignment of which training sample affects which parameter update step.
(We never do this in practice, but we assume this for simplicity.) Then
there is a natural notion of contribution of $z_j$ to the test loss
$L(z, \theta_T)$:
$$\operatorname{TracInIdeal}(z_j, z) = \sum_{t: z_j \text{ used for } \theta_t \text{ update}} L(z, \theta_t) - L(z, \theta_{t - 1}).$$
The summation is over the changes of loss for test sample $z$, where the
parameter update was done by training on a sample of interest $z_j$.
There will be millions/billions of iterations. We want to determine
which of these iterations corresponds to the training sample of
interest; then, we sum up these differences. This results in a
completeness property (refer back to the Integrated Gradients method for
test feature attribution): $$\begin{aligned}
L(z, \theta_T) - L(z, \theta_0) &= \sum_{t = 1}^T L(z, \theta_t) - L(z, \theta_{t - 1})\\
&= \sum_{j = 1}^n \sum_{t: z_j \text{ used for } \theta_t \text{ update}} L(z, \theta_t) - L(z, \theta_{t - 1})\\
&= \sum_{j = 1}^n \operatorname{TracInIdeal}(z_j, z).
\end{aligned}$$ This follows from each time step corresponding to a
unique training sample. Thus, we have a decomposition of (final loss -
baseline loss) into individual contributions.
Of course, the critical issue with this formulation is that, in
practice, we update models on a *batch* of training samples. When there
is a parameter update, it is hard to attribute the change in loss (due
to the update) to individual training samples in the batch. Many
training samples are involved in the difference
$L(z, \theta_t) - L(z, \theta_{t - 1})$.
Each parameter update with SGD looks as follows:
$$\theta_{t + 1} = \theta_t - \frac{\eta_t}{|B_t|} \sum_{i: z_i \in B_t} \nabla_\theta L(z_i, \theta_t)$$
where $\eta_t$ is the learning rate at step $t$ and $|B_t|$ is the size
of the batch at step $t$. This is usually kept fixed, but we often do
not drop the last truncated batch that has a smaller size. We average
the gradients over the batch. We have a nice decomposition of the
parameter update steps as a sum of individual training-sample-wise
gradients for the loss (in the batch).
We rewrite the loss $L(z, \theta_{t + 1})$ with parameters from time
step $t + 1$ as $$\begin{aligned}
L(z, \theta_{t + 1}) &= L\left(z, \theta_t - \frac{\eta_t}{|B_t|} \sum_{i: z_i \in B_t} \nabla_\theta L(z_i, \theta_t)\right)\\
&= L\left(z, \theta_t\right) + \left(- \frac{\eta_t}{|B_t|} \sum_{i : z_i \in B_t}\nabla_\theta L(z_i, \theta_t)\right)^\top \nabla_\theta L(z, \theta_t) + o(\eta_t)
\end{aligned}$$ where we performed a Taylor expansion of
$L(z, \theta_{t + 1})$ around $\eta_t = 0$
($f(\eta_t) = f(0) + \eta_t f'(0) + o(\eta_t)$) or around $\theta_t$
($f(\theta_{t + 1}) = f(\theta_t) + (\theta_{t + 1} - \theta_{t})^\top \nabla_\theta f(\theta_t) + o(\|\theta_{t + 1} - \theta_{t}\|)$).
We can choose either because the two quantities are linearly related. This
is an *accurate approximation* because we are using a small learning
rate (e.g., $10^{-3}$), so $\theta_{t + 1}$ is close to $\theta_t$.
Therefore,
$$L(z, \theta_t) - L(z, \theta_{t + 1}) \approx \frac{\eta_t}{|B_t|} \sum_{i: z_i \in B_t} \nabla_\theta L(z_i, \theta_t)^\top \nabla_\theta L(z, \theta_t).$$
In words, the difference in loss values before and after the update is
approximately equal to some constant times a summation over dot products
of training sample gradients with the test sample gradient. There is a
natural decomposition of the contribution of individual samples in the
batch towards the difference in the loss. When this difference is a
large positive number, it means the batch samples were useful for the
test sample $z$. This is the particular reason why we "flip the sign"
and choose to model $L(z, \theta_t) - L(z, \theta_{t + 1})$.
A natural notion of the contribution of sample $z_j$ towards the
difference in losses for this particular update is given by
$$\frac{\eta_t}{|B_t|}\nabla_\theta L(z_j, \theta_t)^\top \nabla_\theta L(z, \theta_t)$$
when $z_j$ is included in batch $B_t$ for updating $\theta$ and 0
otherwise. Using this approach, we make attributing to individual
training samples feasible in practice. This is a constant times the dot
product between the test sample of interest gradient and the training
sample of interest gradient. This is similar to what we have seen in the
previous methods.
Summing over the entire trajectory of model updates, we define the
contribution of $z_j$ towards the loss for $z$ as
$$\operatorname{TracIn}(z_j, z) = \sum_{t: z_j \in B_t} \frac{\eta_t}{|B_t|}\left\langle \nabla_\theta L(z_j, \theta_t), \nabla_\theta L(z, \theta_t) \right\rangle.$$
This is the final definition of TracIn, the trajectory-based influence
of sample $z_j$ towards test sample $z$. It is simply a summation of all
parameter update steps $t$ that contained $z_j$ in the batch. These are
the only relevant terms; the others are $0$. The smaller the loss
becomes on test sample $z$ between steps $t$ and $t + 1$, the more we
attribute to those training samples that were in the batch of step $t$.
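In practice, the sum over every single SGD step is approximated by a sum over a few stored checkpoints. The following is a minimal sketch under that approximation (not the authors' code); `model`, a per-sample `loss_fn(model, sample)`, and a list of `(state_dict, learning_rate)` checkpoints are assumed to be given, and the $1/|B_t|$ factor is assumed to be folded into the stored learning rate.

```python
import torch

def flat_grad(model, loss_fn, sample):
    # Gradient of the per-sample loss, flattened into a single vector.
    model.zero_grad()
    loss_fn(model, sample).backward()
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()])

def tracin_score(model, loss_fn, checkpoints, z_train, z_test):
    # checkpoints: list of (state_dict, learning_rate) pairs saved during training.
    score = 0.0
    for state_dict, lr in checkpoints:
        model.load_state_dict(state_dict)
        g_train = flat_grad(model, loss_fn, z_train)
        g_test = flat_grad(model, loss_fn, z_test)
        score += lr * torch.dot(g_train, g_test).item()
    return score
```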
### TracIn vs. IF
These two methods have very similar formulations but also some key
differences. $$\begin{aligned}
\operatorname{IF}(z_j, z) &= \left\langle Q^\top \nabla_\theta L(z, \hat{\theta}), Q^\top \nabla_\theta L(z_j, \hat{\theta}) \right\rangle_{\Lambda^{-1}}\\
\operatorname{TracIn}(z_j, z) &= \sum_{t: z_j \in B_t} \frac{\eta_t}{|B_t|}\left\langle \nabla_\theta L(z_j, \theta_t), \nabla_\theta L(z, \theta_t) \right\rangle
\end{aligned}$$ Both use a form of a dot product between parameter
gradients for the training and test samples. It is quite impressive that
the final formulations end up being so simple, but it is a natural
byproduct of linearization. TracIn sums over training iterations
(checkpoints) and does not use a Hessian-based distortion of the dot
product (to squeeze/expand some of the eigenbasis directions). It is,
therefore, cheaper because we do not need to compute the Hessian.
However, it is very memory intensive. In contrast, IF considers only the
final[^53] parameters and distorts the space using the average Hessian.
Using IF, we are missing out on all contributions on the way during
training. Intuitively, TracIn makes more sense, but it is hard (if not
impossible) to say which one is better conceptually. We can only use
empirical evaluation to tell which serves our purpose better. IF has
many more assumptions that are also violated in practice. The method
considers globally optimal parameter configurations, strict convexity,
and twice-differentiability. In practice, the Hessian eigenvalues can
become negative (saddle points) or $0$; with a $0$ eigenvalue (i.e., the
loss is constant in some directions), invertibility does not hold. Both
can happen during optimization, so a small damping epsilon has to be
added.
## Evaluation of Attribution to Test Samples
There are two perspectives on evaluating such methods: (1) comparing
approximate values against their GT counterparts and (2) evaluating such
attribution methods based on some end goals/downstream tasks.
### Comparison of Approximation Against GT Value
IF approximates the remove-and-retrain algorithm (remove a certain
training sample, retrain, and see how much that influences the loss
value for the test sample of interest). One can measure *soundness* by
comparing influence values against the actual remove-and-retrain
baseline. This is an evaluation of soundness. To see the correspondence,
consider the first-order Taylor approximation of
$L(z, \hat{\theta}_{\epsilon, j})$ again around $\epsilon = 0$. To avoid
confusion, we stick to the definition of IF where a larger positive
value signals positive influence.[^54] We have
$$L(z, \hat{\theta}_{\epsilon, j}) - L(z, \hat{\theta}) = \epsilon \underbrace{\restr{\frac{\partial L(z, \hat{\theta}_{\epsilon, j})}{\partial \epsilon}}{\epsilon = 0}}_{-\operatorname{IF}(z_j, z)} + o(\epsilon).$$
The notion of removal is equivalent to setting $\epsilon = -1/n$, which
is generally a very small number, so the linear approximation stays
reasonably faithful to the actual loss function. We finally obtain
$$L(z, \hat{\theta}_{\setminus j}) - L(z, \hat{\theta}) \approx \frac{1}{n} \operatorname{IF}(z_j, z).$$
To benchmark IF on how faithful it is to the remove-and-retrain
algorithm, we can compare the left quantity to the right one. We might
be interested in removing not just one sample but a group of them. This
is not modeled by the most naive version of remove-and-retrain that IF
approximates.
![Comparison of the predicted difference in loss after the removal of a
sample (i.e., the IF value) against the actual difference in loss.
Figure taken from [@https://doi.org/10.48550/arxiv.1703.04730]. The
benchmark measures faithfulness to leave-one-out retraining on MNIST.
For every training and test sample, we measure the change in test loss
by actually removing a training sample. This is quite fast for a linear
model. We also have the predicted difference in loss through the IF
computation. We compare the two values. *Left.* The gradient-based
approximation of the influence (times $1/n$ to model removal) gives
nearly the same result as the actual difference in the loss for the
linear model (logistic regression). It is a good sanity check that the
exact Hessian computation performs well approximation-wise. *Middle.*
"Linear (approx)" still considers logistic regression but uses the LISSA
approximations to speed up the Hessian computation. Even if we use LISSA
to approximate the average Hessian, we do not lose much accuracy.
*Right.* To evaluate on CNNs, we must take a leap of faith. The logistic
regression optimization is strictly convex, but the CNN one is, of
course, not. They apply the method to a small CNN on MNIST. We can see
some correlation, but many things are seemingly not working anymore. Two
groups follow the overall trend, but we do not see much correlation
between the actual and the predicted value *within* each group. We have
mixed results.](gfx/03_soundness.png){#fig:soundness
width="0.8\\linewidth"}
Soundness results of IF are shown in
Figure [3.54](#fig:soundness){reference-type="ref"
reference="fig:soundness"}. We can also try leaving a group of samples
out from the training set and seeing how the model reacts regarding the
change in the loss for a test sample. This is shown in
Figure [3.55](#fig:groupout){reference-type="ref"
reference="fig:groupout"} from the FastIF
paper [@https://doi.org/10.48550/arxiv.2012.15781], which is yet another
paper on how we can speed up IF computations. The task is MNLI, a
3-class natural language inference task with classes entailment,
neutral, and contradiction. The group of samples we remove is determined
by the influence values (we sort all training samples according to their
influence values). The influence value has a sign: it can be positive or
negative. Using this book's IF definition, positive means that including
the sample helps, and negative means that by including this training
sample ($\epsilon > 0$), we are increasing the loss. The general trend
is that removing helpful samples increases the loss. (It is harmful to
remove the samples with a high IF value.) Similarly: removing harmful
samples decreases the loss. (It is useful to remove the samples with a
low IF value.) By just removing random samples, we do not see much
change in the test loss. "Full" means we use the entire dataset. The KNN
versions correspond to selecting representative samples from the
training set. We can see that this can even be beneficial.
![Leave-M-out results on
[MNLI](https://cims.nyu.edu/~sbowman/multinli/) [@N18-1101]. "Change in
loss on the data point after retraining, where we remove
$m_\text{remove} \in \{1, 5, 25, 50, 100\}$ data-points \[either
positives or negatives\]. We can see that the fast influence algorithms
\[(the KNN versions)\] produce reasonable quality estimations at just a
fraction of computation
cost." [@https://doi.org/10.48550/arxiv.2012.15781] Correct and
incorrect mean that the original predictions were correct/incorrect.
Figure taken
from [@https://doi.org/10.48550/arxiv.2012.15781].](gfx/03_soundness2.pdf){#fig:groupout
width="0.8\\linewidth"}
### Focus on the End Goal: Mislabeled Training Data Detection
IF and TracIn ultimately serve certain end goals.
Remove-and-retrain may not be very useful as the end goal. For example,
when the actual end goal is to debug/improve the model/dataset,
faithfulness to the remove-and-retrain algorithm is not of particular
interest. It is just an intermediate step (a proxy) for using the method
for improving models.[^55] We need to evaluate based on more reasonable
end goals, e.g., mislabeled training data detection. We will see how
people can use influence functions to detect mislabeled training
samples. Checking, e.g., how well our method approximates
remove-and-retrain can be a good sanity check for whether our proposed
idea works. Then, we evaluate the method using the actual end goal. This can
change the conclusion of which method is better for us.
::: definition
Self-Influence Self-influence is a metric used in training sample
attribution methods that measures how much contribution a particular
training sample $z_j$ has to its own loss.
**Example**: Using influence functions, the self-influence score for
sample $z_j$ is $\operatorname{IF}(z_j, z_j)$. Using TracIn, we can use
$\operatorname{TracIn}(z_j, z_j)$ as a self-influence score.
:::
We make use of self-influence scores for mislabeled training data
detection. If a sample is one of a kind, then it only has itself to
decrease its loss; therefore, we expect a high self-influence score.
Looking at that exact sample is the only way to decrease the loss of
that sample. On the other hand, if the sample is just like other data
points in the training set, then it is among many that decrease its
loss; therefore, we expect a low self-influence score: Including it or
not has little influence. Mislabeled data are typical examples of "one
of a kind" data. As such, we expect high self-influence scores for
them.[^56] By measuring self-influence, we should be able to tell which
samples are mislabeled.
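A minimal sketch of this retrieval-style evaluation; `self_influence(j)` (returning $\operatorname{IF}(z_j, z_j)$ or $\operatorname{TracIn}(z_j, z_j)$), the number of training samples `n`, and the ground-truth flags `is_mislabeled` are placeholders for quantities provided by the benchmark setup, not real APIs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

scores = np.array([self_influence(j) for j in range(n)])  # higher = more suspicious
print("AUROC:", roc_auc_score(is_mislabeled, scores))
print("AP:   ", average_precision_score(is_mislabeled, scores))
```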
![Results of using self-influence to detect mislabeled training samples
on CIFAR. *Left*. Fixing the mislabeled data found within a certain
fraction of the training data results in a larger improvement in test
accuracy for TracIn compared to the other methods. *Right*. TracIn
retrieves mislabeled samples much better than IFs. Figure taken
from [@https://doi.org/10.48550/arxiv.2002.08484].](gfx/03_mislabel.pdf){#fig:selfinf
width="0.8\\linewidth"}
Figure [3.56](#fig:selfinf){reference-type="ref"
reference="fig:selfinf"} shows benchmark results on mislabeled training
data detection from the TracIn paper. Mislabeled training data detection
is a typical binary detection task: we want to classify mislabeled/not
mislabeled. We know the ground truth in the benchmark; we try to
retrieve the mislabeled ones in the training set. We can use detection
metrics like AUROC and AP (AUPR), which are typical evaluation scores
for retrieval tasks. Our only feature is the attribution score. The
question is, "Is there a threshold that is extremely good for separating
mislabeled samples from not mislabeled?" In this benchmark, however,
they are *not* doing that. Instead, they sort training samples according
to self-influence values and then decrease the threshold from top to
bottom and see how many mislabeled samples are retrieved.
![Results of using self-influence to detect mislabeled training samples
on MNIST using a small CNN. AUC for retrieval of mislabeled MNIST
examples as a function of the number of eigenvalues (projections),
$\tilde{p}$. Figure taken
from [@https://doi.org/10.48550/arxiv.2112.03052].](gfx/03_mislabel2.png){#fig:mislabel2
width="0.6\\linewidth"}
We also discuss using self-influence to detect mislabeled training
samples on MNIST. The results are shown in
Figure [3.57](#fig:mislabel2){reference-type="ref"
reference="fig:mislabel2"}. The task is not perfectly aligned with IF
computation: the exact method can be surpassed.
Finally, we discuss the retrieval of mislabeled MNIST examples using
self-influence for a larger CNN. As discussed before, AUC and AP are
usual detection metrics for mislabeled samples. The results are shown in
Table [3.3](#tab:results2){reference-type="ref"
reference="tab:results2"}.
## Applications of Attribution to Test Samples
#### Fact Tracing
![Illustration of using training data attribution scores for fact
tracing. Figure taken
from [@https://doi.org/10.48550/arxiv.2205.11482].](gfx/03_facttracing.png){#fig:facttracing
width="0.5\\linewidth"}
We discuss fact tracing, an important application of test sample
attribution, as shown in the paper "[Towards Tracing Factual Knowledge
in Language Models Back to the Training
Data](https://arxiv.org/abs/2205.11482)" [@https://doi.org/10.48550/arxiv.2205.11482].
Suppose we have a language model that is trained to predict missing
words using actual facts, and we have built a dataset with the GT fact
attributions in the training set. Then we can measure fact retrieval
performance: We evaluate any Training Data Attribution method on its
ability to identify the so-called true proponents, i.e., the true
training sample information sources. We want to retrieve the true
proponents out of a large set of training examples, which is, again, a
classical retrieval task. This is illustrated in
Figure [3.58](#fig:facttracing){reference-type="ref"
reference="fig:facttracing"}.
It is a natural question to ask the model, "Did you just make this up?
Which training datum did you look at to make this decision?" Nowadays,
fact tracing matters a lot, and training data influence can be readily
used for it. LLMs are critical candidates for this method. We cannot be
sure how it would scale, but it is something to keep an eye out for.
#### Membership Inference
Given a model and arbitrary data we give to the model, we wish to see
whether that data was included in the training of that model.
- "Was this image used for training the DALL-E model?"
- "Was this image used for generating the current image that I got?"
Being able to answer such questions could be a nice tool for dealing
with copyright issues for large-scale generative models. Influence
functions, and training data attribution in general, could be used for
this. Suppose we had access to the training set. Then, we could sort the
training samples by their influence scores in decreasing order and
manually check whether the sample was used (soft filtering).
Alternatively, if we are really searching for exact matches, we could
search for matches according to the ordering given by influence function
scores. Hashing already works for checking for exact matches very
efficiently, but it would not work for matches that are not exact (e.g.,
when JPEG encoding/decoding is applied). There
are only very few papers in this area so far, but it is gaining
traction. Large companies are probably also already working on this
problem.
# Uncertainty
## Introduction to Uncertainty Estimation
Uncertainty is everywhere. Having complete information and a perfect
understanding of a system can only happen in simple and closely
controlled environments. The world around us is not such an environment.
Humans learn to build complex internal models of uncertainty to cope
with incomplete information and react robustly to events that either
have not happened yet or are only partially observed.
Understanding, quantifying, and evaluating uncertainty is of crucial
importance in our everyday lives, but also in fields specialized to cope
with and leverage uncertainty. Examples include financial analysis,
economic decisions, general statistics, probabilistic modeling, and also
machine learning. Classical ML theory usually did not aim to *make
systems know when they do not know* -- the main goal was to find methods
and solutions that work well, considering them as standalone components.
These days, accuracy in most applications is not the biggest concern --
most ML solutions provide reasonably good accuracy in several tasks.
Instead, there is an ever-increasing demand to quantify sources of
uncertainty in ML models and make them understand their own limitations.
As we will soon see, uncertainty quantification is a crucial requirement
whenever we want to incorporate an ML solution into a certain pipeline.
In the Uncertainty chapter of the book, we are going to further motivate
the need for uncertainty estimation, quantify sources of uncertainty,
consider methods that can give us different kinds of uncertainty
estimations, and learn about methods to evaluate uncertainty predictions
for DNNs.
### Motivation
We first consider a meeting with another business, based on a real story
of one of the authors when they were working at a company. Teams without
ML knowledge tend to downplay the difficulty of doing technical things.
There are always typical subjects in such meetings:
- "Why does your AI system not return how sure it is about the
output?"
- "Is it not kind of trivial to make the system predict confidences?"
- "We cannot plug your system into our pipeline if there is no such
estimate."
- "We really need it, cannot you just do it?"
Unfortunately, solving such tasks is not at all trivial. However, these
requests are prevalent and valid; we
will see methods to achieve these goals.
### Uncertainty estimation is a critical building block for many systems.
When an ML model is part of a bigger modular pipeline, uncertainty
estimation is very beneficial and often required. For an ML-based
data-driven module, it is not easy to trust everything the model
outputs. Such models are never perfect and extra care is needed to use
the model's prediction in downstream modules. This is also true when the
downstream module in question is a human -- people do not (and should
not) trust every prediction of the model.
Let us suppose for a moment that the model already knows about itself
how certain it is. We consider some example downstream use cases of
reporting uncertainty (in later modules of the pipeline).
**Human in the loop.** We only want humans to intervene when the ML
confidence is low, as human knowledge is expensive. When the model's
confidence is low, the model can say, "I am not sure about the result."
When humans need to intervene, they can take control and handle certain
requests themselves (i.e., they can fix the model's prediction).
**Risk avoidance.** When there are great risks involved in the model's
task, the ML system should only act when it is confident. If the model
is unsure, the processing pipeline should stop (or fall back to some
other safe state), as the situation is deemed too risky. An example of
this is a learning-based manufacturing robot for cars. When the robot is
uncertain about its next action, there is a high risk it is going to
make a mistake which could also result in it damaging or destroying the
car. We want the model to be able to say, "We should probably not take
care of this input and just stop."
![Simplified flowchart of the ideal integration of ML models into
modular pipelines. In addition to the prediction results, we also wish
to obtain associated uncertainty estimates to efficiently use the
predictions in downstream tasks.](gfx/04_flowchart.pdf){#fig:flowchart
width="0.5\\linewidth"}
Thus, it is very beneficial for our model to produce *two* outputs
when it is part of a pipeline: the prediction results and also the
associated uncertainty estimate(s), as illustrated in
Figure [4.1](#fig:flowchart){reference-type="ref"
reference="fig:flowchart"}. This gives us many more choices of what to
do later in the pipeline in downstream modules.
#### When do we need confidence estimation?
In general, confidence estimation is needed when the outputs of a model
cannot be treated equally -- outputs for certain samples are more
confident, and some of them are less trustworthy.
It is not needed when the system always returns perfect answers. Why
would we need it then? If a time were to come when AI systems always
gave the right answers, this field of study would become useless. We will learn
about whether that can happen... (Spoiler: It cannot, as in almost any
sensible scenario, there is some level of stochasticity we cannot get
rid of.)
### Example Use Cases of Uncertainty Estimation
The following examples of uncertainty estimation are inspired
by [@balajitalk].
#### Image search for products
In this example, we do not consider the old Google image search. We
consider products like Google Lens. Such products do not only search for
similar images -- they also take the user and context into account:
::: center
"Google Lens is a set of vision-based computing capabilities that can
understand what you're looking at and use that information to copy or
translate text, identify plants and animals, explore locales or menus,
discover products, find visually similar images, and take other useful
actions. \[\...\] Lens always tries to return the most relevant and
useful results. Lens' algorithms aren't affected by advertisements or
other commercial arrangements. When Lens returns results from other
Google products, including Google Search or Shopping, the results rely
on the ranking algorithms of those products." [@googlelens]
:::
Companies are usually also very motivated to link image search results
to actual products to make money. Customers can also get quick answers
from such image search results.
Given an image, the task is to find the product that is shown. What
should happen if the photo taken by the user is of poor quality? Regular
algorithms would search for the most likely product anyway, which is
usually a very poor suggestion. If the model is equipped with
uncertainty estimates, when the confidence is low, it can
1. ask the user to take another photo, and/or
2. show different results from all products that could match with high
probability.
What if the photo *does not contain* any product of interest? Again,
regular algorithms would simply return poor results. Uncertainty
estimation can allow the system to determine whether the provided photo
is relevant. When there is no object of interest, the system can output
suggestions such as "User should be posing the camera differently." or
"Try to focus on an object of interest." This feedback loop can ensure
that the model can perform correctly and does not mislead the user with
unconfident predictions.
#### High-stake decision making
The prime example of a high-stake decision-making application is
healthcare, where the model has to determine whether there is anything
wrong with our body. We can use model uncertainty to decide when to
trust the model or defer to a human. This is a crucial ability of a
model in general cost-sensitive decision-making, where mistakes can
potentially have huge costs. Costs include potential lawsuits, the death
of a patient, or fatal road accidents. The task is to provide a binary
prediction of healthy/diseased from the input image. Ideally, the model
should make a prediction and output confidence estimates as well. One
should only trust the model's predictions when the model is confident.[^57]
When the model is not confident enough, we defer to a human. For
example, we can ask a human doctor to come in and take a look.
::: {#tab:costtable}
Action                       True: Healthy   True: Diseased
---------------------------- --------------- -----------------
Predict Healthy              0               10
Predict Diseased             1               0
Abstain "I don't know"       0.5             0.5
: Example cost table for decision making in healthcare. Predicting
'healthy' for a diseased person has the highest cost, as such cases
can even lead to the death of a patient. Table recreated
from [@balajitalk].
:::
In discrete cost-sensitive decision-making problems, we usually have
*cost tables*, depicted in
Table [4.1](#tab:costtable){reference-type="ref"
reference="tab:costtable"}. We have a very high stake in false negative
disease diagnoses, as we incur huge costs. Thus, we want to predict
'healthy' only when the system is very certain, and we even prefer the
answer 'I don't know' over risking a wrong prediction. Predicting 'I
don't know' defers to a human doctor. An example of such a scenario is
diabetic retinopathy detection from fundus images [@balajitalk],
illustrated in Figure [4.2](#fig:diabetic){reference-type="ref"
reference="fig:diabetic"}.
![Diabetic retinopathy detection from fundus images. Predicting
'healthy' can be catastrophic if the patient is actually diseased.
Figure taken from [@balajitalk].](gfx/04_diabetic.pdf){#fig:diabetic
width="0.4\\linewidth"}
The field of self-driving cars also requires uncertainty estimates. It
also qualifies as high-stake decision-making, as people's lives are at
stake.[^58] We do not want our current self-driving systems to drive *in
all cases*. In self-driving scenarios, we often experience *dataset
shift*. We want to make sure that our car does not crash in such cases.
Examples include changes in
- time of day/lighting (driving at night vs. in the morning),
- geographical location (inner city vs. suburban location),
- weather conditions (thunderstorm vs. clear weather),
- or traffic conditions (traffic jams, construction sites, clear
highways).
In such cases, we wish to take over control and drive responsibly. By
using uncertainty estimation, the car can tell us when it is uncertain.
#### Open-set recognition
Open-set recognition is a different scenario that is more specific to
the classification task. In the development (dev) stage
(Section [2.3](#ssec:formal){reference-type="ref"
reference="ssec:formal"}), we can pre-define a set of classes, e.g., the
100 most popular skin condition classes. When deploying in the real
world, there can be very rare diseases as test inputs for which we do
not have classes. If the model predicts 'normal skin' in such cases, it
is very harmful. However, the other scenario is not better either:
"Well, it does not look normal, but since I need to pick one from the
known cases, I will just guess Acne." A classification system should
also be able to say, "This is something I have not learned before, a new
class. This is none of the above." This can either be an explicit class,
or it can be signaled by low predictive confidence.
Open-set recognition considers different ways to deal with new classes
in deployment. There are generally two variants of open-set recognition:
models trained with or without OOD data. When they are trained with OOD
data, they also usually contain a separate dimension in the output
probability vector for indicating the probability of OOD (explicit
introduction of the 'I don't know.' class). When they are trained only
with known classes, there is no data to train this extra dimension and,
therefore, it is not added. Even in this case, the model can be trained
to predict calibrated uncertainty estimates that can then be used to
determine the 'I don't know.' class in an implicit fashion. Of course,
without explicit supervision, the latter case will likely produce worse
results.
![Example for the need for uncertainty estimation in the classification
of genomic sequences. "A classifier trained on known classes \[without
proper uncertainty calibration\] achieves high accuracy for test inputs
belonging to known classes, but can wrongly classify inputs from unknown
classes (i.e., out-of-distribution) into known classes with high
confidence." [@googleood] Figure taken
from [@googleood].](gfx/04_ood.png){#fig:ood width="0.5\\linewidth"}
The same story goes for "growing field" cases. An example is the
classification of genomic sequences. We discover more and more bacteria
classes in biology research -- new entries are coming to our database of
bacteria. We usually have high ID accuracy on known classes, but this is
not sufficient. We wish to be prepared for new bacteria classes in the
future (unknown classes, OOD scheme), but we can only train on classes
that are currently in the database. We need to detect inputs that do not
belong to any of the known classes. We wish to assign an 'I don't know.'
label for future cases. This scenario is depicted in
Figure [4.3](#fig:ood){reference-type="ref" reference="fig:ood"}.
Samples predicted as 'I don't know.' can be used later on for further
training the model: we can put labels on them once we discover them. For
example, we can initialize a new row in the classifier layer's weight
matrix, add a new bias scalar, and then we can predict one more output
class after learning to predict such samples. The keyword here is
*class-incremental learning*, which deals with efficiently increasing
the number of classes over time without sacrificing the original
classification score.
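A minimal sketch of the "initialize a new row in the classifier layer" idea mentioned above, assuming the model ends in a standard linear classification head; the helper name is illustrative.

```python
import torch
import torch.nn as nn

def add_class(old_head: nn.Linear) -> nn.Linear:
    # Create a classifier head with one extra output class, keeping the
    # learned weights and biases of the existing classes.
    new_head = nn.Linear(old_head.in_features, old_head.out_features + 1)
    with torch.no_grad():
        new_head.weight[: old_head.out_features] = old_head.weight
        new_head.bias[: old_head.out_features] = old_head.bias
    return new_head
```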
#### Active learning
![General overview of active learning. We can get away with labeling
significantly fewer samples for our model if we label the "right" ones.
Figure taken from [@balajitalk].](gfx/04_active.pdf){#fig:active
width="0.6\\linewidth"}
Active learning, illustrated in
Figure [4.4](#fig:active){reference-type="ref" reference="fig:active"},
is concerned with finding samples to label smartly. Instead of going
through a huge set of unlabeled samples to label everything, we pick the
samples the model is very likely to be confused about, and then ask for
human feedback on those samples in an iterative fashion. This way, we
maximize the utility of human annotators (who are expensive). We can use model
uncertainty to improve data efficiency and the model's performance in
"blind spots". To tell which of the unlabeled samples is most likely to
have the highest return when annotated by a human, we should rely on a
notion of uncertainty and confidence values.
#### Hyperparameter optimization and experimental design
Hyperparameter optimization and experimental design are widely used
across large organizations and the sciences. Such methods often employ
*Bayesian optimization*. Examples include photovoltaics, chemistry
experiments, AlphaGo, electric batteries, and material design.
The setup is as follows. We are searching through a huge (combinatorial)
space of possibilities for configurations/settings. For example, in a
very naive hyperparameter search for an ML model, we might have
$$\begin{aligned}
5 \text{ learning rates} &\times 4 \text{ numbers of layers} \times 5 \text{ net widths} \times 3 \text{ weight decays}\\
&\times 10 \text{ augmentations} \times 3 \text{ numbers of epochs} \times 3 \text{ optimizers} = 27000
\end{aligned}$$ possible hyperparameter settings to iterate over.
Usually, we have thousands or millions of possible combinations, even in
quite simple cases. It is clearly infeasible to consider all possible
configurations. Bayesian optimization reduces the uncertainty of
performance in this complex landscape while also choosing performant
configurations. By observing a few data points where the configurations
were chosen smartly (i.e., considering the trade-off between uncertainty
reduction and exploitation), it constantly updates its beliefs based on
the training results of the well-studied configurations. This reduces
uncertainty over time, and eventually, we find a configuration that will
likely maximize our return. To explore the space most efficiently, we
need a notion of uncertainty. An example use of Bayesian optimization
for experimental design is shown in
Figure [4.5](#fig:bayesopt){reference-type="ref"
reference="fig:bayesopt"}.
![Role of uncertainty in optimizing battery charging protocols with ML.
"First, batteries are tested. The cycling data from the first 100 cycles
(specifically, electrochemical measurements such as voltage and
capacity) are used as input for an early outcome prediction of cycle
life. These cycle life predictions from a machine learning (ML) model
are subsequently sent to a BO algorithm, which recommends the next
protocols to test by balancing the competing demands of exploration
(testing protocols with high uncertainty in estimated cycle life) and
exploitation (testing protocols with high estimated cycle life). This
process iterates until the testing budget is exhausted. In this
approach, early prediction reduces the number of cycles required per
tested battery, while optimal experimental design reduces the number of
experiments required. A small training dataset of batteries cycled to
failure is used to train the early outcome predictor and to set BO
hyperparameters." [@Attia2020ClosedloopOO] The linear model the predicts
cycle life of a battery (and also gives a CI for the predictions). The
GP relates protocol $x$ to cycle life $y$ through its internal
parameters $\theta$. Here, the GP outputs uncertainties naturally.
Figure taken
from [@Attia2020ClosedloopOO].](gfx/04_bayesopt.pdf){#fig:bayesopt
width="0.9\\linewidth"}
#### Object detection pipeline
![Fast(er) R-CNN is a renowned model in object detection. One of its
distinguishing features is its modularity. When proposing bounding boxes
for objects, referred to as Regions of Interest (RoIs), the method also
provides a confidence or "objectness" score for each box. This score is
crucial; it allows the system to prune less likely boxes before it
refines and classifies the remaining ones, ensuring both accuracy and
efficiency. Figure taken
from [@https://doi.org/10.48550/arxiv.1504.08083].](gfx/04_faster.pdf){#fig:faster
width="0.6\\linewidth"}
In object detection, we produce a bounding box and a class label for
each object. Two-stage detectors (propose then refine) use multiple
modules by construction. We will likely require confidence scores
whenever we have multiple modules in any ML setting. Fast(er)
R-CNN [@https://doi.org/10.48550/arxiv.1504.08083], one of the most popular
object detection pipeline, is illustrated in
Figure [4.6](#fig:faster){reference-type="ref" reference="fig:faster"}.
In Faster R-CNN, we have the following stages.
1. Propose boxes with confidence scores. (Between $10^3$ and $10^6$
boxes are proposed.) This is the *objectness score*.
2. Prune boxes by thresholding the confidence/objectness scores. We
return only the most likely boxes containing any objects. Then we
further perform non-maximum suppression.
3. Classify the pruned boxes and refine the boxes.
## Types and Causes of Uncertainty {#ssec:types}
In this section, we aim to discuss different sources of uncertainty and
how they relate to each other. In particular, we will discuss the terms
*predictive uncertainty*, *epistemic uncertainty*, and *aleatoric
uncertainty*. In the last paragraph of each of the subsections
discussing these sources, we give an introduction to *how* we can
evaluate these.
### Predictive Uncertainty {#ssec:predictive}
::: definition
Predictive Uncertainty Predictive uncertainty refers to the degree of
uncertainty or lack of confidence that a machine learning model has in
its predictions for a given input.
In particular, predictive uncertainty is typically referred to as the
probability of the prediction's correctness. If for a fixed input sample
$x$ we define the indicator variable $L: \Omega \rightarrow \{0, 1\}$,
$$L = \begin{cases}1 & \text{if prediction \(f(x)\) is correct} \\ 0 & \text{otherwise,}\end{cases}$$
then predictive uncertainty is usually defined as
$$c(x) = P(L = 1) = \text{probability that \(f(x)\) is correct}.$$
**Note**: Here, $f(x)$ denotes a single prediction from model $f$, *not*
a distribution over predictions.
:::
To summarize the above definition, predictive uncertainty tries to
measure if we are likely to make an error in our prediction. Most of the
ML uncertainty literature specifies two possible typical causes of
predicting 'I am not sure.' -- i.e., two main sources of predictive
uncertainty. First, we give an informal description of these two
sources, and then we discuss them in more detail.
1. **Epistemic uncertainty**: "I am not sure because I have not seen it
before."
2. **Aleatoric uncertainty**: "I have experienced it before, I know
what I am doing, but I think there is more than one good answer to
your question, so I cannot choose just one."
Evaluation always requires quantification -- a quantified definition of
the concept. Without evaluation, we cannot progress. How should we
quantify whether a specific uncertainty estimate is reasonable? For
discussing the basic evaluation of predictive uncertainty, we stick to
the scalar confidence values introduced in the definition of predictive
uncertainty, where we equate $c(x)$ to the probability of the
prediction's correctness. Predictive uncertainty depends on both the
model and the data. In particular, it increases both when the input is
ambiguous and when the model is uncertain in its parameters (arising
from the undefined behavior in no-data regions of the input space). Some
evaluation metrics measure the true likelihood of the model failing and
compare that to the given predictive uncertainty estimation. This is a
direct way to benchmark predictive uncertainty estimates. In later
sections, we will consider exact methods.
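One direct way to realize this comparison is to bucket predictions by their reported confidence and contrast the reported value with the empirical frequency of correctness in each bucket. A minimal sketch, where the arrays `conf` (reported confidences $c(x)$) and `correct` (0/1 correctness flags) are assumed to have been collected on held-out data; this anticipates the calibration-style metrics discussed in later sections.

```python
import numpy as np

bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf >= lo) & (conf < hi)
    if mask.any():
        print(f"c(x) in [{lo:.1f}, {hi:.1f}): reported {conf[mask].mean():.2f}, "
              f"empirical accuracy {correct[mask].mean():.2f}")
```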
### Aleatoric Uncertainty
::: definition
Aleatoric Uncertainty Aleatoric uncertainty is uncertainty that arises
due to the inherent variability or randomness in the data or the
environment. This type of uncertainty cannot be reduced by collecting
more data or improving the model, as it is an intrinsic property of the
system being modeled. Examples of sources of aleatoric uncertainty
include measurement noise, natural variability in the data, or
incomplete information.
:::
Intuitively, aleatoric uncertainty translates to "I do not know because
there are multiple plausible answers." For a predictive task of
predicting $Y$ from $X$, aleatoric uncertainty takes place whenever the
*true distribution* $P(Y \mid X = x)$ is non-deterministic (according to
human knowledge), thus has a non-zero entropy. We have aleatoric
uncertainty when $Y \mid X = x$ has some entropy. It simply means that a
sample $x$ accommodates multiple possible $y$s.[^59]
Examples from the CIFAR-10H [@peterson2019human] dataset are shown in
Figure [4.7](#fig:aleatoric){reference-type="ref"
reference="fig:aleatoric"}. For some samples (lower ship and bird),
humans are quite uncertain, even without time constraints. We have high
aleatoric uncertainty; the true $Y \mid X = x$ (according to human
knowledge) has high entropy.[^60] The approximation of it by several
human inspectors (47-63 per image for the CIFAR-10H
dataset [@peterson2019human]) has a high entropy (non-deterministic).
They have disagreements.
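A minimal sketch of how such (dis)agreement can be quantified: the entropy of the annotators' label distribution serves as a per-sample estimate of aleatoric uncertainty. The count vectors below are made up for illustration and do not come from CIFAR-10H.

```python
import numpy as np

def label_entropy(counts):
    # Entropy (in nats) of the empirical label distribution given by
    # per-class annotation counts.
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(label_entropy([50, 0, 0]))    # all annotators agree -> 0.0
print(label_entropy([20, 18, 12]))  # strong disagreement -> close to log(3)
```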
![Example of the absence and presence of aleatoric uncertainty. Examples
of images and their human choice proportions are given. For many images
(upper plane and cat), the label choices are unambiguous. We have very
low aleatoric uncertainty, i.e., the true $Y \mid X = x$ has a very low
entropy. The approximation of it by ten human inspectors has no entropy
(deterministic); they all agree on the label. The bottom samples
accommodate various labels. The single GT label does not always exist.
Figure taken from [@balajitalk].](gfx/04_aleatoric.pdf){#fig:aleatoric
width="0.6\\linewidth"}
![Sample from the N-digit MNIST dataset. There are multiple
possibilities for the original image. Figure adapted
from [@https://doi.org/10.48550/arxiv.1810.00319].](gfx/04_hedged.pdf){#fig:hedged
width="0.4\\linewidth"}
#### Many faces of aleatoric uncertainty
First, we consider *ambiguity in the observation*. This can arise, e.g.,
when features are missing (lack of information). An illustration of how
missing features can introduce overlaps in two classes is shown in
Figure [\[fig:aleatoric2\]](#fig:aleatoric2){reference-type="ref"
reference="fig:aleatoric2"}. This is a typical source of aleatoric
uncertainty. We can also take [N-digit MNIST
samples](https://arxiv.org/abs/1810.00319) [@https://doi.org/10.48550/arxiv.1810.00319]
and consider intentionally corrupted versions of them, shown in
Figure [4.8](#fig:hedged){reference-type="ref" reference="fig:hedged"}.
The input goes through corruption/occlusion that removes some features.
Then, multiple labels might make sense (e.g., $41, 11$). For larger
corruptions, we might have
$$P(Y = 41 \mid X = x) = 0.5 = P(Y = 11 \mid X = x).$$ Many people also
have poor handwriting, and it is generally difficult to tell a $1$ apart
from a $7$. No artificial perturbations are required in these cases, as
the observation already has an inherent ambiguity.
We can also refer back to the CIFAR images from
Figure [4.7](#fig:aleatoric){reference-type="ref"
reference="fig:aleatoric"}. When the photo of the ship was taken, it
went through extra corruption (resolution reduction) to obtain
thumbnails. This removes information and introduces aleatoric
uncertainty. If objects are seen in the real world (all features are
present), then there is probably no ambiguity.[^61]
Out-of-focus images are also examples of ambiguity in the observation.
Here, we have measurement noise. This also introduces missing features
(information). We cannot tell how many people are in the image if it is
severely corrupted.
Let us now consider *ambiguity in the question*. In general, the task
may be formulated so that multiple answers are naturally plausible. In
the ImageNet-1K dataset, there are several such examples. Consider an
image of a desk with many objects on it, illustrated in
Figure [4.9](#fig:desk){reference-type="ref" reference="fig:desk"}. The
ImageNet-1K label is 'desk', but other ImageNet-1K categories also make
sense: 'screen', 'monitor', or 'coffee mug'. In general, it is quite
likely that multiple classes are present in a single image. In such
cases, $P(Y \mid X = x)$ is multimodal. This task is not fully
"solvable": all the labels mentioned are plausible, and none of them can
be deemed wrong. Annotators, in this case, arbitrarily choose one
category among them, as they are only allowed to provide a single label
per image. Referring back to the question of whether neural networks will
ever become perfect predictors, it is now clear why the answer is
negative. Inherent aleatoric uncertainty is *irreducible*, and correct
quantification of uncertainty is, therefore, always needed.
Another example of inherent ambiguity in the question/task is image
synthesis. Consider DALL-E image synthesis for the caption
"`crayon drawing of several cute colorful monsters with ice cream cone bodies on dark blue paper`"
illustrated in Figure [4.10](#fig:dalle){reference-type="ref"
reference="fig:dalle"}. Here, $P(\text{image} \mid \text{caption})$ is
highly multimodal -- we expect multiple good answers. DALL-E generates
multiple plausible outputs for the caption, and all of them make sense.
Thus, we have aleatoric uncertainty -- we do not have a single good
answer. (Many images fit the caption, as decided by humans.) In a real
dataset, we will rarely see the exact same caption twice, so we do not
directly observe this multitude of possible images for one caption.
However, if we see a very similar caption paired with a completely
different GT image, that is an indicator of the same phenomenon. This
"approximate multimodality" of the outputs also counts as aleatoric
uncertainty.
![ImageNet-1K sample with label 'desk'. Aleatoric uncertainty arises
naturally because many objects corresponding to different ImageNet-1K
labels are present in the image. There is no *single* good answer to
this task; therefore, networks should not be overconfident in one
particular prediction. Figure taken
from [@pmlr-v119-shankar20c].](gfx/04_desk.png){#fig:desk
width="0.3\\linewidth"}
![Four samples from DALL-E for the prompt "crayon drawing of several
cute colorful monsters with ice cream cone bodies on dark blue paper".
Each of the synthesized images is a plausible image given the prompt,
leading to the presence of aleatoric uncertainty in $Y \mid X = x$ where
$Y$ is the image and $x$ is the exact
prompt.](gfx/04_dalle.pdf){#fig:dalle width="0.5\\linewidth"}
In summary, when we have ambiguities and multiple plausible answers for
a task, whatever the source is, we call it aleatoric uncertainty.
#### Reducing aleatoric uncertainty
Unfortunately, we cannot reduce aleatoric uncertainty by observing more
data.[^62] When $Y \mid X = x$ has a non-zero entropy, an infinite
amount of data will present data samples with *mixed supervision*. For
the same $x$, different supervision signals $y$ will be given. Of
course, for finite datasets like ImageNet-1K, we do not see the same
image with different labels but very similar images with different
labels. By seeing ambiguities multiple times, we do not reduce them. The
model learns to see similarities between images and gets confused if it
sees similar images but with very different labels.
To address aleatoric uncertainty, one must...
1. ...formulate a model architecture that accommodates multiple
plausible outputs. That is normal for classifiers but not for usual
regressors. They usually predict a single number/vector, not a set
of plausible answers.
2. ...adopt a learning strategy that lets the model learn multiple
    plausible outputs rather than sticking to one. This is true for the
    CE loss for classification. However, it is not true for the $L_2$
    loss for regression, which only learns the mean of the labels (a
    minimal sketch of an alternative follows this list).
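As a minimal illustration of such a learning strategy for regression, the sketch below (our own hypothetical PyTorch code, not from a referenced implementation) replaces the $L_2$ loss with a Gaussian negative log-likelihood: the network predicts a mean and a log-variance per input, so the predicted variance can absorb the spread of the labels instead of collapsing everything onto their mean.

```python
# Hypothetical sketch: a regression head that models aleatoric uncertainty.
# Instead of a single point prediction trained with the L2 loss, the model
# predicts a mean and a log-variance and is trained with the Gaussian NLL.
import torch
import torch.nn as nn


class HeteroscedasticRegressor(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)     # predicted mean of Y | X = x
        self.logvar_head = nn.Linear(hidden, 1)   # predicted log-variance (aleatoric)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)


def gaussian_nll(mean, logvar, y):
    # 0.5 * [ exp(-s) * (y - mu)^2 + s ] with s = log sigma^2, averaged over the batch
    return (0.5 * (torch.exp(-logvar) * (y - mean) ** 2 + logvar)).mean()


model = HeteroscedasticRegressor(in_dim=10)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()  # the variance head absorbs the label noise instead of the mean
```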
Even though aleatoric uncertainty does not depend on the model, the only
possible way to approximate it for a general test input is to use a
data-driven model. Then, the focus becomes to formulate models that give
reliable aleatoric uncertainty predictions. If we know the generative
process (i.e., the true distribution $P(Y \mid X = x)$) or have multiple
samples from it, then we can compare aleatoric uncertainty predictions
against the true "spread" of $P(Y \mid X = x)$ or the empirical spread,
e.g., by comparing against its variance or entropy. Proxy tasks can also
be used for benchmarking aleatoric uncertainty predictions. For example,
even though aleatoric uncertainty differs from predictive uncertainty,
one might want to evaluate the aleatoric uncertainty predictions on
predictive uncertainty benchmarks. One reason is practicality. If we do
not have access to $P(Y \mid X = x)$, benchmarking against predictive
uncertainty is better than not benchmarking at all. Another reason is
correlation. Predictive uncertainty necessarily increases monotonically
with aleatoric uncertainty. If we assume that epistemic
uncertainty (discussed in [4.2.3](#ssec:epi){reference-type="ref"
reference="ssec:epi"}) does not vary too much on the test samples, we
can use the true predictive uncertainty values as ground truth for
*ranking* the test samples, and we can measure how well the ranking
based on aleatoric uncertainty estimates agrees with it. This is a
strong assumption, and such an evaluation is usually used as a
heuristic.
### Epistemic Uncertainty {#ssec:epi}
Epistemic uncertainty is uncertainty from lack of experience: "I do not
know because I have not experienced it." Let us first consider an
example of epistemic uncertainty in a binary classification setting to
motivate the formalism that follows.
#### Example of Epistemic Uncertainty: Training Data for Binary Classification
We consider a toy example that showcases the presence of epistemic
uncertainty, shown in Figure [4.11](#fig:epistemic){reference-type="ref"
reference="fig:epistemic"}. There are several possible classifiers
compatible with the data we have observed. While they agree on the data
we have observed, we are epistemically uncertain about how to classify
points where the models disagree. We wish to sample data from
underexplored regions[^63] to increase our certainty in the choice of
the model.
![Example of the presence of epistemic uncertainty arising from
underexplored data regions. The dataset accommodates many models. Models
can be from the same hypothesis class (e.g., linear classifiers in the
top subfigure) or belong to different hypothesis classes (bottom
subfigure). To increase our certainty in the "correct" model from the
model (= hypothesis) space, we wish to obtain more data from the
underexplored regions. Figure taken
from [@balajitalk].](gfx/04_epistemic.pdf){#fig:epistemic
width="0.4\\linewidth"}
#### Formal Treatment of Epistemic Uncertainty
Let us consider a more formal definition of epistemic uncertainty than
the intuitive description given at the beginning of
Section [4.2.3](#ssec:epi){reference-type="ref" reference="ssec:epi"}.
::: definition
Epistemic Uncertainty Epistemic uncertainty is a reducible source of
uncertainty that arises due to a lack of knowledge or information. This
type of uncertainty can be reduced by collecting more data or improving
the model class, as it is a result of the limitations of the current
knowledge or understanding of the process being modeled. Examples of
sources of epistemic uncertainty include model parameter uncertainty or
model *structure* uncertainty.
:::
During learning, we "reduce the possible list of models" to ones that
agree with the data (Figure [4.21](#fig:bayesian){reference-type="ref"
reference="fig:bayesian"}). One popular way of encoding a "list of
plausible models" is via the uncertainty over network parameters in
*Bayesian machine learning*:
$$\text{\stackanchor{No experience}{Prior over parameters}} \rightarrow \text{\stackanchor{Observations}{Likelihood of data}} \rightarrow \text{\stackanchor{Prediction based on experience}{Posterior over parameters}}$$
We start from our prior knowledge. The prior that encodes our initial
beliefs about plausible models is usually broad and has many
possibilities for $\theta$. We have high epistemic uncertainty in
regions with no observations, so in the beginning, we have high
epistemic uncertainty in general. Then, we accumulate observations. By
doing so, we reduce epistemic uncertainty. The likelihood of the data
$\cD$ is a product of the likelihoods of the individual data points $X_i$ (IID
assumption). By merging our prior knowledge with the observations, we
obtain our *posterior beliefs*. Finally, we can make our prediction
based on our posterior beliefs using the posterior predictive
distribution.
$$P(\theta \mid \cD) \propto P(\theta)P(\cD \mid \theta) \overset{\mathrm{IID}}{=} P(\theta) \prod_{i = 1}^n P(X_i \mid \theta)$$
Typically, the entropy of the posterior over $\theta$ decreases as more
observations accumulate.
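To make this entropy reduction concrete, the following sketch (our own illustration, not from the text) runs the simplest Bayesian update, a Beta prior over the parameter of a Bernoulli: as the number of observations grows, the posterior concentrates and its differential entropy drops.

```python
# Hypothetical sketch: epistemic uncertainty shrinking with data in a
# conjugate Beta-Bernoulli model. The Beta(a, b) posterior after observing
# k ones in n trials is Beta(a + k, b + n - k).
import numpy as np
from scipy.stats import beta

a0, b0 = 1.0, 1.0        # flat prior: "no experience", broad set of plausible models
true_p = 0.7
rng = np.random.default_rng(0)

for n in [0, 10, 100, 1000]:
    k = rng.binomial(n, true_p)
    posterior = beta(a0 + k, b0 + n - k)
    print(f"n={n:5d}  posterior mean={posterior.mean():.3f}  "
          f"differential entropy={posterior.entropy():+.3f}")
# The entropy keeps dropping: the "list of plausible models" collapses
# towards the parameter that generated the data.
```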
#### Model Misspecification and Effective Function Space
The uncertainty arising from the restriction of the model class we are
learning over (e.g., all linear models or all GPs), i.e., the
uncertainty about *choosing the right model family*, is a part of
epistemic uncertainty.[^64]
::: definition
Model Misspecification Model misspecification in ML happens when the
inductive biases and prior assumptions injected into the model disagree
with the (usually stochastic) process that generated the data.
:::
We leave model misspecification out in the remainder of the book, always
assuming that the model class includes the true $P(Y \mid X = x)$ so
that the epistemic uncertainty can be reduced to 0.[^65] We quickly
formalize this below.
::: definition
Function Space The function space corresponding to a neural network
architecture is the set of all functions we can represent using
different parameterizations of the architecture:
$$\cH = \left\{ f_\theta\colon \cX \rightarrow \cY \middle| \theta \in \Theta\right\}$$
where
- $\theta$ is a particular parameterization,
- $\cX$ is the input space and $\cY$ is the output space,
- and $\Theta$ is the space of all possible parameterizations. For
example, for a linear regressor with input $x \in \nR^n$,
$\Theta = \nR^n$.
The above definition does *not* consider the training algorithm, the
regularizers, or the optimizer.
:::
::: definition
Effective Function Space The effective function space of a neural
network is a subspace of the function space that the network can
represent. It is a set of functions that the network can actually learn
or achieve, given the training procedure, optimization algorithm, and
other hyperparameters.
The effective function space of a neural network is influenced by the
dataset. For example, a dataset with high noise may require more
regularization or early stopping to prevent overfitting, which may limit
the effective function space of the network. Conversely, a
well-structured and informative dataset may allow the network to explore
a wider effective function space; the effective function space thus
varies as the dataset varies.
The effective function space of a neural network is also influenced by
the optimization algorithm and the training procedure. Different
optimization algorithms, such as stochastic gradient descent or Adam,
may converge to different local minima (or saddle points), which may
affect the set of optimal parameters that the network can achieve.
Similarly, the training procedure, such as the choice of learning rate,
batch size, or data augmentation, may affect the set of optimal
parameters that the network can reach.
:::
When leaving model misspecification out, we can give an alternative
definition for epistemic uncertainty: Epistemic uncertainty arises when
multiple models out of our *effective* function space can fit the
training data well. So, epistemic uncertainty is uncertainty in the *set
of plausible models* (but not a property of each individual model). But
let's return to the gist of it:
#### Epistemic uncertainty is reducible.
Considering the appropriate distribution, epistemic (= model)
uncertainty *vanishes* (reduces to 0) in the limit of infinite data (=
observations).[^66] One can thus completely rule out specific models in
the limit, and in fact, we can uniquely determine which model is the
right one, i.e., which one "generated the data".[^67]
::: definition
Data Manifold Informally speaking, the data manifold is a region of the
input space where elements look more natural and realistic.
As a more formal definition, the data manifold refers to the underlying
geometric structure of the (usually high-dimensional) data that is being
modeled. It describes the intrinsic, underlying structure of the data in
a lower-dimensional space that captures the essential features and
relationships between the data points.
The data manifold is typically assumed to be smooth and continuous, and
it is usually modeled as a lower-dimensional submanifold embedded in the
high-dimensional feature space. The dimensionality of the data manifold
is determined by the number of intrinsic degrees of freedom in the data,
which is almost always lower than the dimensionality of the original
feature space, especially in the case of sensory data.
:::
If the data distribution we sample from does not cover specific areas of
interest in the input space (the *data manifold*), then we will still
have uncertainty there in the limit. To achieve full reduction, it is,
therefore, important to sample from *underexplored regions* of $P(X)$ on
the data manifold that are still realistic but underrepresented in the
original training data (OOD samples). However, we do not care about
images that are purely Gaussian noise or that lie far away from the data
manifold. As soon as we collect and label many OOD samples, we can
reduce epistemic uncertainty as much as we like.[^68]
**Example**: Active learning reduces epistemic uncertainty efficiently
by acquiring supervision on underexplored samples. We can use epistemic
uncertainty to sample from regions where the model needs the most
samples. For such a scheme, the model must provide us with
well-calibrated epistemic uncertainty estimates.
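A minimal sketch of such an acquisition step is given below (our own illustration; the epistemic-uncertainty estimator itself is left as a placeholder): label the pool samples for which the estimate is highest.

```python
# Hypothetical sketch: uncertainty-based acquisition in active learning.
# The epistemic-uncertainty estimator is a placeholder; in practice it could
# come from an ensemble or a Bayesian neural network.
import numpy as np


def acquire(pool_x, epistemic_uncertainty, budget):
    """Return indices of the `budget` most epistemically uncertain pool samples."""
    scores = epistemic_uncertainty(pool_x)   # shape: (len(pool_x),)
    return np.argsort(-scores)[:budget]


pool_x = np.random.randn(1000, 16)
placeholder_estimator = lambda x: np.abs(x).mean(axis=1)  # stand-in, not a real estimate
to_label = acquire(pool_x, placeholder_estimator, budget=10)
print(to_label)  # these samples would be sent to annotators next
```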
#### Example Sources of Epistemic Uncertainty in Practice
Let us first consider two possible sources of epistemic uncertainty that
often arise in practice.
**Distribution shifts.** For example, a self-driving car was mostly
trained on daylight videos, but it is deployed in a night scenario. On
OOD samples, we (usually) have high epistemic uncertainty.
**Novel concepts.** For example, new objects, words, or classes (open
set recognition). These naturally have high epistemic uncertainty (but
not always -- this highly depends on the employed inductive biases).
For epistemic uncertainty, many definitions exist (e.g., refer to
[@shaker2021ensemblebased; @valdenegrotoro2022deeper; @lahlou2023deup]),
and it is not exactly clear what the best way is to properly benchmark
such estimates. One possibility is to employ proxy tasks that should be
reasonably well correlated with epistemic uncertainty. Another
possibility is to consider a binary OOD/not OOD prediction task. This is
only a proxy task for epistemic uncertainty because the true "OOD-ness"
of a sample is independent of any model. However, we still expect
epistemic uncertainty to be higher on OOD samples, so the use of such
benchmarks is justified to some extent. This is further discussed in
[4.3.1](#sssec:connection_ood){reference-type="ref"
reference="sssec:connection_ood"}.
### Epistemic vs. Aleatoric Uncertainty
Aleatoric uncertainty is *data uncertainty*. It means there is a
*multiplicity of possible answers*. When class-conditional distributions
overlap, $P(Y \mid X = x)$ has a considerable entropy.[^69] Aleatoric
uncertainty is inherent to the data distribution.
Epistemic uncertainty is *model uncertainty*. It means there is a
*multiplicity of possible models*. It arises from underexplored data
regions. Epistemic uncertainty is inherent to the dataset that allows
multiple possible hypotheses.
Treating epistemic and aleatoric sources of uncertainty separately is
not only done for philosophical reasons. If we only obtained new samples
based on regions with high *predictive* uncertainty, it could very well
happen that the epistemic uncertainty was actually *low* in that region
but a high *irreducible* value of aleatoric uncertainty caused the high
predictive uncertainty. For the sake of intuition, we might consider
predictive uncertainty as simply the sum of epistemic and aleatoric
uncertainty. Later we will see that the decompositions are not this
straightforward and require many assumptions. However, we can still say
in general that both aleatoric and epistemic uncertainty influence
predictive uncertainty.
In essence, both types of uncertainty arise from the data. While
epistemic uncertainty depends on the set of plausible models, aleatoric
uncertainty does not, as it only depends on the entropy/variance of the
true $Y \mid X = x$ variable. This also agrees with the statement that
epistemic uncertainty is reducible, while aleatoric is inherent to the
data generating process and, therefore, is irreducible. However, they
both *influence* predictive uncertainty.
In general, it is hard to disentangle predictive uncertainty into
aleatoric and epistemic sources; this remains an open research
topic [@shaker2021ensemblebased; @valdenegrotoro2022deeper; @lahlou2023deup].
::: information
Epistemic vs. Aleatoric Uncertainty in Computer Vision We consider a
method that models both epistemic and aleatoric uncertainty in computer
vision, introduced in the paper "[What Uncertainties Do We Need in
Bayesian Deep Learning for Computer
Vision?](https://arxiv.org/abs/1703.04977)" [@https://doi.org/10.48550/arxiv.1703.04977].
This is illustrated in Figure [4.12](#fig:cv){reference-type="ref"
reference="fig:cv"}. The task is semantic segmentation, which is
pixel-wise classification. This method is capable of measuring epistemic
and aleatoric uncertainty at the same time.
Aleatoric uncertainty arises at boundaries between classes (e.g.,
pavement/road). People annotate pixel-wise, and mistakes usually take
place around boundaries. Mixed supervision around boundaries leads to
high aleatoric uncertainty in these regions.
Epistemic uncertainty arises at parts of the image the model has not
seen before. It seems that the model has not seen similar pavements
before.
:::
![Example application of epistemic and aleatoric uncertainty estimation
in computer vision. These two sources of uncertainty are fundamentally
different, which is further highlighted by the uncertainty maps in the
Figure. Figure taken
from [@https://doi.org/10.48550/arxiv.1703.04977].](gfx/04_cv.png){#fig:cv
width="0.8\\linewidth"}
## Connection of Uncertainty Estimates to Earlier Chapters
The subfields of Trustworthy Machine Learning are very interconnected.
Here, we briefly discuss some of the connections to OOD generalization
and explainability.
### Connection of Epistemic Uncertainty to OOD Generalization {#sssec:connection_ood}
Epistemic uncertainty and OOD generalization have many connections,
though they should not be treated interchangeably, as discussed
previously. Still, epistemic uncertainty should be high for OOD samples.
If we have access to $M$ models in the form of an ensemble, then the
epistemic uncertainty for a sample $x$ is closely linked to the
diversity of predictions $f_1(x), \dots, f_M(x)$ by the set of trained
models. Let us assume that we have a diagonal dataset
(Section [2.7.1](#ssec:spurious){reference-type="ref"
reference="ssec:spurious"}) and multiple plausible models that are fit
to this dataset. As the models are all trained on the training samples
(ID data), they all perform well on the training samples (given
sufficient expressivity). However, as the models still differ, they will
generally not agree on off-diagonal samples. This is emphasized even
more if the models are *regularized* to be diverse. Therefore, the
off-diagonal samples will have high output variance (high epistemic
uncertainty), and the training samples will have very low output
variance (low epistemic uncertainty). We can, therefore, measure
epistemic uncertainty by training multiple models and seeing how much
they agree on a particular sample. This is the essence of Bayesian ML:
training multiple models simultaneously in a smart and efficient way,
and checking how much their predictions diverge on test samples.
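A minimal sketch of this ensemble-based measurement is given below (our own illustration). One common choice is to score a sample by the entropy of the averaged prediction minus the average entropy of the members (their mutual information); it is near zero when the members agree and grows with their disagreement.

```python
# Hypothetical sketch: epistemic uncertainty as disagreement among M ensemble members.
import numpy as np


def ensemble_disagreement(prob_stack):
    """prob_stack: (M, K) array, each row one member's class probabilities."""
    mean_probs = prob_stack.mean(axis=0)                                   # ensemble prediction
    entropy_of_mean = -(mean_probs * np.log(mean_probs + 1e-12)).sum()     # total predictive spread
    mean_entropy = -(prob_stack * np.log(prob_stack + 1e-12)).sum(axis=1).mean()  # members' own spread
    return entropy_of_mean - mean_entropy   # mutual information: the disagreement part


# Members agree on an in-distribution sample ...
id_sample = np.array([[0.90, 0.05, 0.05], [0.88, 0.07, 0.05], [0.92, 0.04, 0.04]])
# ... but diverge on an off-diagonal / OOD sample.
ood_sample = np.array([[0.90, 0.05, 0.05], [0.10, 0.80, 0.10], [0.20, 0.10, 0.70]])
print(ensemble_disagreement(id_sample))   # close to 0
print(ensemble_disagreement(ood_sample))  # clearly positive
```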
### Connection of General Uncertainty Estimation to Explainability
When a model returns its predictive confidence or other uncertainties
and is well-calibrated, it is a great way for the user to learn about
the model and the output. Such uncertainty estimates are great
explanation tools. Some interesting questions that relate explainability
to uncertainty estimation are listed below.
- How uncertain was the model?
- Due to which factor was the model uncertain? (If there are multiple
factors, see above.)
- What additional training data will make the model more confident?
(What regions suffer from high epistemic uncertainty?)
### Trustworthiness and Confidence Estimates
A critical component of the trustworthiness of an ML system is the
"truthfulness" of the confidence estimates $c(x)$. The most popular
demand for *predictive uncertainty* estimates is that $c(x)$ must
quantify the actual probability of the model getting the prediction
right (known as the true predictive uncertainty). A model now needs to
address two tasks: (1) predicting the GT label $y$, and (2) predicting
the correctness of the prediction $L$. Then, we want to obtain
$c(x) = P(L = 1)$.
## Formats of Uncertainty
Let us first consider different approaches people use to
represent/estimate uncertainty. What is the appropriate data format for
uncertainty? In the following sections, we will refer to confidence and
uncertainty "interchangeably", with
"$\text{confidence} = 1 - \text{uncertainty}$".
#### The simplest form: a scalar.
The model $f$ on input $x$ produces an output $f(x)$ *and* a scalar
confidence score $c(x) \in [0, 1]$, where
$$c(x) = \text{probability that \(f(x)\) is correct}.$$ This is a type
of *predictive uncertainty* which subsumes aleatoric and epistemic
uncertainty (and also the model not being expressive enough).
**Note**: Whenever we have a scalar in $[0, 1]$, we can treat it as a
probability.
![A blurry image of a person, generated by DALL-E. The blurriness
corresponds to high aleatoric uncertainty.](gfx/04_blur.pdf){#fig:blur
width="0.3\\linewidth"}
#### A vector.
The model can also report $c(x) \in \nR^d$, an array of scalars, as a
representation of uncertainty. The question we ask is "Which
attributes/features/concepts does the model lack confidence in?" We
attach a confidence value to each attribute (evidence) of the sample.
Consider a *person identification task*. Let the prediction
$f(x) := \text{person name}$. A possible input $x$ is shown in
Figure [4.13](#fig:blur){reference-type="ref" reference="fig:blur"}. We
might obtain the following confidence values over various evidence:
$$\begin{aligned}
c_{\text{hair color}} &= 0.99 & \text{(we kind of see it)}\\
c_{\text{eye color}} &= 0.39 & \text{(we cannot see it well)}\\
c_{\text{ear shape}} &= 0.1 & \text{(we are not sure at all)}.
\end{aligned}$$ The value $c$ can model predictive uncertainty like here
(how sure the model is in the correctness of its prediction, broken down
into confidences along various evidence), but analogously also aleatoric
uncertainty (how much variance the true $Y \mid X = x$ has along
various evidence), or epistemic uncertainty (how much uncertainty there
is arising from the lack of observations in the sample along various
evidence). For these, different evaluations exist.
#### A matrix and a vector.
This section is inspired by the work "[Probabilistic Embeddings for
Cross-Modal
Retrieval](https://arxiv.org/abs/2101.05068)" [@https://doi.org/10.48550/arxiv.2101.05068].
Uncertainty does not only arise in the outputs of discriminative models
that aim to model $Y \mid X = x$. If we want to embed our data into a
lower-dimensional space using probabilistic methods, modeling
uncertainty has several advantages. We discuss probabilistic embeddings
in
Section [\[sssec:representation_learning\]](#sssec:representation_learning){reference-type="ref"
reference="sssec:representation_learning"}; here, we only consider the
representation of uncertainty.
One can have $c(x) = \left[\mu_\theta(x), \Sigma_\theta(x)\right]$
interpreted as parameters of a distribution/density. The prediction of
the network is a distribution, not a single point. We obtain the
posterior over the embedding (probabilistic embeddings), which
represents aleatoric uncertainty:
$$P(z \mid x) = \cN\left(\mu_\theta(x), \Sigma_\theta(x)\right)$$ with
$$\Sigma_\theta(x) \in \nR^{D \times D}.$$ The network outputs a
Gaussian for each $x$, just like a Gaussian Process (GP) would.
$\Sigma_\theta(x)$ is a representation of the aleatoric uncertainty in
the embedding (covariance of $\cN$). This is a more complicated way of
uncertainty representation.
#### A "disentangled" representation.
We consider the work "[What Uncertainties Do We Need in Bayesian Deep
Learning for Computer
Vision?](https://arxiv.org/abs/1703.04977)" [@https://doi.org/10.48550/arxiv.1703.04977]
to highlight the possibility of separately obtaining aleatoric
uncertainty estimates $c_{\mathrm{al}}(x)$ and epistemic uncertainty
estimates $c_{\mathrm{ep}}(x)$.[^70] Then, we can give our *approximate*
predictive uncertainty as $c(x) = c_\mathrm{al}(x) + c_\mathrm{ep}(x)$.
Consider a regression task, and in particular, the problem of monocular
depth estimation, where the network has to output per-pixel depth
estimates from a single image. Suppose that we have a distribution
$Q(W)$ over the weights $W$ of the model by using dropout (discussed in
detail in Section [4.11.4](#sssec:dropout){reference-type="ref"
reference="sssec:dropout"}), and each model outputs a mean prediction
and a variance term that measures aleatoric uncertainty. In the
referenced paper, the authors calculate these as $$\begin{aligned}
c_\mathrm{al}(x) &= \frac{1}{T} \sum_{t = 1}^T \hat{\sigma}_t^2 \approx \nE_q\left[\hat{\sigma}_t^2\right]\\
c_\mathrm{ep}(x) &= \frac{1}{T} \sum_{t = 1}^T \hat{y}^2_t - \left(\frac{1}{T}\sum_{t = 1}^T \hat{y}_t\right)^2 \approx \operatorname{Var}_q\left[\hat{y}\right]
\end{aligned}$$ where
$$\left\{\hat{W}_t\right\}_{t = 1}^T \sim Q(W), \qquad\left[\hat{y}_t, \hat{\sigma}_t^2\right] = f^{\hat{W}_t}(x).$$
$c_\mathrm{al}(x)$ is the average learned spread (variance) of
$Y \mid X = x$ by the ensemble members and $c_\mathrm{ep}(x)$ is the
variance among the ensemble predictions. Here, $\hat{y}_t$ is a single
output scalar, corresponding to the mean prediction of model $t$ for a
particular input pixel. These uncertainties are calculated for all
pixel-wise depth predictions $\hat{y}_t$ of the different networks
$\left\{f^{\hat{W}_t} \middle| t \in \{1, \dots, T\} \right\}$. Thus,
when performing monocular depth estimation, we have as many aleatoric
and epistemic uncertainty scalars as there are input pixels.
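The following sketch (our own illustration of the two equations above) computes this decomposition from $T$ stochastic dropout passes, each returning a mean prediction and a learned variance per pixel.

```python
# Hypothetical sketch of the test-time decomposition above, given T dropout samples.
import numpy as np


def decompose_uncertainty(y_hat, sigma2_hat):
    """y_hat, sigma2_hat: arrays of shape (T, num_pixels) from T stochastic passes."""
    c_al = sigma2_hat.mean(axis=0)                              # E_q[sigma^2]: aleatoric
    c_ep = (y_hat ** 2).mean(axis=0) - y_hat.mean(axis=0) ** 2  # Var_q[y]: epistemic
    return c_al, c_ep


T, num_pixels = 25, 4
y_hat = 2.0 + 0.1 * np.random.randn(T, num_pixels)    # members roughly agree on the depth
sigma2_hat = np.full((T, num_pixels), 0.3)            # learned per-pixel noise level
c_al, c_ep = decompose_uncertainty(y_hat, sigma2_hat)
print(c_al, c_ep)  # aleatoric around 0.3 per pixel, epistemic around 0.01 per pixel
```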
The Bayesian training for a single input image $x$ is then performed by
minimizing the following loss function. This is learned loss attenuation
(attenuating the $L_2$ loss with the learned weight of error
$\sigma^2$).
$$\cL_{\mathrm{BNN}}(\theta) = \frac{1}{D} \sum_{i = 1}^D \left[\frac{1}{2\hat{\sigma}_i^2} (y_i - \hat{y}_i)^2 + \frac{1}{2}\log \hat{\sigma}_i^2\right],$$
where
$$\hat{W} \sim Q(W), \qquad \left[\hat{y}, \hat{\sigma}^2\right] = f^{\hat{W}}(x).$$
The likelihood is Gaussian and heteroscedastic (across pixels and samples).
$\hat{y}$ is the predicted mean, and $\hat{\sigma}^2$ is the predicted
variance (aleatoric uncertainty), both vectors with as many dimensions
as there are pixels. $q$ is the approximate posterior over the weights
modeled by dropout, which corresponds to epistemic uncertainty. Not only
does the formulation allow for modeling epistemic and aleatoric
uncertainty, but it also improves accuracy.
## Proper Scoring Rules
We discuss a useful and general framework for training and measuring
uncertainty estimates: the framework of *proper scoring rules*.
Considering the simplest case of scalar uncertainty estimates, we
generally want to learn a value $c(x)$ for a particular sample $x$ that
corresponds to the true probability (be it predictive, epistemic, or
aleatoric uncertainty). Luckily, there is a class of scores/losses that
*ensures this automatically*.
In subsections [4.5.1](#sssec:motivation){reference-type="ref"
reference="sssec:motivation"},
[4.5.2](#sssec:logprob){reference-type="ref" reference="sssec:logprob"},
and [4.5.3](#sssec:brier){reference-type="ref" reference="sssec:brier"},
we do not make connections to ML concepts, such as the correctness of
prediction $L = 1$. We will simply aim to match a predicted probability
$q$ of a binary event $Y = 1$ to the true probability $p$ of it. Later,
in subsection [4.5.4](#sssec:role){reference-type="ref"
reference="sssec:role"}, we will see that this is indeed very useful for
matching probabilities corresponding to different sources of uncertainty
in neural networks.
### Motivation: Binary Forecasting Task {#sssec:motivation}
Consider a simple weather forecasting task. We let subjects bet on the
probability of rain tomorrow, which is a binary random variable
$Y: \Omega \rightarrow \{0, 1\}$ according to the distribution $P(Y)$.
The prediction is the scalar $q \in [0, 1]$. We want to encourage the
prediction of the correct probability among people. To this end, we give
$S(q, Y)$ USD to subjects. $S$ is a function of the reported probability
$q$ and the true outcome $Y$. $Y = 1$ if it actually ends up raining and
$Y = 0$ otherwise. Let us assume that the subjects are rational, i.e.,
they maximize the expected money they get. We want to give the maximum
amount of money to people who predict the actual probability of rain.
How should we design $S$?
The expected reward for the subject is
$$\nE_{P(Y)} S(q, Y) = S(q, 0)P(Y = 0) + S(q, 1)P(Y = 1),$$ as $Y$ is a
binary random variable. Depending on the actual outcome, we get a
different amount of money. We wish to find a function $S$ such that
$$\max_{q \in [0, 1]} \nE_{P(Y)} S(q, Y)$$ is attained iff
$q = P(Y = 1)$, i.e., the predicted probability truly represents the
probability of rain. That is,
$$\nE_{P(Y)} S(q, Y) \le \nE_{P(Y)} S(P(Y = 1), Y)\ \forall q \in [0, 1]$$
and the equality implies $q = P(Y = 1)$. Such a function is called a
*strictly proper scoring rule*, formally defined below.
::: definition
Proper/Strictly Proper Scoring Rule Let us consider a function
$S\colon \cQ \times \mathcal{Y} \rightarrow \nR$ where $\cQ$ is a family
of probability distributions over the space $\mathcal{Y}$, called the
label space. For a particular distribution $Q(Y) \in \cQ$ and a sample
$y$ from a GT distribution $P(Y)$, the function $S$ outputs a real
number.
**Proper Scoring Rule**
$S$ is called a proper scoring rule iff
$$\max_{Q \in \cQ} \nE_{P(Y)} S(Q, Y) = \nE_{P(Y)} S(P, Y),$$ i.e., $P$
is one of the maximizers of $\nE_{P(Y)} S(\cdot, Y)$ in $\cQ$.
**Strictly Proper Scoring Rule**
$S$ is a *strictly* proper scoring rule iff
- $S$ is a proper scoring rule and
- $P$ is the *unique* maximizer of $\nE_{P(Y)} S(\cdot, Y)$ in $\cQ$:
  $\argmax_{Q \in \cQ} \nE_{P(Y)} S(Q, Y) = P$.
**Note**: The family of distributions $\cQ$ can be a parameterized
distribution with parameters $\theta \in \nR^n$ that uniquely define the
distribution. In this case, the scoring rule can also be defined over
the parameter space $\Theta$ instead of the space of distributions
$\cQ$.
:::
According to the note in the definition above, instead of considering
the family of Bernoulli distributions $\cQ$ in the forecasting example,
we can equivalently work with its parameter $q$.
Luckily, many often-used loss functions fulfill this criterion.[^71] We
will look at some examples below.
### The Log Probability is a Strictly Proper Scoring Rule {#sssec:logprob}
Define
$$S(q, y) \overset{(1)}{:=} \begin{cases}\log q & \text{if } y = 1 \\ \log(1 - q) & \text{if } y = 0\end{cases} \overset{(2)}{=} y\log q + (1 - y)\log (1 - q).$$
Using this definition, a very confidently wrong prediction gives a
$-\infty$ "reward". Note that the "reward" is always non-positive with
this score; we can think of it as "I will take less money from you if
you get the prediction right."
**Note**: The two expressions (1) and (2) above have different domains.
The first one has
$D_{S} = \left((0, 1] \times \{1\}\right) \cup \left([0, 1) \times \{0\}\right),$
whereas the second one has $D_{S} = (0, 1) \times \{0, 1\}.$ After
taking the expectation over $Y$, both expressions become functions of
$q$ alone with domain $D_{\nE_{P(Y)}S} = (0, 1).$
The expected reward for the subject is
$\nE_{P(Y)} S(q, Y) = P(Y = 0)\log(1 - q) + P(Y = 1) \log q.$
::: claim
$S$ defined above is a strictly proper scoring rule.
:::
::: proof
*Proof.* For the score $S$ to be well-defined, we have to restrict its
domain to $D_S := (0, 1) \times \{0, 1\}.$ (Otherwise, we could obtain
"$0 \cdot -\infty$" parts in the expectation below. The case distinction
formulation of the score makes $S(1, 1)$ and $S(0, 0)$ also
well-defined, but the expectation below would *not* be well-defined if
we included $q \in \{0, 1\}$.)
Let $a := P(Y = 1)$. Then
$\nE_{P(Y)} S(\cdot, Y)\colon (0, 1) \rightarrow \mathbb{R}$,
$$\begin{aligned}
\nE_{P(Y)} S(q, Y) &= P(Y = 0)S(q, 0) + P(Y = 1)S(q, 1)\\
&= (1 - a) \cdot \log(1 - q) + a\cdot \log(q).
\end{aligned}$$
To show that $S$ defined above is a strictly proper scoring rule, we can
leverage the first-order optimality condition for $q \in (0, 1)$ when
$a \in (0, 1)$. $$\begin{gathered}
\frac{\partial}{\partial q} \nE_{P(Y)} S(q, Y) = -\frac{1-a}{1-q} + \frac{a}{q} \overset{!}{=} 0\\
\iff\\
\frac{a}{q} = \frac{1-a}{1-q}\\
\iff\\
a - aq = q - aq\\
\iff\\
a = q\\
\iff\\
P(Y = 1) = \hat{P}(Y = 1).
\end{gathered}$$ $q = a$ is the only stationary point when
$a \in (0, 1)$. To verify that it corresponds to the global maximizer of
$\nE_{P(Y)} S(q, Y)$, we can use the second derivative test:
$$\frac{\partial^2}{\partial q^2} \nE_{P(Y)} S(q, Y) = -\frac{\overbrace{1-a}^{> 0}}{\underbrace{(1-q)^2}_{> 0}} - \frac{\overbrace{a}^{> 0}}{\underbrace{q^2}_{> 0}} < 0,$$
which verifies that $\nE_{P(Y)} S(q, Y)$ is strictly concave in $q$ for
$a \in (0, 1)$ and $q = P(Y = 1)$ is thus the unique maximizer.
Strictly speaking, when $a \in \{0, 1\}$, there are no stationary points
of the above formulation as $q \in (0, 1)$, according to the domain of
the score. However, in these cases, we can trivially simplify
$\nE_{P(Y)} S(q, Y)$, which allows us to extend the domain to allow
$q = a$ even in these extreme cases: $$\begin{aligned}
a = 0\colon\qquad &\nE_{P(Y)} S(q, Y) = \log(1 - q), &\text{unique maximizer is } q = 0,\\
a = 1\colon\qquad &\nE_{P(Y)} S(q, Y) = \log(q), &\text{unique maximizer is } q = 1.
\end{aligned}$$
This concludes the proof that $S$ is a strictly proper scoring rule. ◻
:::
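A quick numerical sanity check of this claim (our own illustration) evaluates the expected score on a grid of reported probabilities and confirms that the maximum sits at $q = a$:

```python
# Hypothetical numerical check: E[S(q, Y)] = (1 - a) log(1 - q) + a log(q)
# is maximized exactly at q = a.
import numpy as np

a = 0.3                                   # true P(Y = 1)
qs = np.linspace(0.01, 0.99, 9801)
expected_score = (1 - a) * np.log(1 - qs) + a * np.log(qs)
print(qs[np.argmax(expected_score)])      # ~0.30: reporting the truth pays best
```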
### The Brier Score is a Strictly Proper Scoring Rule {#sssec:brier}
Define $S(q, y) := -(q - y)^2$ where $q$ is our belief in a binary event
$Y = 1$, and $y$ is an actual outcome of the event (0 or 1) according to
random variable $Y$. The reward is higher when our belief matches the
outcome. But in proper scoring maximization, we want to maximize the
*expectation* over the random variable $Y$ (and also over $X$ when
considering an entire data distribution $P(X)$ and not just a single sample $x$). The
expected reward for the subject is
$\nE_{P(Y)} S(q, Y) = -P(Y = 0)q^2 - P(Y = 1)(1 - q)^2.$
::: claim
$S$ defined above is a strictly proper scoring rule.
:::
::: proof
*Proof.* Analogous to the proof of the log probability being a strictly
proper scoring rule. ◻
:::
### Role of Proper Scoring Rules {#sssec:role}
A proper scoring rule encourages a subject to report the true
probability $p$ of some binary event $Y = 1$ as $q$. As such, it also
encourages them to report their true beliefs, as this corresponds to
their best approximation of the true probability. Intuitively, it does
not make sense to lie. Now we turn away from considering general binary
events $Y = 1$ and consider a use case of proper scoring maximization
for ML. In particular, we can use proper scoring maximization to
encourage a model to choose its confidence value $c(x)$ such that it is
equal to the probability of getting the prediction for sample $x$ right
($L = 1 \iff Y = \hat{Y}$).
In the case of ML models, predicting the random variable $L$ implicitly
conditioned on $x$ is a binary classification task of whether we are
going to make a correct prediction. The original problem of predicting
$Y \mid X = x$ can be multi-class classification as well.
### Binary Cross-Entropy for True Predictive Uncertainty
::: definition
Binary Cross-Entropy (BCE) Loss Consider a classifier
$f\colon \cX \rightarrow [0, 1]$ that, for a particular input
$x \in \cX$, predicts the probability of $x$ belonging to class 1, i.e.,
$P(Y = 1 \mid X = x)$. For a GT label $y$ sampled from
$P(Y \mid X = x)$, the Binary Cross-Entropy (BCE) loss is defined as
$$\cL(f, x, y) = \begin{cases} -\log f(x) & \text{if } y = 1 \\ -\log(1 - f(x)) & \text{otherwise.} \end{cases}$$
This is the most prominent loss for binary classification when training
DNNs.
:::
Consider a binary prediction problem of classifying into classes 0
and 1. Let $f(x) \in [0, 1]$ be the predicted probability of model $f$
for class 1 on sample $x$. It follows that $1 - f(x) \in [0, 1]$ is the
prediction of the model for class 0. We predict class 1 when
$f(x) \ge 0.5$. Otherwise, we predict class 0.
We define our confidence measure as
$c(x) := \max \left(f(x), 1 - f(x)\right)$, called the *max-probability*
or max-prob confidence estimate between classes 0 and 1. It is easy to
see that $c(x) \in [0.5, 1]$. Other confidence estimates also exist,
such as entropy-based ones. These also consider probabilities of other
classes. (Implicitly, max-prob does, too.)
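As a small illustration (our own code, mirroring the definitions above), the max-prob confidence and the BCE loss for a single sample can be written down directly; the four case distinctions in the proof below can be traced in these two functions.

```python
# Hypothetical sketch: max-prob confidence and BCE loss for one binary sample.
import numpy as np


def max_prob_confidence(f_x):
    return max(f_x, 1.0 - f_x)          # c(x) in [0.5, 1]


def bce(f_x, y):
    return -np.log(f_x) if y == 1 else -np.log(1.0 - f_x)


f_x, y = 0.8, 1                          # f(x): predicted probability of class 1
print(max_prob_confidence(f_x))          # 0.8
print(bce(f_x, y))                       # -log 0.8 = -log c(x), since the prediction is correct
```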
We wish to make sure that $c(x)$ estimates the probability of the
prediction being correct ($L = 1$). As seen in
[4.5.2](#sssec:logprob){reference-type="ref" reference="sssec:logprob"},
we can encourage the model to report $c(x) = P(L = 1)$ (the true
predictive uncertainty) by letting the model maximize the log
probability proper scoring in expectation of $L$.
::: claim
The negative of the BCE loss is a proper scoring rule for
$c(x) := \max \left(f(x), 1 - f(x)\right)$ to report the true predictive
certainty $P(L = 1)$.
:::
::: proof
*Proof.* According to the definition of the log probability proper
scoring rule,
$$S(c, L) := \begin{cases} \log c(x) & \text{if } L = 1 \\ \log (1 - c(x)) & \text{if } L = 0.\end{cases}$$
One can observe that
- $f(x) < 0.5, Y = 0 \iff L = 1 \land S(c, L) = \log c(x) = \log (1 - f(x))$;
- $f(x) < 0.5, Y = 1 \iff L = 0 \land S(c, L) = \log(1 - c(x)) = \log f(x)$;
- $f(x) \ge 0.5, Y = 0 \iff L = 0 \land S(c, L) = \log (1 - c(x)) = \log (1 - f(x))$;
- $f(x) \ge 0.5, Y = 1 \iff L = 1 \land S(c, L) = \log c(x) = \log f(x)$.
Therefore,
$$S(c, L) = \begin{cases} \log c(x) & \text{if } L = 1 \\ \log (1 - c(x)) & \text{if } L = 0\end{cases} = \begin{cases} \log f(x) & \text{if } Y = 1 \\ \log (1 - f(x)) & \text{if } Y = 0.\end{cases}$$
Maximizing the expectation of the above encourages the true predictive
uncertainty when our confidence measure is
$c(x) = \max(f(x), 1 - f(x))$. This is exactly the log-likelihood
criterion for binary classification. Maximizing this reward on a
training set is equivalent to minimizing the BCE loss (negative
log-likelihood). ◻
:::
**Conclusion**: BCE encourages not only the correctness of
classification $f(x)$ but also the truthfulness of the max-prob
confidence $c(x) = \max (f(x), 1 - f(x))$. BCE is excellent in this
regard.
#### Remarks for binary cross-entropy
When the prediction is correct, $\log c(x)$ reward is given. As
$c(x) \ge 0.5$, we can, at worst, obtain $\log 0.5$ reward when our
prediction is correct. When the prediction is incorrect, but $c$ is very
large, we can obtain an arbitrarily negative reward. We can see the role
of aleatoric uncertainty, as $Y$ is random. We can also see the role of
epistemic uncertainty, as $P(Y = \hat{Y})$ depends on whether the model
has seen such a sample already or not.
**Note**: Looking at the log probability proper scoring rule, one might
mistakenly think that naively setting $c(x) = 1$ is enough to maximize
the expected reward on sample $x$ when the model is correct according to
one labeling. However, $L$ is a random variable because
$L = \bone(Y = \hat{Y})$ and $Y$ is a random variable. There is an inherent
stochasticity in $L$ whenever $P(Y \mid X = x)$ has a non-zero entropy:
We want $c(x)$ to maximize the *expected* reward, not just the reward
for one particular observation of $L$.
#### Proper Scoring Maximization on Finite Datasets
When performing ERM, we have no expectation over the loss. We have
deterministic $(x, y)$ pairs in our training set and minimize BCE on the
batches. (Multiples *can* be present in the dataset with different
labels. Very similar inputs can also correspond to different labels. But
every $(x, y)$ pair we have is deterministic.) In this case, we have no
guarantee of recovering the true predictive uncertainty $P(L = 1)$ for
all samples. We only have the guarantee of recovering the empirical
probabilities $\hat{P}(L = 1)$ based on our dataset. We also have no
guarantees of how faithful our predictive uncertainty scores are on
unseen (e.g., OOD) samples, as we can arbitrarily overfit our predictive
uncertainty predictions. This is important to keep in mind.
Therefore, the max-prob confidence estimates are only encouraged to
match the empirical probability of correctness on the training set.
When we consider the idealistic case of having infinitely many samples
from $P(X)$ (i.e., we optimize the expectation), then we have the
guarantee that $c(X)$ will recover $P(L = 1)$ for all samples
$X \sim P(X)$.
By optimizing the BCE, our model also becomes better on the training
samples (until a certain point, given by how expressive the model is).
Therefore, the well-calibratedness -- as measured by log probability
proper scoring -- and the accuracy usually improve hand-in-hand.[^72] We
saw above that BCE encourages the prediction of the true probability of
correctness. We can consider two corner cases here, depending on the
expressivity of our model.
1. Consider a shallow model, such as a logistic regression classifier.
Further, assume that the dataset's generative model is non-linear;
there is model misspecification. Unfortunately, even in the limit of
infinite data, training with the BCE loss (and in general with any
negative proper scoring rule) *does not ensure* that we get
well-calibrated predictive uncertainty estimates. Proper scoring
rules only guarantee that they are maximized at the GT distribution
in expectation. They do not give any guarantees for calibration when
this maximizer cannot be attained in our function class. However,
when our estimator is consistent, we are guaranteed to have
calibrated predictive uncertainty estimates in the limit of infinite
data when using strictly proper scoring rules.
2. Now, let us assume that we have a very expressive model: one that is
capable of fitting to the generative model extremely well. When
trained with the BCE loss, in the limit of infinite data, the model
will give very accurate predictive uncertainty estimates. If we
consider a case with low aleatoric uncertainty, these estimates will
be very confident in the model being correct -- and the model will
indeed be correct most of the time.
Using only this criterion, it is hard to end up with an expressive model
that is well-calibrated but inaccurate, as both properties are optimized
simultaneously.
### Multi-Class Cross-Entropy (CE) for True Predictive Uncertainty {#ssec:ce_pu}
::: definition
Multi-Class Cross-Entropy (CE) Loss Consider a classifier
$f\colon \cX \rightarrow \Delta^{K}$ that, for a particular input
$x \in \cX$, predicts an element of the $(K-1)$-dimensional probability
simplex, i.e., predicts a vector of probabilities corresponding to each
class. For a GT label $y$ sampled from $P(Y \mid X = x)$, the
(multi-class) Cross-Entropy (CE) loss is defined as
$$\cL(f, x, y) = -\log f_y(x).$$ This is the most prominent loss for
multi-class classification when training DNNs.
:::
In multi-class classification, we usually use CE as our loss function.
We will see that it also encourages the correct predictive confidence.
Let $f(x) \in \nR^K$ be a vector of probabilities for each class
$k \in \{1, \dotsc, K\}$. That is,
$\forall i \in \{1, \dotsc, K\}\colon$ $f_i(x) \ge 0$ and
$\sum_{i = 1}^K f_i(x) = 1$. We can define our confidence measure as the
max-probability among class probabilities: $c(x) := \max_k f_k(x).$
Then, just like before, we could apply the log probability proper
scoring rule. This rewards the model for how correct it is on its own
most likely prediction. But notice the following, using the shorthand
$k_\mathrm{max} := \argmax_k f_k(x)$: $$\begin{aligned}
&S(c, L)\\
&= \begin{cases} \log c(x) & \text{if } L = 1 \\ \log (1 - c(x)) & \text{if } L = 0\end{cases}\\
&= \begin{cases} \log \max_k f_k(x) & \text{if } Y = k_\mathrm{max} \\ \log \sum_{k \ne k_\mathrm{max}} f_k(x) & \text{if } Y \ne k_\mathrm{max}\end{cases}\\
&= \begin{cases} \log f_Y(x) & \text{if } Y = k_\mathrm{max} \\ \log\left(f_Y(x) + \sum_{k: k \notin \{Y, k_\mathrm{max}\}} f_k(x)\right) & \text{if } Y \ne k_\mathrm{max} \end{cases}\\
&\ge \log f_Y(x).
\end{aligned}$$
In both cases, the proper scoring rule $S$ is bounded from below by
$\log f_Y(x)$, i.e., the log probability the model assigns to the true
class. The negative log probability $-\log f_Y(x)$ is the CE loss, one
of the most widely used losses for training classifiers. Maximizing the
lower bound $\log f_Y(x)$ (minimizing the CE loss) encourages
$c(x) = \max_k f_k(x)$ to be the truthful predictive uncertainty (either
$\hat{P}(L = 1)$ or $P(L = 1)$, depending on whether we consider the
expectation or its Monte Carlo (MC) approximation). While in general,
when maximizing a lower bound, we do not have any guarantee that we also
maximize the original objective, we can prove just that here: In
Section [\[ssec:proper_au_pu\]](#ssec:proper_au_pu){reference-type="ref"
reference="ssec:proper_au_pu"}, we will prove that this lower bound is
*also* a strictly proper scoring rule for the correctness of prediction
(thereby saving the CE loss's reputation). In that chapter, we will also
uncover important relationships between proper scoring rules for
predictive uncertainty and aleatoric uncertainty.
### Strictly Proper Scoring Rules can Behave Differently
We have now discovered two strictly proper scoring rules for the
correctness of prediction: the log probability of the model's most
likely class and the log probability of the true class. Which one should
we use? The important bit is that being strictly proper does not
necessarily mean that they are also good training objectives. When
training deep neural networks, we are solving a highly non-convex
optimization problem. Different objectives might induce noisier and more
complex loss surfaces: It could be that one of the scoring rules
provides a better regularization of the loss surface (which, perhaps, is
smoother). In that sense, it is also meaningful to empirically compare
the two scores.
::: information
Benchmarking Strictly Proper Scoring Losses Let us compare training with
the objective
$$\cL_1(f(x), y) = \begin{cases} -\log \max_k f_k(x) & \text{if } y = \argmax_{k} f_k(x) \\ -\log \sum_{k \ne \argmax_{k'} f_{k'}(x)} f_k(x) & \text{if } y \ne \argmax_k f_k(x),\end{cases}$$
to usual CE training using $$\cL_2(f(x), y) = -\log f_y(x).$$ This
experiment is conducted in the [linked
notebook](https://colab.research.google.com/drive/1Y5HZSD7lMBulUrraftGP6YTSbxJR_k73?usp=sharing).
For a toy dataset like MNIST, a shallow CNN (3 convolutional layers)
fits the training data very well with both losses and produces
equivalent results across the ECE, log probability, and Brier Score
metrics. However, training with $\cL_1(f(x), y)$ converges more slowly,
even after tuning hyperparameters to have a fair comparison.
[Comparing](https://colab.research.google.com/drive/1OR0KDD9JC2aoBaHK0Fb25leA9X-g3iGS?usp=sharing)
the losses on a slightly more realistic dataset, CIFAR-10, the model is
not expressive enough to get close to interpolating the training
dataset. The network trained with CE achieves an accuracy of around 67%.
The $\cL_1(f(x), y)$ loss variant converges even more slowly than before, and
plateaus much earlier. Even after hyperparameter tuning, it only reaches
an accuracy of 54% on average. Even though the solution sets are
identical, the loss surface corresponding to $\cL_1(f(x), y)$ is
considerably noisier. Regarding calibration, the Brier Score and
log-probability scores are higher for the NLL-trained network (which is
partly expected because it also has a considerably higher accuracy) but
the ECE value for the $\cL_1(f(x), y)$ loss network is very slightly
better. Checking how the uncertainty estimates perform in predicting
aleatoric uncertainty would also be a curious research objective.
In conclusion, $\cL_1(f(x), y)$ *can* train a model, but generally with
worse accuracy and predictive uncertainty estimates (as measured by
proper scoring rules). This might come as a surprise, given that
minimizing a proper scoring loss directly tries to optimize the metric
we evaluate on. However, numerical optimization can be quite unintuitive
and is generally unpredictable. Not all strictly proper scoring rules
are equally good training objectives.
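For concreteness, a hypothetical PyTorch sketch of the $\cL_1$ objective is given below (our own code; the referenced notebooks may implement it differently).

```python
# Hypothetical sketch of the max-prob log score loss L1 next to the usual CE loss L2.
import torch
import torch.nn.functional as F


def maxprob_log_score_loss(logits, targets, eps=1e-12):
    probs = F.softmax(logits, dim=1)
    max_prob, pred = probs.max(dim=1)
    correct = pred.eq(targets)
    # -log c(x) if the prediction is correct, -log(1 - c(x)) otherwise
    loss = torch.where(correct,
                       -torch.log(max_prob + eps),
                       -torch.log(1.0 - max_prob + eps))
    return loss.mean()


def ce_loss(logits, targets):   # the usual objective L2: -log f_y(x)
    return F.cross_entropy(logits, targets)
```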
:::
### Multi-Class Brier Score
Some researchers also report the multi-class Brier score:[^73]
$$S(f(x), y) = -(1 - f_{y}(x))^2 - \sum_{k \ne y} f_k(x)^2.$$
::: claim
The above multi-class Brier score provides a lower bound on the Brier
score for the max-prob confidence estimate, $S(c, l) = -(c(x) - l)^2,$
where $l$ is a realization of the Bernoulli random variable $L$.
:::
::: proof
*Proof.* $$\begin{aligned}
S(c, l) &= -(c(x) - l)^2\\
&= \begin{cases} -(c(x) - 1)^2 & \text{if } l = 1 \\ -c(x)^2 &\text{if } l = 0 \end{cases}\\
    &= \begin{cases} -\left(\max_k f_k(x) - 1\right)^2 &\text{if } y = \argmax_k f_k(x) \\ -\left(\max_k f_k(x)\right)^2 &\text{if } y \ne \argmax_k f_k(x) \end{cases}\\
&= \begin{cases} -\left(1 - f_y(x)\right)^2 &\text{if } y = \argmax_k f_k(x) \\ -\left(\max_k f_k(x)\right)^2 &\text{if } y \ne \argmax_k f_k(x) \end{cases}\\
&\ge \begin{cases} -\left(1 - f_y(x)\right)^2 &\text{if } y = \argmax_k f_k(x) \\ -\sum_{k \ne y}f_k(x)^2 &\text{if } y \ne \argmax_k f_k(x) \end{cases}\\
&\ge -\left[(1 - f_y(x))^2 + \sum_{k \ne y} f_k(x)^2\right]\\
&= S(f(x), y).
\end{aligned}$$ ◻
:::
Perhaps unsurprisingly, this lower bound is, in fact, also a strictly
proper scoring rule for the correctness of prediction. We will show this
in
Section [\[ssec:proper_au_pu\]](#ssec:proper_au_pu){reference-type="ref"
reference="ssec:proper_au_pu"}.
::: information
Can learning theory be used for uncertainty guarantees? We have not yet
seen learning theory used for uncertainty prediction. In learning
theory, we have many results based on the 0-1 loss and binary
classification. In predictive uncertainty, we also have a binary
classification problem: Is the prediction correct or not? However, it is
not a standalone classification problem. First, we make a prediction,
and then based on that, we can make the meta-output of whether the
prediction was correct. It would be interesting to have such results,
but it is very underexplored at the moment.
:::
### Empirical Evaluation of Predictive Uncertainties
#### Using a Test Set to Measure Generalization
As discussed previously, a good objective does not necessarily imply
that the final trained model behaves nicely if we train with that
objective. For the training set samples, it trivially does. However, we
can still arbitrarily overfit to training set samples (during the
optimization, anything can go wrong) and be very confidently wrong on
test samples. The model then fails to represent its uncertainty
generally. This is already problematic for ERM without uncertainty
quantification. So we need some metrics to evaluate the uncertainty
estimates on test sets.
#### Using Proper Scoring Rules to Evaluate Predictive Uncertainties
We need empirical evaluation for predictive uncertainty. For empirical
evaluation, we always need a sensible evaluation metric. And what metric
could be better than one that we know is optimized in expectation if and
only if the reported probabilities match the true ones? (Strictly)
proper scoring rules to the rescue!
**Log probability.** As the log probability is a strictly proper scoring
rule for the correctness of prediction, we can use the average CE (NLL)
over the test samples as the evaluation metric (where lower is better)
for multi-class classification:
$$\cL_\mathrm{NLL} = -\frac{1}{N_\mathrm{test}}\sum_{i = 1}^{N_\mathrm{test}} \log f_{y_i}(x_i).$$
Luckily, many papers report NLL tables besides, say, accuracy or RMSE.
This allows judging the correctness of confidence predictions.
In NLP, people use perplexity instead of CE (especially for language
models, used in benchmarks), which is very similar to CE:
$$\begin{aligned}
\cL_\mathrm{NLL} &= -\frac{1}{N_\mathrm{test}}\sum_{i = 1}^{N_\mathrm{test}} \log f_{y_i}(x_i)\\
\cL_\mathrm{Perplexity} &= 2^{-\frac{1}{N_\mathrm{test}}\sum_{i = 1}^{N_\mathrm{test}} \log_2 f_{y_i}(x_i)}
\end{aligned}$$
The perplexity is the exponentiated NLL value, using base 2 in both the
exponential and the logarithm.[^74] It shows the same information but is
generally deemed more intuitive for the following reasons.
1. Perplexity can be interpreted as the weighted average branching
factor of a language model [@10.5555/555733]. In the context of
language models, the branching factor refers to the number of words
that can follow a given context (with non-zero probability). The
word 'weighted' is used because the language model usually assigns
different probabilities to different words that can follow --
perplexity takes this into consideration. A lower perplexity means
the language model is less "perplexed" or less uncertain, i.e., it
is more confident in its predictions. This intuition can be easier
to understand compared to the raw log-likelihood.
2. Exponentiating with base 2 "undoes" the $\log_2$ operation, bringing
the metric back into the probability space.
**Note**: One can verify that larger LLMs seem to have lower test
perplexities, meaning they *seemingly* give better predictive
uncertainty estimates
(Figure [4.14](#fig:perplexity){reference-type="ref"
reference="fig:perplexity"}). However, the NLL and perplexity metrics mix
calibration with accuracy (see above). Therefore, we should only
conclude that larger LLMs fit the data distribution better, which is not
a surprising outcome.
![Leaderboard of perplexity of Penn Treebank on
04.03.2023 [@perplexityleaderboard]. Test perplexity shows a decreasing
trend with increasing model
capacity.](gfx/04_perplexity.png){#fig:perplexity width="\\linewidth"}
**Multi-class Brier score.** As the multi-class Brier score is also a
proper scoring rule for the correctness of prediction, we can evaluate
our predictions using the loss
$$\cL_\mathrm{Brier} = \frac{1}{N_\mathrm{test}} \sum_{i = 1}^{N_\mathrm{test}} \left[(1 - f_{y_i}(x_i))^2 + \sum_{k \ne y_i} f_k(x_i)^2\right].$$
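The sketch below (our own illustration) computes the three test-set metrics above from a matrix of predicted class probabilities and the GT labels.

```python
# Hypothetical sketch: NLL, perplexity, and multi-class Brier score on a test set.
import numpy as np


def nll(probs, labels):
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))


def perplexity(probs, labels):
    log2_p = np.log2(probs[np.arange(len(labels)), labels] + 1e-12)
    return 2.0 ** (-np.mean(log2_p))


def brier(probs, labels):
    onehot = np.zeros_like(probs)
    onehot[np.arange(len(labels)), labels] = 1.0
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))   # = (1 - f_y)^2 + sum_{k != y} f_k^2


probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
print(nll(probs, labels), perplexity(probs, labels), brier(probs, labels))
```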
**Remarks for the two previous examples.** The lower $\cL_\mathrm{NLL}$
and $\cL_\mathrm{Brier}$ are, the better our predictive uncertainty
estimates are. However, there are a few important things to keep in
mind.
1. We do not know the lowest achievable value of these metrics in
expectation over the data generating process. It depends on the
aleatoric uncertainty $P(Y \mid X = x)$ on samples
$X \sim P(X)$.[^75]
2. The NLL can be challenging to interpret. If we take its exponential,
then we *roughly* get the average probability assigned to the
correct class -- not exactly because of the order of sum and exp.
For the correctness of prediction, this still does not give rise to
an intuitive explanation. Further, it is unbounded from above and
bounded from below by the true aleatoric uncertainty, which is
generally unknown. The Brier score can be easier to interpret in
this regard.
3. In general, proper scoring rules for predictive uncertainty using
max-prob *mix good calibration with good accuracy*. Notably, this is
not the case for ECE
(Section [4.6](#sec:calibration){reference-type="ref"
reference="sec:calibration"}) that can capture calibration
*independently* from accuracy.
4. The pointwise Bayes predictor (the predictor with the minimal
pointwise risk), $P(Y \mid X = x)$, optimizes these scoring
rules with a max-prob confidence estimate, but it also
optimizes proper scoring rules for aleatoric uncertainty.
Therefore, epistemic uncertainty is not taken into account -- proper
scoring only gives statements in expectation over labels, and the
Bayes predictor necessarily has an epistemic uncertainty of zero as
it only models aleatoric uncertainty.
## A New Notion of Calibration {#sec:calibration}
We have seen that proper scoring rules can be used to define a notion of
calibration, but their values are often hard to interpret. In this
section, we discuss an easily interpretable notion of calibration.
However, we will also see that, unlike proper scoring rules, it can be
cheated.
### Evaluating Calibration
Let us first discuss how we can [evaluate
calibration](https://arxiv.org/abs/1706.04599) [@https://doi.org/10.48550/arxiv.1706.04599],
quantifying it in an alternative way compared to proper scoring rules.
Let the input be $x \in \cX$, the output be
$y \in \cY = \{1, \dots, K\}$ (multi-class classification problem) and
the model output be $$h(x) = (\hat{y}, c(x)),$$ which is a pair of the
class prediction and the confidence estimate, respectively. $c(x)$ does
not have to be a max-prob confidence estimate.
::: definition
Perfect Calibration A model is *perfectly calibrated* if
$P(\hat{Y} = Y \mid C = c) = c\quad \forall c \in [0, 1].$
:::
Intuitively, for confidence level $c$, the probability of correct
prediction should be $c$, as the confidence level should faithfully
reflect the probability of correctness. This is very similar to what we
meant by the correct prediction of predictive uncertainty.
**Example for the empirical probability in practice**: Predictions for
samples in our dataset with confidence score $c = 0.8$ should be
correct exactly $80\%$ of the time. A rough outline of a procedure that checks
for this (refined later) can be given as follows.
1. *Collect all samples in the test dataset with confidence score
$c = 0.8$.*
2. Compute the accuracy across these collected samples.
3. Check whether this gives us $80\%$ accuracy.
::: definition
Model Calibration *Model calibration* is defined as
$$\nE_{{c} \sim C}\left[\left|P(\hat{Y} = Y \mid C = c) - c\right|\right] = \int \left|P(\hat{Y} = Y \mid C = c) - c\right| dC(c).$$
:::
Informally, model calibration quantifies the deviation of our model from
perfect calibration. Of course, in practice, we do not have access to
the data generating process and, therefore, cannot compute model
calibration. If we resort to empirical probabilities, a problem with the
rough outline we discussed above is that we never have samples with
exactly the same confidence scores, so we cannot calculate the model's
accuracy on them this way. An easy fix is to *introduce binning*. The
Expected Calibration Error (ECE) metric does exactly that.
::: definition
Expected Calibration Error (ECE) *Expected Calibration Error* is a
finite approximation of model calibration that uses binning:
$$\mathrm{ECE} = \sum_{m = 1}^M \frac{|B_m|}{n} \left|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right|$$
where $$\begin{aligned}
\mathrm{acc}(B_m) &= \frac{1}{|B_m|} \sum_{i \in B_m} \bone\left(\hat{y}_i = y_i\right),\\
\mathrm{conf}(B_m) &= \frac{1}{|B_m|} \sum_{i \in B_m} c_i.
\end{aligned}$$
:::
The ECE measures the deviation of the model's confidence predictions
from the corresponding actual accuracies on a test set. It is a weighted
average of bin-wise miscalibration. $\mathrm{acc}(B_m)$ is the
proportion of correct predictions (the accuracy) in the $m$th bin, and
$\mathrm{conf}(B_m)$ is the average confidence in the $m$th bin. We take
the average of the confidences to ensure we follow the actual confidence
values in this range more precisely. Further, we weight by the bin size
for the correct approximation of the expectation:
$\hat{C}(c) = \frac{|B_m|}{n}$.
Computing the ECE in practice can be done as follows.
1. Train the neural network on the training dataset.
2. Create predictions and confidence estimates using the test data.
3. Group the predictions into $M$ bins (typically $M = 10$) based on
the confidence estimates. Define bin $B_m$ to be the set of all
predictions $(\hat{y}_i, c_i)$ for which it holds that
$$c_i \in \left(\frac{m - 1}{M}, \frac{m}{M}\right].$$
4. Compute the accuracy and confidence of each bin $B_m$ using the
above formulas for $\mathrm{acc}(B_m)$ and $\mathrm{conf}(B_m)$.
5. Compute the ECE by taking the mean over the bins weighted by the
number of samples in them.
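The binning procedure above can be written out directly. The sketch below (NumPy; `confs` holds max-prob confidence estimates and `correct` the binary correctness labels, both hypothetical names) computes the ECE and, as a by-product, the MCE discussed later.

```python
import numpy as np

def ece_mce(confs, correct, n_bins=10):
    # confs: (N,) confidence estimates in [0, 1]; correct: (N,) binary correctness labels
    n = len(confs)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for m in range(n_bins):
        # bin B_m contains confidences in ((m-1)/M, m/M]; put exact zeros into the first bin
        in_bin = (confs > edges[m]) & (confs <= edges[m + 1])
        if m == 0:
            in_bin |= confs == 0.0
        if not np.any(in_bin):
            continue
        acc = np.mean(correct[in_bin])      # acc(B_m)
        conf = np.mean(confs[in_bin])       # conf(B_m)
        gap = abs(acc - conf)
        ece += (np.sum(in_bin) / n) * gap   # weighted average of bin-wise gaps
        mce = max(mce, gap)                 # worst-case bin-wise gap
    return ece, mce
```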
::: information
Relationship of the above metrics What we would ideally want to achieve
is that the model returns *truthful predictive uncertainty estimates*,
i.e., $c(x) = P(L=1 \mid x)\ \forall x$. However, that is impossible to
measure. So we measure a necessary (not sufficient!) condition: If the
model always returns truthful predictive uncertainty estimates, then it
also needs to be *perfectly calibrated* (across all $x$ that have the
same $c(x)$).
This condition is quantified by the *model calibration*: The model
calibration is zero if and only if the model is perfectly calibrated. To
measure this in practice, we need to approximate it by the *ECE*. This
is basically a discretized version of the model calibration integral.
Due to the approximation, we cannot theoretically guarantee that an ECE
of 0 implies a model calibration of 0 or vice versa (and, in fact, we
show how to game both below). But an ECE close to zero means the model
calibration should also be close to zero. This, in turn, at least
checks one of the boxes a model with truthful predictive uncertainties
has to fulfill. It is the best we can do in practice.
:::
While the ECE is a useful metric, for high-risk applications we might be
interested in worst-case metrics. The *Maximum Calibration Error*
computes such a worst-case discrepancy.
::: definition
Maximum Calibration Error The *Maximum Calibration Error* is a useful
metric for high-risk applications:
$$\mathrm{MCE} = \max_{m \in \{1, \dotsc, M\}} \left|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right|.$$
:::
MCE computes the maximal bin-wise miscalibration (difference between
empirical accuracy and average confidence value). This might be a very
pessimistic metric if for
$$m' := \argmax_{m \in \{1, \dotsc, M\}} \left|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right|,$$
$\frac{|B_{m'}|}{n}$ is very small, depending on our end goal. For
high-risk applications, we could also define the worst-case ECE per
class if our concern is per-class performance.
### Gaming the ECE Metric
ECE is usually a good *indicator* of whether something is fairly
well-calibrated. Its main advantage is that ECE scores are often more
interpretable and intuitive than proper scoring rules, as they denote
deviations from the perfect calibration in a bounded manner: The ECE is
a number between 0 and 1. It tells us how much we are deviating from the
$x = y$ line as a weighted average. In comparison, NLL scores can be
arbitrarily large. When we consider the log probability, the sign flips,
which can be confusing. We cannot immediately tell what is good or bad.
It is difficult to interpret what the numbers mean, and it heavily
depends on the scoring rule of choice.
Although it has many nice properties, *the ECE is not a proper scoring
rule*. One can easily achieve $\text{ECE} = 0$ (the minimal value) even
when the model is not reporting the true predictive uncertainties. This
can give us a false sense of calibration and can kill the purpose of the
metric. In particular, suppose we predict a constant $c$ for all samples,
where $c = P(\hat{Y} = Y)$ is the global accuracy of the model on the
data distribution. Then the conditional probability is only defined for
$c = P(\hat{Y} = Y)$, as this is the only value with a positive measure
(i.e., we have a Dirac measure at the global accuracy), and for this
value, the definition holds by construction. To game the ECE metric, one
does not even need access to labeled validation data. All one needs to
know is the prior probability of correctness, $P(\hat{Y} = Y)$. The same
trick can game the more theoretical notion of model calibration.
Therefore, perfect calibration does *not* imply that $c(x) = P(L = 1 \mid x)$,
i.e., that $c(x)$ is the GT probability of predicting the output
correctly for all individual inputs $x$. Predictive uncertainties can be
arbitrarily incorrect per sample ($c(x) \ne P(L = 1 \mid x)$). This is because
the *conditional* probability $P(\hat{Y} = Y \mid C=c)$ *aggregates* all
samples with the same value $c(x)$. As long as this group has the
correct accuracy on average, it is considered perfect. The intention of
ECE and related metrics is still to ensure $c(x) = P(L = 1 \mid x)$, but they
fail to fully encode this requirement. This can be exploited to, e.g.,
win competitions and benchmarks.
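To make this failure mode concrete, here is a toy sketch (NumPy, with synthetic data; it reuses the hypothetical `ece_mce` helper sketched above) in which a predictor that outputs the global accuracy as a constant confidence achieves an ECE of zero while carrying no per-sample information.

```python
import numpy as np

rng = np.random.default_rng(0)
correct = rng.binomial(1, 0.8, size=10_000)        # the model is right ~80% of the time
constant_conf = np.full(10_000, correct.mean())    # predict the global accuracy everywhere

# ECE is exactly zero, although the confidence says nothing about individual samples
print(ece_mce(constant_conf, correct)[0])
```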
Another important drawback of the ECE metric is that it depends on the
binning. Using twenty bins gives us a different score than using ten.
Ideally, there would be an agreed-upon number of bins across papers and methods;
this is usually ten, but several papers use different numbers as well.
However, fixing the number of bins is probably not a good idea in the long run:
there are pros and cons to doing so. Eventually, models
will be making more and more correct predictions. We should probably
make binning more fine-grained near the $90\% - 100\%$ confidence range,
as there will probably be a lot more samples there.
### Reliability Diagrams
Instead of quantifying calibration in a single number, we can also
*visualize* how well-calibrated a model is by leveraging *reliability
diagrams*
(Figure [\[fig:reliability\]](#fig:reliability){reference-type="ref"
reference="fig:reliability"}).
::: definition
Reliability Diagram A reliability diagram is a visualization of model
calibration that uses binning. It is calculated as follows.
1. Bin through different confidence values and take the mean accuracy
per bin on the test set: for each bin, calculate $\mathrm{acc}(B_m)$
and $\mathrm{conf}(B_m) - \mathrm{acc}(B_m)$ as defined previously.
2. Visualize the discrepancies between the bin-wise accuracies and
confidences using a barplot.
:::
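A sketch of this recipe (NumPy/Matplotlib, reusing the hypothetical `confs` and `correct` arrays from before) plots the per-bin accuracies against the diagonal of perfect calibration.

```python
import numpy as np
import matplotlib.pyplot as plt

def reliability_diagram(confs, correct, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    accs = np.zeros(n_bins)
    for m in range(n_bins):
        in_bin = (confs > edges[m]) & (confs <= edges[m + 1])
        if np.any(in_bin):
            accs[m] = np.mean(correct[in_bin])      # acc(B_m)
    plt.bar(centers, accs, width=1.0 / n_bins, edgecolor="k", label="accuracy per bin")
    plt.plot([0, 1], [0, 1], "--", color="gray", label="perfect calibration")
    plt.xlabel("confidence")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()
```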
![image](gfx/confidence.pdf){width="0.5\\columnwidth"}
![image](gfx/comparison_with_caruana.pdf){width="0.5\\columnwidth"}
Reliability diagrams allow us to judge whether a model is under- or
overconfident (or a mixture). While the ECE only measures the magnitude of
the accuracy-confidence gap, the diagram tells us whether the actual accuracy is higher
or lower than the model predicts. If the accuracy bars lie above the diagonal, the model
is *underconfident*. If they lie below, it is *overconfident* (as in
Figure [\[fig:reliability\]](#fig:reliability){reference-type="ref"
reference="fig:reliability"}).
Reliability diagrams also allow us to look at the MCE, while ECE can
often hide that. But they do not allow inferring the ECE because we do
not know the bin sizes (the weights). Seemingly large discrepancies
might be weighted with a negligible weight if only a couple of samples
are in those bins. If the model on the right had a tiny gap for the last
bin, it could have a lower ECE value than the one on the left. Even if
the weights are reported as histograms along with the reliability
diagrams (the original paper did this, but most follow-ups drop this),
the reliability diagram might still give the wrong impression *at first
glance*.
![Connection between the reliability diagram and the ECE, MCE scores.
Accuracy: $P(\hat{Y} = Y \mid C = c)$, confidence: $C = c$. Figure taken
from [@fluri].](gfx/04_reli2.pdf){#fig:connection
width="0.5\\linewidth"}
The connection between reliability diagrams and the ECE and MCE scores
can be seen in Figure [4.15](#fig:connection){reference-type="ref"
reference="fig:connection"}. Note that the plot starts at 0.1 and not at
0. This is not a coincidence: If we use the max-prob class as a
prediction, its lowest possible $c$ can only be $1/K$. This becomes even
more visible when we only have 10 or 2 classes.
For binary classification, there is also a second definition of
reliability where the y-axis shows the probability of the positive
class. Thus, it always starts at 0 and does not include the mind-flip
that the confidence may also be the probability of the 0 class. However,
it requires a different mind-flip: An underconfident model, in this
case, would have an S-shaped diagram. In the definition
of [@https://doi.org/10.48550/arxiv.1706.04599] above, an underconfident
model has a curve that is always above the line. This version of a
reliability diagram is common in traditional statistics, where classes
are not equal, but the 1 class is more important. So, if one sees a
binary reliability diagram, it is better to double-check its axis
labels.
## Summary of Evaluation Tools for the Truthfulness of Confidence
Let us provide a collection of evaluation tools for the truthfulness of
confidence (predictive uncertainty).
#### Proper Scoring
As we have seen before, one can use the negative log-likelihood (NLL)
loss or the log probability scoring rule on the test dataset to evaluate
the truthfulness of predictive uncertainty estimates. Similarly, one can
use the Brier score or its multi-class variant on a test dataset. These
are all proper scoring rules/losses for the correctness of
prediction.[^76]
#### Metrics Based On Model Calibration
One can use the ECE score for an expected deviation from perfect
calibration (in a binned fashion). For high-risk applications where we
are concerned with the "worst-case bin," one can also employ the MCE
score.
It is also possible to visualize calibration by using reliability
diagrams. However, it is also important to plot confidence histograms,
as reliability diagrams alone can be misleading.
These metrics/visualization tools are all used for predictive
uncertainty (correctness of prediction $L = 1$).
## Excursus: How well-calibrated are DNNs?
Let us consider some findings from the literature on DNN calibration.
### On Calibration of Modern Neural Networks
We discuss the seminal paper titled "[On Calibration of Modern Neural
Networks](https://arxiv.org/abs/1706.04599)" [@https://doi.org/10.48550/arxiv.1706.04599].
In particular, we refer to
Figure [\[fig:reliability\]](#fig:reliability){reference-type="ref"
reference="fig:reliability"}. Both LeNet and ResNet are trained with the
NLL loss, which is the negative of a lower bound of a proper scoring
rule for multi-class predictive uncertainty under max-prob. According to
the Figure, LeNet is relatively well-calibrated, and ResNet performs
worse than LeNet regarding calibration.
It is important to note that this finding is not a general observation.
ResNet-50s usually perform well on calibration
benchmarks [@galil2023learn]. Training procedures and best practices
since this work have also improved considerably, which might have
compounding effects on the results shown in
Figure [\[fig:reliability\]](#fig:reliability){reference-type="ref"
reference="fig:reliability"}.
::: information
The Use of ResNets in Modern DL
In medium-sized models, ResNets are still among the top performers (see
the "[What Can We Learn From The Selective Prediction And Uncertainty
Estimation Performance Of 523 Imagenet
Classifiers](https://arxiv.org/abs/2302.11874)" paper [@galil2023learn]).
They are often used in practice as "the smallest possible model that
still allows experimenting with DL."
:::
![Influence of depth, filters per layer, batch normalization, and weight
decay on the error and calibration of different ConvNet architectures.
Figure taken
from [@https://doi.org/10.48550/arxiv.1706.04599].](gfx/04_miscal.pdf){#fig:miscal
width="\\linewidth"}
**Why is this the case?** Let us consider
Figure [4.16](#fig:miscal){reference-type="ref" reference="fig:miscal"}.
Greater model capacity is known to improve model
generalizability [@goodfellow2016deep]. We can see a decrease in error
as the capacity increases.[^77] However, it also leads to greater
miscalibration. We can see an increase in ECE. In particular, increasing
the depth or the number of filters per layer ("width") results in
worse calibration (higher ECE).
![Test error and NLL of ResNet-110 over a training run. While the test
NLL starts to overfit (i.e., uncertainty estimates become less
calibrated), the error keeps decreasing. NLL is scaled in order to fit
the Figure. Note the scheduled LR drop at epoch 250. Figure taken
from [@https://doi.org/10.48550/arxiv.1706.04599].](gfx/04_miscal2.pdf){#fig:miscal2
width="0.6\\linewidth"}
Let us now turn to Figure [4.17](#fig:miscal2){reference-type="ref"
reference="fig:miscal2"}. We can measure predictive uncertainty
faithfulness with the test NLL. At epoch 250, we have a scheduled LR
drop. Both the test error and test NLL decrease a lot. The grey area is
between epochs in which the best validation loss and validation error
are produced. The test NLL tends to increase after epoch 250. It shows
the overfitting of $c(x)$ to the training samples. It does not go back
to epoch 250 levels, not even after the scheduled LR drop at epoch 375.
The test error also shows a little overfitting, as it increases by
$1-2\%$ after epoch 250. However, it drops again after the scheduled LR
drop at epoch 375, surpassing epoch 250 levels. The authors draw the
following conclusions. "In practice, we observe a disconnect between NLL
and accuracy, which may explain the miscalibration in
\[Figure [4.16](#fig:miscal){reference-type="ref"
reference="fig:miscal"}\]. This disconnect occurs because neural
networks can overfit to NLL without overfitting to the 0-1 loss. We
observe this trend in the training curves of some miscalibrated models.
\[Figure [4.17](#fig:miscal2){reference-type="ref"
reference="fig:miscal2"}\] shows test error and NLL (rescaled to match
error) on CIFAR-100 as training progresses. Both error and NLL
immediately drop at epoch 250, when the learning rate is dropped;
however, NLL overfits during the remainder of the training.
Surprisingly, overfitting to NLL is beneficial to classification
accuracy. On CIFAR-100, test error drops from 29% to 27% in the region
where NLL overfits. This phenomenon renders a concrete explanation of
miscalibration: the network learns better classification accuracy at the
expense of well-modeled probabilities. We can connect this finding to
recent work examining the generalization of large neural networks. Zhang
et al. (2017) observe that deep neural networks seemingly violate the
common understanding of learning theory that large models with little
regularization will not generalize well. The observed disconnect between
NLL and 0-1 loss suggests that these high capacity models are not
necessarily immune from overfitting, but rather, overfitting manifests
in probabilistic error rather than classification
error." [@https://doi.org/10.48550/arxiv.1706.04599]
### Modern Results on Model Calibration
![The ViT, BiT, and MLP-Mixer architectures are well-calibrated and
accurate. *Left.* ECE is plotted against classification error on
ImageNet for various classification models. *Right.* Confidence
distributions and reliability diagrams of various architectures on
ImageNet. "Marker size indicates the relative model size within its
family. Points labeled "Guo et al." are the values reported for
DenseNet-161 and ResNet-152 in Guo et al.
(2017)." [@https://doi.org/10.48550/arxiv.2106.07998] Figure taken
from [@https://doi.org/10.48550/arxiv.2106.07998].](gfx/04_vit.pdf){#fig:vit
width="\\linewidth"}
For more recent models, [@galil2023learn] provides an extensive
calibration analysis. Several MLP-Mixers [@tolstikhin2021mlpmixer]
(fully connected vision models),
ViTs [@https://doi.org/10.48550/arxiv.2010.11929] (vision transformers),
and BiTs [@kolesnikov2020big] (ResNet-based models) are among the most
calibrated *and* accurate models, considering both the NLL loss and the
ECE. In particular, knowledge-distilled variants of these usually
perform better. This disagreement with the previous study shows that
there is no unanimous agreement on the matter of model calibration in
the literature.
ViT and Mixer are [reported to be
well-calibrated](https://arxiv.org/abs/2106.07998) [@https://doi.org/10.48550/arxiv.2106.07998]
in other works as well, as shown in
Figure [4.18](#fig:vit){reference-type="ref" reference="fig:vit"}.
Notably, no recalibration is performed for the Figure. "Several recent
model families (MLP-Mixer, ViT, and BiT) are both highly accurate and
well-calibrated compared to prior models, such as AlexNet or the models
studied by Guo et al. (2017). This suggests that there may be no
continuing trend for highly accurate modern neural networks to be poorly
calibrated, as suggested previously. In addition, we find that a recent
zero-shot model, CLIP, is well-calibrated given its
accuracy." [@https://doi.org/10.48550/arxiv.2106.07998]
Calibration depends a lot on the architecture family. There are huge
differences even between ConvNet-variants.
**Remark**: The decrease in ECE values for recent NN-variants *could*
also be attributed to them being trained on more data. However, the
authors of [@https://doi.org/10.48550/arxiv.2106.07998] find that "Model
size, pretraining duration, and pretraining dataset size cannot fully
explain differences in calibration properties between model families."
(Well-calibratedness has a lot to do with overfitting. Increasing the
number of training samples could result in better ECE on its own.
However, this is apparently not the deciding factor.)
"The poor calibration of past models can often be remedied by post-hoc
recalibration such as temperature scaling (Guo et al., 2017), which
raises the question of whether a difference between models remains after
recalibration. We find that the most recent architectures are better
calibrated than past models even after temperature
scaling." [@https://doi.org/10.48550/arxiv.2106.07998]
### Easy Fix for Better ECE: Temperature Scaling
Let us discuss [Temperature
Scaling](https://arxiv.org/abs/1706.04599) [@https://doi.org/10.48550/arxiv.1706.04599].
For DNN classifiers, one could fix their calibration via post-processing
on the softmax outputs. Suppose that the model output $f(x)$ is the
result of a softmax operation over logits $g(x)$:
$$f(x) = \operatorname{softmax}(g(x)) \in \nR^K.$$ Softmax converts the
logits to parameters of a categorical distribution. We define
temperature scaling with the temperature $T > 0$ as follows:
$$f(x; T) = \operatorname{softmax}(g(x) / T).$$ In words, we divide each
logit value by $T$.
When $T \downarrow 0$, the elements of the argument of the softmax
explode to infinity, and the differences between the $\argmax$ element and the other
elements grow without bound. Thus, the output of softmax, $f(x; T)$,
becomes a one-hot vector. (As the difference grows, we are stressing the
argmax value more and more.)
When $T \rightarrow \infty$, the elements of the argument of the softmax
go to 0. The differences between the elements decrease more and more.
Thus, the output of softmax, $f(x; T)$, becomes uniform.
One can find the $T > 0$ that returns the best ECE score over a
validation set. We let the model's predictive confidence be
$$\left\{\max_k f_k(x_i; T)\right\}_{i = 1, \dots, N_\mathrm{val}}$$
over the validation set and search for the $T > 0$ that minimizes the
ECE. We can perform a grid search over different $T$ values and find the
one that works best.
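A minimal sketch of this search (NumPy; `val_logits` and `val_labels` are hypothetical validation arrays, and `ece_mce` is the helper sketched earlier) grid-searches the temperature that minimizes the validation ECE. Note that dividing the logits by $T$ never changes the $\argmax$, so accuracy is unaffected.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    best_t, best_ece = 1.0, np.inf
    for t in grid:
        probs = softmax(val_logits / t)                                # temperature-scaled softmax
        confs = probs.max(axis=1)                                      # max-prob confidences
        correct = (probs.argmax(axis=1) == val_labels).astype(float)
        ece = ece_mce(confs, correct, n_bins=10)[0]                    # validation ECE for this T
        if ece < best_ece:
            best_t, best_ece = t, ece
    return best_t
```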
**Temperature scaling improves calibration quite dramatically.** Results
are shown in Table [4.2](#tab:temp){reference-type="ref"
reference="tab:temp"}. $T = 1$ usually results in suboptimal ECE
results; the models are not well-calibrated. $T = T^*_\mathrm{val}$
(after performing the search over the val set) results in sub-$2\%$ ECE
values in general, whereas without scaling, many models have ECE values around $4$-$16\%$.
This is a nice and easy fix.[^78]
::: {#tab:temp}
Dataset Model Uncalibrated ($T = 1$) Temp. Scaling ($T = T_\text{val}^*$)
------------------ ----------------- ------------------------ --------------------------------------
Birds ResNet 50 9.19% **1.85%**
Cars ResNet 50 4.3% 2.35%
CIFAR-10 ResNet 110 4.6% 0.83%
CIFAR-10 ResNet 110 (SD) 4.12% **0.6%**
CIFAR-10 Wide ResNet 32 4.52% **0.54%**
CIFAR-10 DenseNet 40 3.28% **0.33%**
CIFAR-10 LeNet 5 3.02% **0.93%**
CIFAR-100 ResNet 110 16.53% **1.26%**
CIFAR-100 ResNet 110 (SD) 12.67% 0.96%
CIFAR-100 Wide ResNet 32 15.0% **2.32%**
CIFAR-100 DenseNet 40 10.37% 1.18%
CIFAR-100 LeNet 5 4.85% **2.02%**
ImageNet DenseNet 161 6.28% **1.99%**
ImageNet ResNet 152 5.48% **1.86%**
SVHN ResNet 152 (SD) 0.44% 0.17%
20 News DAN 3 8.02% 4.11%
Reuters DAN 3 0.85% 0.91%
SST Binary TreeLSTM 6.63% 1.84%
SST Fine Grained TreeLSTM 6.71% 2.56%
: Comparison of temperature scaling with an untuned baseline.
Temperature scaling can lead to a drastic improvement in calibration.
Table adapted from [@https://doi.org/10.48550/arxiv.1706.04599].
:::
## Do we really need proper scoring?
### Ranking Condition
The previous proper scoring rules for the correctness of prediction
demanded that $c(x) = P(L = 1 \mid X = x)$ be their optimal value, i.e.,
that confidences directly give the probabilities of correctness.
Calibration followed a similar principle. Let us now consider slightly
weaker [ranking conditions](https://arxiv.org/abs/1610.02136).
::: center
If $P(L = 1 \mid x_1) > P(L = 1 \mid x_2)$ then $c(x_1) > c(x_2)$.
:::
That is, we want to have the confidence values in the right order.
Instead of requiring $c(x)$ to be equal to the actual probability, we
only require that the ranking is preserved. If this condition holds,
there exists a monotonic calibration function
$g\colon \nR \rightarrow \nR$ such that $g(c(X)) = P(L = 1 \mid X)$ for
input variable $X$. That is, the ranking condition is almost the same as
the calibration condition, up to a monotonic transformation. (We, of
course, would have to find this $g$ as a post-processing step if we
wanted truthful predictive uncertainties.) This is more approachable
than requiring DNNs to output the true confidence values. And it
is, in fact, sufficient for many applications, such as when we filter
out too-uncertain examples via a threshold.
Based on this intuition, people have produced different metrics for
quantifying the ranking condition. Essentially, we have two ingredients:
1. **Confidence estimates.** $c_i := c(x_i) \in \nR$ is the
*unnormalized* confidence value for test sample $x_i$.
2. **Correctness of prediction.**
$L_i := \bone(\argmax_k f_k(x_i) = y_i) \in \{0, 1\}$ for test
sample $x_i$.
Instead of trying to estimate the true predictive uncertainty $p_i$ from
$L_i$ and comparing ranking (we can do this with binning), one may use
the raw binary $L_i$ to benchmark the $c_i$ estimates. In ECE, we binned
the confidence values (restricted to $[0, 1]$) and took the average of
the $L_i$s in the bin, which was our estimate of $p_i$ (very coarse).
Now we simply use the raw binary values and benchmark how predictive the
confidence estimates are for the $L_i$ values per sample.
We turn the task into a binary detection task for $L_i$, where the only
feature is $c_i$. The question is: Can $c_i$ tell us anything about the
prediction correctness?
### Binary Detection Metrics
Given features $c_i$ and target binary labels $L_i$ as well as a
threshold $t \in \nR$, we predict 1 ("correct") when $c_i \ge t$ and 0
when $c_i < t$. This lets us define the following index sets:
$$\begin{aligned}
\text{True positives: }\mathrm{TP}(t) &= \left\{i: L_i = 1 \land c_i \ge t\right\}\\
\text{False positives: }\mathrm{FP}(t) &= \left\{i: L_i = 0 \land c_i \ge t\right\}\\
\text{False negatives: }\mathrm{FN}(t) &= \left\{i: L_i = 1 \land c_i < t\right\}\\
\text{True negatives: }\mathrm{TN}(t) &= \left\{i: L_i = 0 \land c_i < t\right\}\\
\mathrm{Precision}(t) &= \frac{|\mathrm{TP}(t)|}{|\mathrm{TP}(t)| + |\mathrm{FP}(t)|}\\
\mathrm{Recall}(t) &= \frac{|\mathrm{TP}(t)|}{|\mathrm{TP}(t)| + |\mathrm{FN}(t)|}.
\end{aligned}$$ Informally, precision tells us how pure our positive
predictions are at threshold $t$. Out of the positively predicted
samples, how many were correct? Similarly, recall tells us how many of
the actual positive samples in the dataset are recalled (predicted
positive) at threshold $t$.
One can draw a curve for $\mathrm{Precision}(t)$ and
$\mathrm{Recall}(t)$ for all possible thresholds $t$ from $-\infty$ to
$+\infty$ or, for a probability $c_i$, from $0$ to $1$. This is the
*precision-recall curve*, shown in
Figure [4.19](#fig:pr){reference-type="ref" reference="fig:pr"}.
![Example precision-recall curve that showcases a random classifier, a
perfect one, and one in between. Figure taken
from [@steen].](gfx/04_pr.png){#fig:pr width="0.5\\linewidth"}
As we go on the recall axis from left to right, we observe the following
values for precision and recall. First, we predict all samples as
negative. In this case, precision is undefined. Then we recall the
sample with the highest $c_i$ that is actually positive. Recall is
almost 0, and precision is 1. We continue..., and at the last point, we
recall all actual positive samples (i.e., the recall is one). As we
predict everything to be positive, the precision is the fraction of true
positive samples. This point is always on the line of the random
detector.
To summarize this curve, we can compute the area under the
precision-recall curve (AUPR). This summarizes how well the confidence
values $c_i$ predict the correctness labels. For
the perfect detector, $\mathrm{AUPR} = 1$. While we recall all the
actual positive samples, we also never recall actual negative samples.
For a random detector, $\mathrm{AUPR} = P(L = 1)$ where $P(L = 1)$ is
the ratio of positive samples in the dataset. AUPR can be calculated in
two ways: AUPR-Success is the method we discussed above. In AUPR-Error,
we use errors ($L = 0$) as the positive class. Both are often reported
together for predictive uncertainty evaluation.
A drawback of the AUPR is that the random classifier's performance
depends on $P(L = 1)$. For example, if $P(L = 1) = 0.99$ (i.e., the test
set is severely imbalanced), then AUPR is already $99\%$ for a random
detector. It lacks the resolution to see the improvement above the
random detector baseline.
The Receiver Operating Characteristic (ROC) curve fixes this. It
compares the following quantities: $$\begin{aligned}
\mathrm{TPR}(t) = \mathrm{Recall}(t) &= \frac{|\mathrm{TP}(t)|}{|\mathrm{TP}(t)| + |\mathrm{FN}(t)|} = \frac{|\mathrm{TP}(t)|}{|\mathrm{P}|}\\
\mathrm{FPR}(t) &= \frac{|\mathrm{FP}(t)|}{|\mathrm{FP}(t)| + |\mathrm{TN}(t)|} = \frac{|\mathrm{FP}(t)|}{|\mathrm{N}|}.
\end{aligned}$$ Here, FPR tells us how many of the actual negative
samples in the dataset are recalled (predicted positive) at threshold
$t$. This is "1 - the recall for the negative samples."
Similarly to the Precision-Recall curve, one can draw a curve of
$\mathrm{TPR}(t)$ and $\mathrm{FPR}(t)$ for all $t$ from $-\infty$ to
$+\infty$ or, for a probability $c_i$, from $0$ to $1$. This is the *ROC
curve*, shown in Figure [4.20](#fig:roc){reference-type="ref"
reference="fig:roc"}.
![Example ROC curve showing results for a perfect classifier, a random
one, and ones in between. Figure taken
from [@roc].](gfx/04_roc.pdf){#fig:roc width="0.5\\linewidth"}
As we go on the FPR axis from left to right, the FPR and TPR values
change as follows. First, we predict all samples as negative. There, TPR
is 0, and FPR is 0. We continue until the last point, where we predict
all samples as positive. There, TPR is 1, and FPR is 1.
The area under the ROC curve (AUROC) can be computed as a summary
metric. The AUROC has a nice interpretation: It gives the probability
that a correct sample ($L=1$) has a higher certainty $c(x)$ than an
incorrect one. This very much captures our ranking goal. For the perfect
ordering $\text{AUROC} = 1$. And, interestingly, for a random order,
$\text{AUROC} = 0.5$, regardless of $P(L = 1)$. *This makes AUROC the
recommended metric over AUPR, especially on unbalanced datasets.*
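In practice, these ranking metrics are usually computed with standard library routines. The sketch below (scikit-learn; the hypothetical `confs` and `correct` arrays are as before) reports the AUROC as well as AUPR-Success and AUPR-Error.

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def ranking_metrics(confs, correct):
    # confs: (N,) unnormalized confidence scores; correct: (N,) binary correctness labels L_i
    auroc = roc_auc_score(correct, confs)                       # P(correct sample ranked above incorrect one)
    aupr_success = average_precision_score(correct, confs)      # positives: correct predictions (L = 1)
    aupr_error = average_precision_score(1 - correct, -confs)   # positives: errors (L = 0), ranked by low confidence
    return auroc, aupr_success, aupr_error
```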
## $c(x)$ as Non-Predictive Uncertainty
So far, we have expected $c(x)$ to be an estimate of the predictive
(un)certainty -- whether the model is going to get the answer right or
wrong. $c(x)$, the *confidence estimate*, was required to be a good
representation of the likelihood of getting the answer right ($L = 1$).
However, we have discussed two more equally important uncertainties: the
epistemic and the aleatoric components. We can design benchmarks for
each of these sources separately, i.e., measure the quality of a
particular $c(x)$ as the predictor for other factors (i.e., not
predictive uncertainty anymore). Here, we impose no restrictions on the
estimator $c(x)$ we might use, only that it returns a probability
$\in [0, 1]$ for a binary prediction task. Possibilities for
non-predictive uncertainty benchmarks are listed below.
**Is the sample $x$ an OOD sample?** In this case, we can treat $c(x)$
as an OOD detector. This is not perfectly aligned with predictive
uncertainty. Even if a sample is OOD, the model might get the answer
confidently right, and even if it is ID, the model can be unconfident.
It is rather a measure of epistemic uncertainty.
**Is the sample $x$ severely corrupted?** Corruption is related to
predictive uncertainty, but they are not perfectly aligned: The level of
corruption in an input sample is only one source of uncertainty. This
aspect has close ties to aleatoric uncertainty.
**Does the sample $x$ admit multiple answers?** This is -- by definition
-- aleatoric uncertainty.
As can be seen, these questions are more closely related to identifying
particular aspects of uncertainties tied to either epistemic or
aleatoric sources.
### $c(x)$ as an OOD Detector
We write $Y$ for the binary variable indicating
- $Y = 1$ if $x$ is from outside the training distribution.
- $Y = 0$ if $x$ is from inside the training distribution.
Then, we would expect high $P(Y = 1)$ for higher uncertainty values
$1 - c(x)$. That is, $c(x)$ shall be a good estimator for epistemic
uncertainty. $c(x)$ can be, again, treated as a feature for the binary
prediction of OOD-ness. We may then evaluate $c(x)$ for its OOD
detection performance with AUPR or AUROC. These are evaluation metrics
we already know from predictive uncertainty that are generally used for
*ranking* uncertainties. In the literature for OOD detection or general
uncertainty estimation, we often see OOD detection performances reported
in terms of area under curve metrics.
### $c(x)$ as a Multiplicity Detector
This is a much less popular choice. We write $Y$ for the binary variable
indicating
- $Y = 1$ if the true label for $x$ has multiple possibilities, maybe
because of inherent ambiguity in the task or due to corruption.
- $Y = 0$ if there exists a unique label for $x$.
Then, we would expect high $P(Y = 1)$ for higher uncertainty values
$1 - c(x)$. That is, $c(x)$ shall be a good estimator for aleatoric
uncertainty (whether the sample accommodates more than one answer).
$c(x)$ can be, again, treated as a feature for the binary prediction of
aleatoric uncertainty. We may then evaluate $c(x)$ for its multiplicity
detection performance with AUPR or AUROC.
### Summary of Evaluation Methods so far for Uncertainty
For *predictive uncertainty* (whether the model is going to get the
prediction right), we have seen (1) proper scoring rules such as log
probability and Brier score, (2) metrics based on model calibration such
as ECE, MCE, and reliability diagrams (that are more intuitive metrics),
and (3) ranking (or "weak calibration") using AUROC or AUPR. The third
approach uses different thresholds for the retrieval of correctly
predicted samples.
If we only care about *epistemic uncertainty*, it makes sense to
consider the downstream proxy task of OOD detection to measure the
quality of our uncertainty estimates. We can measure OOD detection
performance using AUROC or AUPR. Plotting the ROC or precision-recall
curves can also be insightful.
For *aleatoric uncertainty*, one might want to look at
multiplicity/corruption detection. Detection performance can be, again,
measured by AUROC or AUPR.
## Estimating Epistemic Uncertainty
As we have seen, epistemic uncertainty means we are unsure about our
prediction because several models could fit the training data (because
we have not experienced enough training data to distinguish the correct
from the incorrect model.) There are two possibilities:
1. The size of our training set is too small, and so the variance of
our estimator is too high.
2. The training data distribution does not cover some meaningful
regions in the input space; there are some underexplored areas.
Epistemic uncertainty has a close connection with Bayesian machine
learning. A great tool for dealing with multiple possibilities in maths
is probability theory:
$$P(\theta \mid \cD) \propto P(\theta)P(\cD \mid \theta) = P(\theta) \prod_{i = 1}^N P(x_i \mid \theta).$$
A posterior distribution over the parameter space is the Bayesian way of
saying, "This space accommodates multiple possible solutions after
observing the training set and taking our prior beliefs into
consideration." A "wider" distribution means higher uncertainty
regarding the true model. (The one that "generated" the dataset.)
It can be instructive to consider the "input space point of view": We
are adding more and more observations to underexplored regions of the
input space. These give more and more supervision: We are narrowing down
the possible range of $\theta$s based on the observations. This should
ideally be happening with Bayesian ML as we observe more data.
### Space of Model Parameters $\theta$
This space is at the center of our attention in Bayesian ML. The notion
of parameters $\theta$ is often interchangeably used with weights $w$
and, sadly, also with functions, models, or hypotheses $h$. Using
Bayesian inference
$$P(\theta \mid \cD) \propto P(\theta)P(\cD \mid \theta) = P(\theta) \prod_{i = 1}^N P(x_i \mid \theta),$$
we are narrowing down our hypothesis space from the wide prior space by
observing more and more data until we arrive at the final posterior. We
hope this distribution contains the true model (the one that actually
"generated" the dataset) with high probability.
Figure [4.21](#fig:bayesian){reference-type="ref"
reference="fig:bayesian"}(b) is the ideal visualization of what should
happen with Bayesian ML.
![Different scenarios for optimization in the hypothesis space. "(b) By
representing a large hypothesis space, a model can contract around a
true solution, which in the real world is often very sophisticated. (c)
With truncated support, a model will converge to an erroneous solution.
(d) Even if the hypothesis space contains the truth, a model will not
efficiently contract unless it also has reasonable inductive
biases." [@https://doi.org/10.48550/arxiv.2002.08791] Figure taken
from [@https://doi.org/10.48550/arxiv.2002.08791].](gfx/04_bayesian.png){#fig:bayesian
width="0.9\\linewidth"}
### Approximate Posterior Distribution Families
In the previous section, we discussed why a posterior over models is a
great way to represent multiple possibilities. In most cases, however,
the true posterior (i.e., the one given by Bayes' rule) over our
weights/models is intractable. (The prior specification is also often
left implicit.) Therefore, we have to make some approximations to our
true posterior. We need to define the distributional format of our
posterior approximations; in other words, the approximate posterior
distribution family. The posterior $P(\theta \mid \cD)$ can be thought
of as an infinite set of models (using sensible priors). We denote our
approximation by $Q_\phi(\theta)$, where $\phi$ are the parameters of
this parametric distribution. The posterior is often approximated
without explicitly specifying the prior.
::: definition
Dirac Delta Measure The Dirac delta is a generalized function over the
real numbers whose value is zero everywhere except at zero. For our
purposes, it represents the fact that we only have one possible
parameter configuration $\theta$ in our posterior. Formally, it is a
measure. Without going too much into Lebesgue integration theory, the
gist is that it acts like a Kronecker delta.
**Note**: This definition only acts as an intuitive description of the
Dirac measure. Interested readers should refer to measure theoretical
treatments of the notion.
:::
$Q_\phi(\theta)$ can be, e.g., ...
- **...a generic multimodal distribution.** For example, it can be a
Mixture of Gaussians (MoG), but any other distribution can be
chosen. A MoG can approximate any continuous distribution arbitrarily
well if we allow an arbitrary number of modes.
- **...a uni-modal Gaussian distribution.** Many people like to use
this for computational simplicity and tractability.
- **...a sum of Dirac delta distributions.** Some people use such
semi-deterministic $Q_\phi(\theta)$s.
- **...a single Dirac delta distribution.** This takes us back to
deterministic ML. A deterministic posterior approximation means a
single point estimate for $\theta$ (MLE, MAP).
Of course, under any sensible prior belief and problem setup, the true
posterior $P(\theta \mid \cD)$ will never be a sum of Dirac deltas.
Nevertheless, we might use it as an *approximation* to the true
posterior. In this section, we will always *approximate* the true
posterior $P(\theta \mid \cD)$ under either an implicit or an explicit prior
distribution.
#### Deterministic vs. Bayesian ML
There is a whole spectrum between probabilistic Bayesian ML and
deterministic ML. We may also recover the original deterministic ML
formulation by choosing our approximate posterior family to be the
family of Dirac deltas. Thus, the Bayesian framework is a generalization
of deterministic ML. One could express various forms of posterior
uncertainty by considering different approximate posterior distribution
families.
*Deterministic ML* first optimizes a single model (parameter set) over
the training set, $\theta^*(\cD)$. Then, for a test sample, it predicts
the label as $$P(y \mid x, \cD) = P(y \mid x, \theta^*(\cD)).$$ We use
only this single model to produce the output for the input of interest.
From the Bayesian perspective, this is equivalent to having a Dirac
posterior. As epistemic uncertainty arises from the existence of
multiple plausible models, but we only consider a single one in
deterministic ML, we cannot represent epistemic uncertainty using
deterministic ML (i.e., we treat it as 0).
*Bayesian ML* finds a distribution of models, $Q_\phi(\theta \mid \cD)$,
the approximate posterior over the models after observing the training
data. Think of Bayesian ML as training an infinite number of models
simultaneously (whenever our approximate posterior does not only
accommodate a finite set of models).
#### Quantifying Epistemic Uncertainty
Now, we have the most important ingredient to represent epistemic
uncertainty: a set of models. However, measuring the diversity of this
set directly is hard. Therefore, people usually look at the averaged
prediction of the models, formalized as follows. For a test sample,
Bayesian ML predicts the label using Bayesian Model Averaging
(BMA)/marginalization:
$$P(y \mid x, \cD) = \int P(y \mid x, \theta) \underbrace{Q_\phi(\theta)}_{\approx P(\theta \mid \cD)}\ d\theta = \nE_{Q_\phi(\theta)}\left[P(y \mid x, \theta)\right].$$
Thus, we take the average prediction from the approximate posterior
distribution (the voting from an "infinite number of models") at test
time. This can be further approximated as
$$P(y \mid x, \cD) \overset{\mathrm{MC}}{\approx} \frac{1}{M} \sum_{i = 1}^M P(y \mid x, \theta^{(i)}), \qquad \theta^{(i)} \sim Q_\phi(\theta).$$
The entropy $\nH(P(y \mid x, \cD))$ or the max-prob for classification
$\max_k P(Y = k \mid x, \cD)$ are popular choices to quantify epistemic
uncertainty.
**Intuition of BMA.** We expect the outputs of all models in the
posterior to be similar on the training data, as we explicitly train the
models on the training set. When we have a test sample in the training
data region, we expect $P(y \mid x, \theta)$ (i.e., the vector of
probabilities in classification) to be similar across the models, as the
sample will probably lie on the same side of the decision boundaries of
the models (which gets tricky to think about in multi-class
classification). The models will also be confident in the predictions
(up to aleatoric uncertainty), having been trained on similar samples.
Therefore, the BMA output $P(y \mid x, \cD)$ will show high confidence
(e.g., it will have max-prob = $99\%$). When we have a test sample in an
underexplored region, we expect the individual $P(y \mid x, \theta)$s to
be divergent, as nothing forces the models' decision boundaries to agree
in these regions (as we have not trained on samples from these
regions).[^79] Therefore, the models give divergent answers (i.e., the
max-prob indices are different). The BMA output will show low
confidence: e.g. max-prob = $59\%$. Averaging/integrating gives us a
mixture, and the maximal value of the mixture will be more smoothed out.
Even if the individual models are overconfident, the average output will
not be. By averaging, the arg max can even become different from all
individual arg maxes. For example, in the case of a discrete set of
models,
$$\operatorname{avg}\left((0.51, 0.01, 0.48), (0.01, 0.51, 0.48)\right) = (0.26, 0.26, 0.48).$$
To provide further intuition for why the BMA can represent epistemic
uncertainty: Models are sure about different things; when we average
their outputs, it makes the ensemble more unsure. Thus, we get better
epistemic uncertainty prediction. **Note**: The BMA output still
contains the aleatoric uncertainty represented by the individual models
-- the BMA represents predictive uncertainty (both epistemic and
aleatoric uncertainty) in the most precise sense.
### Ensembling
Since ensembling is hugely successful in practice, we will focus a bit
more on it in the next sections. Ensemble learning is usually done as
follows (popularized by Lakshminarayanan et
al. [@https://doi.org/10.48550/arxiv.1612.01474]).
1. Select $M$ different random seeds. These are different starting
points for the optimization in the parameter space.
2. Train the $M$ models regularly, using either a bagged dataset for
each model (where we sample with replacement from the original
training set) or the original training set.
As the loss landscape is highly non-convex, we usually end up with
different local minima depending on where we start. Therefore, we
usually get a diverse set of models. Random seeds also control the noise
on the objective function (loss landscape) itself, not only the starting
points on "the" landscape. The seeds influence ...
- **...the formation of batches of training samples for SGD.** If we
change the seed, we change the batching, as the reshuffling of the
dataset is seeded differently.
- **...the random components of the data augmentation process.**
Therefore, the actual loss landscape is also changed. We almost
always perform data augmentation.
- **...the random network components, such as Dropout, DropConnect, or
Stochastic Depth.** In DropConnect (2013) [@pmlr-v28-wan13], instead
of dropping out activations (neurons), we drop connections between
neurons in subsequent layers. Stochastic Depth
(2016) [@https://doi.org/10.48550/arxiv.1603.09382] shrinks the
network's depth during training, keeping it unchanged during
testing. It randomly drops entire ResBlocks during training:
$$H_l = \mathrm{ReLU}(b_l f_l(H_{l - 1}) + \mathrm{Id}(H_{l - 1}))$$
where $b_l$ is a binary random variable.
Changing the random seed, therefore, changes many things, which usually
encourages enough diversity in our ensemble. Using bagging to obtain
separate training sets for each model further encourages diversity.
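As a toy sketch of this recipe (scikit-learn's small `MLPClassifier` as a stand-in for a deep network; `X` and `y` are hypothetical training arrays), we simply train $M$ members with different seeds, optionally on bagged datasets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ensemble(X, y, num_members=5, bagging=True, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for m in range(num_members):
        if bagging:
            idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
            X_m, y_m = X[idx], y[idx]
        else:
            X_m, y_m = X, y
        # random_state plays the role of the random seed (initialization, batching, ...)
        members.append(MLPClassifier(random_state=m, max_iter=500).fit(X_m, y_m))
    return members
```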
#### BMA with Ensembles
In *model ensembling*, we train several deterministic models on the same
(or subsampled) data simultaneously. In this case, our posterior
approximation becomes a mixture of $M$ Dirac deltas, where $M$ is the
number of models in our ensemble. We claim that this is Bayesian.
$$Q_\phi(\theta) = Q_{\theta^{(1)}, \dots, \theta^{(M)}}(\theta) = \frac{1}{M} \sum_{m = 1}^M \delta(\theta - \theta^{(m)}).$$
After training the $M$ models, BMA boils down to taking the average over
the ensemble members' predictions $$\begin{aligned}
P(y \mid x, \cD) &= \int P(y \mid x, \theta) P(\theta \mid \cD)\ d\theta\\
&\approx \int P(y \mid x, \theta) Q_\phi(\theta)\ d\theta\\
&= \int P(y \mid x, \theta) \frac{1}{M} \sum_{m = 1}^M \delta(\theta - \theta^{(m)})\ d\theta\\
&= \frac{1}{M}\sum_{m = 1}^M \int P(y \mid x, \theta) \delta(\theta - \theta^{(m)})\ d\theta\\
&= \frac{1}{M}\sum_{m = 1}^M P(y \mid x, \theta^{(m)}).
\end{aligned}$$ If the reader is not well versed in measure theory, the
last equality can be considered a part of the Dirac measure's
definition.[^80] This corresponds to averaging the predictions of
individual models. $y$ can be a scalar value in regression, one
particular class in a classification setting, or even the whole class
distribution.
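As a concrete sketch (NumPy; `member_probs` is a hypothetical array holding the softmax outputs of the individual members), BMA with such a Dirac-mixture posterior reduces to averaging the members' predictive distributions; the entropy or max-prob of the average can then serve as an uncertainty score.

```python
import numpy as np

def bma_predict(member_probs, eps=1e-12):
    # member_probs: (M, N, K) softmax outputs of the M ensemble members
    bma = member_probs.mean(axis=0)                        # (N, K) averaged predictive distribution
    entropy = -np.sum(bma * np.log(bma + eps), axis=1)     # per-sample uncertainty score
    maxprob = bma.max(axis=1)                              # alternative confidence estimate
    return bma, entropy, maxprob
```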
Previously, we have discussed an intuitive explanation for why the BMA
can represent epistemic uncertainty. On the side, ensembles also often
provide better accuracy. The intuition here is that single predictors
make different mistakes and overfit differently. This noise cancels out
by averaging, and we get a better test accuracy. This phenomenon is
formalized and widely used in statistics: Readers might find the various
techniques for bootstrap aggregation (or bagging) interesting.
Let us collect the pros and cons of ensembles.
- **Pro**:
- Conceptually simple -- run the training algorithm $M$ times and
average outputs.
- Applicable to a wide range of models -- from linear regression
to ChatGPT.
- Parallelizable -- if we have a lot of computational resources,
we can train multiple models simultaneously on different cluster
nodes (GPUs).
- Performant -- ensembles are not only able to represent epistemic
uncertainty but are also often more accurate.
- **Contra**:
- Ensembles do not realize the full potential of Bayesian ML (no
infinite number of models, no connectivity between the models).
- Space and time complexities scale linearly with $M$. If we have
a limited number of GPUs, we must wait until the previous model
finishes training (the same holds even for evaluation). Total compute
scales linearly even if we parallelize; wall-clock time might not. To
summarize, this does not scale nicely. However, we often share
some weights to increase the number of models we include in the
ensemble (e.g., to infinity). We will discuss several approaches
to training an "infinite number of models" below.
Finally, we note that ensembling roughly approximates the true posterior
under the prior implied by the weight initialization scheme, which acts as our implicit
prior. In other methods, we have no such connections, and the (implicit)
prior remains undisclosed.
### Dropout {#sssec:dropout}
Having a combinatorial number of models during training sounds familiar.
We have used dropout for model training for quite some time. When using
dropout [@JMLR:v15:srivastava14a], we sample the dropout masks in every
iteration, so a different model is being trained at every iteration. The
models are, of course, very correlated. In every iteration, we train our
net with different neurons missing. This is an ensemble of many models.
We train each of them for just a couple of steps, but they are so
similar that optimizing one model translates over to improving the other
models too. On the spectrum of Bayesian methods, dropout is between the
sum of Diracs (training a few models) and the variational approach (that
trains an infinite number of models). Some people say dropout is
Bayesian.
The dropout objective is
$$\frac{1}{N} \sum_{n = 1}^N \log P(y_n \mid x_n, s \odot \theta).$$
This is a simple CE loss over the training dataset, but we turn on/off
each weight dimension randomly in each iteration:
$s^{(i)} \sim \mathrm{Bern}(s \mid p)$. This is very similar to the BBB
data term. However, we draw $\theta(s) = s \odot \theta$ from a huge
discrete, categorical distribution, not a Gaussian.
What makes this interesting to create ensemble predictions is that we
can also use dropout at inference time, as introduced in the paper
"[Dropout as a Bayesian Approximation: Representing Model Uncertainty in
Deep
Learning](https://arxiv.org/abs/1506.02142)" [@https://doi.org/10.48550/arxiv.1506.02142].
Eventually, any configuration of turning on/off parameters may fit the
given training data well (resulting in low NLL loss). This does not have
to be the case for non-training data: there will be disagreements
between models for OOD samples. The method is good for detecting such
OOD samples. Although we have always trained many models simultaneously
using dropout, we have not taken advantage of that during inference. To
apply dropout at inference time (test time), we do BMA across different
Bernoulli mask choices (different weight samples) and average the
predictions: $$\begin{aligned}
P(y \mid x, \cD) &= \int P(y \mid x, \theta)P(\theta \mid \cD)\ d\theta\\
&\approx \int P(y \mid x, \theta)Q_\phi(\theta)\ d\theta\\
&\overset{\mathrm{MC}}{\approx} \frac{1}{K} \sum_{k = 1}^K P(y \mid x, \theta^{(k)})\qquad s^{(k)} \sim \mathrm{Bern}(s \mid p), \theta^{(k)} = s^{(k)} \odot \theta.\\
\end{aligned}$$
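A sketch of MC dropout at inference time (PyTorch; `model` is any network containing `torch.nn.Dropout` layers, a hypothetical stand-in) keeps the dropout layers in training mode while averaging the softmax outputs of several stochastic forward passes.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, num_samples=20):
    model.eval()
    # re-enable stochastic dropout masks at test time
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
    )
    return probs.mean(dim=0)   # BMA over sampled dropout masks
```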
### Evaluation of Ensembling and Dropout in Practice
Let us discuss a paper on [evaluating ensembling and dropout in
practice](https://arxiv.org/abs/1612.01474) [@https://doi.org/10.48550/arxiv.1612.01474].
#### Results of Ensembling
We start with Figure [4.22](#fig:ensembling3){reference-type="ref"
reference="fig:ensembling3"}. In distribution, the spread of the output
categorical distribution is minimal, no matter how many models we have
in the ensemble. The prediction is nearly always close to a one-hot
vector. The spread here is measured by the entropy of the distribution.
Entropy 0 means the categorically distributed random variable is
constant, $p$ is a true one-hot vector. High entropy means $p$ is close
to being uniform. Out of distribution, a single model still produces
close to 0 entropy values. However, the ensemble over more and more
models has increasing entropy on the OOD samples.
![Entropy values on ID and OOD datasets with a varying number of models
in the ensemble. Ensembling results in higher entropy values on OOD
samples. Base figure taken
from [@https://doi.org/10.48550/arxiv.1612.01474].](gfx/04_ens3.pdf){#fig:ensembling3
width="0.5\\linewidth"}
::: {#tab:ensembling4}
---- ------------- ------------- ------- -----------------
M    Top-1 error (%)   Top-5 error (%)   NLL     Brier Score ($\times10^{-3}$)
1 22.166 6.129 0.959 0.317
2 20.462 5.274 0.867 0.294
3 19.709 4.955 0.836 0.286
4 19.334 4.723 0.818 0.282
5 19.104 4.637 0.809 0.280
6 18.986 4.532 0.803 0.278
7 18.860 4.485 0.797 0.277
8 18.771 4.430 0.794 0.276
9 18.728 4.373 0.791 0.276
10 18.675 4.364 0.789 0.275
---- ------------- ------------- ------- -----------------
: Quantitative results of ensembling on ImageNet. All considered
metrics improve with more models. Both the NLL and Brier scores
mix calibration with accuracy. Table taken
from [@https://doi.org/10.48550/arxiv.1612.01474].
:::
Finally, let us consider the quantitative results of
Table [4.3](#tab:ensembling4){reference-type="ref"
reference="tab:ensembling4"} from an ImageNet experiment. Ensembling
also works at the ImageNet scale. Adding more members to the ensemble
decreases error and increases accuracy. (Training on NLL also improves
accuracy, not just uncertainty estimates.) Test NLL and Brier scores
also improve by increasing the number of models. One could conclude that
we obtain better predictive and aleatoric uncertainties. However, it
could also be the case that the improvements in these scores are just
due to the higher accuracy. Drawing conclusions from proper scoring rule
values is, therefore, tricky.
#### Comparison of Ensembling and Dropout
![*Top.* Evaluation of epistemic uncertainty estimation methods on the
MNIST dataset using a 3-layer MLP. *Bottom.* Evaluation on the SVHN
dataset using a VGG-style convnet. In both cases, ensembling improves
both accuracy and proper scoring metrics. Dropout plateaus earlier and
gives suboptimal results. AT: Adversarial training added. R: Random
signed vector added (baseline, no difference). Figure taken
from [@https://doi.org/10.48550/arxiv.1612.01474].](gfx/04_ens1.pdf){#fig:ensembling
width="\\linewidth"}
![*Top.* Evaluation of epistemic uncertainty estimation methods on the
MNIST dataset using a 3-layer MLP. *Bottom.* Evaluation on the SVHN
dataset using a VGG-style convnet. In both cases, ensembling improves
both accuracy and proper scoring metrics. Dropout plateaus earlier and
gives suboptimal results. AT: Adversarial training added. R: Random
signed vector added (baseline, no difference). Figure taken
from [@https://doi.org/10.48550/arxiv.1612.01474].](gfx/04_ens2.pdf){#fig:ensembling
width="\\linewidth"}
Ensembling and dropout seem to be plausible ways to represent epistemic
uncertainty. Let us now focus on the top part of
Figure [4.24](#fig:ensembling){reference-type="ref"
reference="fig:ensembling"}. We take the NLL and the Brier Score of the
true label.
**Ensembling.** As we add more and more nets to the ensemble, we see a
decrease in the classification error (or, equivalently, an increase in
accuracy). This is not surprising, as everyone is doing ensembling to
get better accuracies. Ensembling with more models also seems to produce
better aleatoric and predictive uncertainty estimation. We can conclude
this because, for multi-class classification, the log probability
scoring rule and the multi-class Brier score are strictly proper scoring
rules for both *predictive* and *aleatoric* uncertainty estimation using
max-prob. Therefore, by measuring the log-likelihood and the multi-class
Brier score, we are also measuring how far away we are from perfect
aleatoric uncertainty prediction. **Note**: By training on NLL, we
encourage each model to give correct predictive uncertainties on the
training set, and we also ensemble to get correct epistemic
uncertainties. The models usually generalize better by ensembling, and
we also get better predictive uncertainties on the test samples.
Ensembling seems to work for a small dataset and a simple neural
network.
**Dropout.** Sampling more and more nets from dropout seems to plateau
quite early and at notably worse values than what we can achieve by
ensembling. These days, MC dropout is treated as a method that does not
really work. Many people are critical of it.
**Note**: Aleatoric uncertainty cannot be reduced by ensembling or using
dropout: It is completely independent of the model. However, the model
posterior might become better at *modeling* the aleatoric uncertainty.
Let us now turn to the bottom part of
Figure [4.24](#fig:ensembling){reference-type="ref"
reference="fig:ensembling"} that evaluates a VGG-style ConvNet on SVHN
(street view house numbers). Ensembling is also scalable to large models
and "large" datasets. We can use ensembling for any model: We simply
have to average the outputs.
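As a sketch (assuming a list `models` of independently trained member networks), the averaging and the entropy-based spread measure discussed above can be computed as follows.

``` python
import torch
import torch.nn.functional as F


@torch.no_grad()
def ensemble_predict(models, x):
    # MC estimate of the BMA with M fixed members: average the softmax outputs.
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])  # (M, B, C)
    return probs.mean(dim=0)                                        # (B, C)


def predictive_entropy(p, eps=1e-12):
    # Entropy of the averaged categorical prediction, used as the spread measure.
    return -(p * (p + eps).log()).sum(dim=-1)
```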
### Training an Infinite Number of Models -- Bayes By Backprop
Now, we consider a method for training an infinite number of models,
called [Bayes By Backprop](https://arxiv.org/abs/1505.05424). Training
an infinite number of models is possible when the approximate posterior
is a continuous distribution (e.g., Gaussian). Expressing infinite
possibilities with a finite number of parameters can be easily achieved
using parameterized probability distributions. This work explicitly
models $P(\theta)$ and approximates the true posterior under this prior
and the training likelihood by
$$P(\theta \mid \cD) \approx \cN\left(\theta \mid \mu^*(\cD), \Sigma^*(\cD)\right) =: Q_\phi(\theta),$$
where \* denotes the $\mu$ and $\Sigma$ values attained by training on
dataset $\cD$. We simply model the mean and variance, assuming the
posterior is approximately Gaussian. Instead of training $\theta$
directly, we are training $\mu$ and $\Sigma$ on $\cD$. $\theta$s are
just samples from the Gaussian. One can choose $P(\theta)$ arbitrarily.
However, to keep things closed-form, one usually also chooses a
Gaussian.
Of course, we know that the true posterior under the chosen prior is likely
not a Gaussian. It is usually much more complex. Nevertheless, we may
still search for the best Gaussian describing the posterior. This is
called *variational approximation/inference*. We minimize the "distance"
between our true posterior and the Gaussian approximation:
$$\min_{\mu, \Sigma} d\left(\cN\left(\mu(\cD), \Sigma(\cD)\right), P(\theta \mid \cD)\right).$$
A popular choice for measuring the divergence (not distance!) between
two distributions is the Kullback-Leibler (KL) divergence. With that
choice, our problem becomes
$$\min_{\mu, \Sigma} \mathrm{KL}\left(\cN\left(\mu(\cD), \Sigma(\cD)\right)\ \Vert\ P(\theta \mid \cD)\right).$$
Training this directly is impossible, as we do not know the true
posterior. However, we can still derive an equivalent optimization
problem that does not require us to calculate the exact posterior.
Figure [4.25](#fig:gaussian){reference-type="ref"
reference="fig:gaussian"} illustrates this optimization problem.
![Informal illustration of the Bayes By Backprop optimization problem.
The procedure aims to find the best Gaussian approximation of the true
posterior. The divergence used between the two is the KL
divergence.](gfx/04_kl.pdf){#fig:gaussian width="0.4\\linewidth"}
Using the fact that $$\begin{aligned}
\log \frac{1}{P(\theta \mid \cD)} &= -\log P(\theta \mid \cD)\\
&= -\log \frac{P(\cD \mid \theta)P(\theta)}{P(\cD)}\\
&= -\log P(\cD \mid \theta)P(\theta) + \log P(\cD)\\
&= \log \frac{1}{P(\cD \mid \theta)P(\theta)} + C
\end{aligned}$$ and $$\begin{aligned}
\log P(\cD \mid \theta) &\overset{\mathrm{IID}}{=} \log \prod_{n = 1}^N P(x_n, y_n \mid \theta)\\
&= \log \prod_{n = 1}^N \left(P(y_n \mid x_n, \theta)P(x_n \mid \theta)\right)\\
&= \sum_{n = 1}^N \left(\log P(y_n \mid x_n, \theta) + \log P(x_n)\right) & (x_n \indep \theta)\\
&= \sum_{n = 1}^N \log P(y_n \mid x_n, \theta) + \underbrace{\sum_{n = 1}^N\log P(x_n)}_{C'},
\end{aligned}$$ we rewrite our training objective as $$\begin{aligned}
&\mathrm{KL}\left(\cN\left(\mu(\cD), \Sigma(\cD)\right) \ \Vert\ P(\theta \mid \cD)\right)\\
&= \int \cN\left(\theta \mid \mu, \Sigma\right) \log \frac{\cN\left(\theta \mid \mu, \Sigma\right)}{P(\theta \mid \cD)}\ d\theta\\
&= \int \cN\left(\theta \mid \mu, \Sigma\right) \log \frac{\cN\left(\theta \mid \mu, \Sigma\right)}{P(\cD \mid \theta)P(\theta)}\ d\theta + C\\
&= \int \cN\left(\theta \mid \mu, \Sigma\right) \log \frac{\cN\left(\theta \mid \mu, \Sigma\right)}{P(\theta)}\ d\theta - \int \cN\left(\theta \mid \mu, \Sigma\right) \log P(\cD \mid \theta)\ d\theta + C\\
&= \mathrm{KL}\left(\cN(\mu, \Sigma) \ \Vert\ P(\theta)\right) - \underbrace{\nE_{\theta \sim \cN(\mu, \Sigma)} \log P(\cD \mid \theta)}_{\mathrm{CE}}\ +\ C\\
&= \mathrm{KL}\left(\cN(\mu, \Sigma) \ \Vert\ P(\theta)\right) - \sum_n \nE_{\theta \sim \cN(\mu, \Sigma)} \log P(y_n \mid x_n, \theta) + C\\
&\overset{\mathrm{MC}}{\approx} \mathrm{KL}\left(\cN(\mu, \Sigma) \ \Vert\ P(\theta)\right) - \frac{1}{K}\sum_{n = 1}^N \sum_{k = 1}^K \log P(y_n \mid x_n, \theta^{(k)}) + C
\end{aligned}$$ where $\theta^{(k)} \sim \cN(\mu, \Sigma)$ and we
collapse all terms into $C$ that do not contain $\mu$ and $\Sigma$, the
parameters we optimize. In the MC sampling, usually, we take $K = 1$ for
training. This is usually fine because we are MC estimating the expected
gradient anyway, with a small batch size (SGD). This expectation
approximation can also be made coarse, as noise in SGD was shown to be a
regularizer and promote better
generalization [@https://doi.org/10.48550/arxiv.2101.12176].
Our final optimization problem is thus
$$\min_{\mu, \Sigma} \mathrm{KL}\left(\cN(\mu, \Sigma) \ \Vert\ P(\theta)\right) - \frac{1}{K}\sum_n \sum_k \log P(y_n \mid x_n, \theta^{(k)}) \qquad \theta^{(k)} \sim \cN(\mu, \Sigma).$$
The first term is the prior term, the regularizer. The second term is
the data term, the likelihood. We took conceptual, rigorous steps to
justify what we are deriving, but this equation makes sense on its own
as well.
This is already a convenient loss function, but we want to make it *more
DNN-friendly*. We have complete freedom to choose the prior for the KL
term. We only need to encode our beliefs through our prior, which can be
anything. (This, of course, influences the true posterior but not the
true model that generated the data. We want the true model to have high
density in the true posterior.) In the parameter space, there are many
symmetries; equivalent solutions are spread across the entire space.
Regardless of which part of the space we choose, it is very likely that
we will find a suitable solution locally. This might serve as a weak
justification of the choice of a standard normal distribution as the
prior:[^81] $$P(\theta) := \cN\left(\theta \mid 0, I\right).$$ We also
restrict our posterior to Gaussians with diagonal covariance matrices:
$$\Sigma = \operatorname{diag}(\sigma^2).$$ (The full covariance matrix
with full degrees of freedom would introduce many computational
problems.) Thus, we approximate $P(\theta \mid \cD)$ with a
heteroscedastic diagonal Gaussian. Then the KL divergence can be given
in closed form, as it is between two multivariate Gaussians:
$$\operatorname{KL}\left(\cN(\mu, \operatorname{diag}(\sigma^2))\ \Vert\ \cN(\theta \mid 0, I)\right) = \frac{1}{2} \sum_i \left[\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1\right].$$
The only remaining reason why the loss is not DNN-friendly is that it
does not depend on $\mu$ and $\Sigma$ straightforwardly. We have to
sample from a distribution parameterized by $\mu, \Sigma$, which is not
differentiable with respect to $\mu, \Sigma$ in the naive formulation. The reparameterization
trick is used here to detach $\mu$ and $\Sigma$ from the randomness in
the approximate posterior. We compute the model parameter via
$$\theta = \mu + \sigma \odot \epsilon,$$ where $\odot$ means pointwise
multiplication and $\epsilon \sim \cN(0, I)$. We only have to sample
$\epsilon$s (the random part which does not depend on $\mu$, $\Sigma$)
and push it through the above transformation to obtain the $\theta$
values. This separates the randomness and backpropagation.
Lastly, we need to ensure that the $\sigma$ vector is always positive.
It cannot be just an unbounded parameter, like usual. We counteract this
by parameterizing $\rho$ instead (which is a normal `nn.Parameter`),
which may take on negative values too, and setting
$$\sigma := \operatorname{softplus}(\rho) = \log (1 + \exp(\rho)) > 0$$
where all operations are element-wise. The actual softplus function also
has a hyperparameter $\beta$ -- we keep everything minimal here.
Therefore, we obtain a closed-form, differentiable loss for
$\mu, \sigma$ without any constraints. An example PyTorch code for BBB
in a network with a single linear layer is given in
Listing [\[lst:bbblinear\]](#lst:bbblinear){reference-type="ref"
reference="lst:bbblinear"}. We consider multi-class logistic regression
in the BBB formulation. It can be trained with backpropagation and SGD.
::: booklst
``` {#lst:bbblinear .python}
import torch
import torch.nn as nn
import torch.nn.functional as F


class BBBLinear(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # Sizes: number of weights in the model.
        self.mu = nn.Parameter(torch.empty(input_dim, output_dim).uniform_(-0.1, 0.1))
        self.rho = nn.Parameter(torch.empty(input_dim, output_dim).uniform_(-3, 2))

    def forward(self, x):
        eps = torch.randn_like(self.mu)  # requires_grad is not propagated; K = 1
        sigma = F.softplus(self.rho)
        theta = self.mu + sigma * eps    # reparameterization trick
        return x @ theta                 # logits

    def compute_loss(self, logits, targets):
        # K = 1; negative sum of log-probs (data term) plus closed-form KL (prior term)
        neg_log_likelihood = F.cross_entropy(logits, targets, reduction="sum")
        sigma = F.softplus(self.rho)
        kl_prior = 0.5 * (self.mu ** 2 + sigma ** 2 - torch.log(sigma ** 2) - 1).sum()
        return kl_prior + neg_log_likelihood
```
:::
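A minimal training sketch for this listing might look as follows; the dataloader `loader` and the input/output dimensions (784/10, as for flattened MNIST) are illustrative assumptions.

``` python
model = BBBLinear(input_dim=784, output_dim=10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for x, y in loader:                        # `loader` is a placeholder dataloader
    logits = model(x)                      # one posterior sample per step (K = 1)
    loss = model.compute_loss(logits, y)   # KL prior term + NLL data term
    optimizer.zero_grad()
    loss.backward()                        # gradients flow to mu and rho
    optimizer.step()
```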
Variational approximation (which justifies what we are doing
theoretically) consists of a prior KL term and a likelihood term. For
the likelihood, we sample a parameter $\theta$ from the infinite
possibilities of models at every iteration. This is the "secret sauce"
for training an infinite number of models simultaneously while sharing
weights ($\mu, \Sigma$) and saving computation. To separate the sampling
operation from BP (i.e., to have gradient flow to the parameters of the
approximate posterior), we use the reparameterization trick. Frequently,
we need to clip parameter values to a certain range -- we use softplus
to ensure $\sigma > 0$.
After training the model with the given formulation, we obtain the
optimal parameters $\mu^*, \Sigma^*$ for our Gaussian approximation. We
then compute the BMA based on the learned approximate posterior as
$$\begin{aligned}
P(y \mid x, \cD) &= \int P(y \mid x, \theta)P(\theta \mid \cD)\ d\theta\\
&\approx \int P(y \mid x, \theta)Q_{\mu^*, \Sigma^*}(\theta)\ d\theta\\
&= \int P(y \mid x, \theta)\cN(\theta \mid \mu^*, \Sigma^*)\ d\theta\\
&\overset{\mathrm{MC}}{\approx} \frac{1}{K} \sum_{k = 1}^K P(y \mid x, \theta^{(k)})
\end{aligned}$$ where $\theta^{(k)} \sim \cN(\mu^*, \Sigma^*)$.
**Note**: All models with high mass in our posterior make sense. This is
a huge statement, as we can sample infinitely many models, meaning we
have an entire nice *region* of models in the parameter space. At test
time, BBB works the same as ensembles. However, we always draw a new set
of $\theta$s, and we truly integrate over all $\theta$s of our
approximate posterior. In contrast, the $\theta$s in ensembles are
fixed.
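As a sketch, reusing the imports and the `BBBLinear` layer defined above, this test-time averaging could be implemented as follows; the number of samples `K` is a free choice.

``` python
@torch.no_grad()
def bbb_predict(model, x, K=64):
    # Each forward pass draws a fresh theta ~ N(mu, diag(sigma^2)), so
    # averaging the softmax outputs MC-approximates the BMA integral above.
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(K)])
    return probs.mean(dim=0)
```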
#### Gaussian posterior approximations are restrictive... Why is this better than ensembles?
Gaussian posterior approximation is a different way to model the
posterior than the sum of Diracs. The training procedure and the form of
the approximation (meaning of the posterior space) are different.
- **Pro**: We can think about confidence intervals for the approximate
posterior (in such a high-D space still; one number for each
weight). It is also meaningful to ensure that a certain area around
$\mu$ is always a solution. If we care about getting a region in the
parameter space where everything is a solution, then this has more
edge. This is a huge volume (a subset of a million-D space) compared
to ensembles (that have no volume, as they are just points).
- **Contra**: We have to specify an explicit prior with which the
problem remains tractable.
In general, this is not a *better* solution than the ensemble method.
The ensemble method does not give a variational approximation and
basically samples from the true posterior under the weight-initialization
prior.
#### An overview of training an infinite number of models
One possible recipe for training an infinite number of models is as
follows.
1. Sample $\theta^{(k)} \sim \cN(\mu, \Sigma)$ and train that model
($\mu, \Sigma$) with the likelihood and the KL term. This is
training an infinite number of models represented by
$\cN(\mu, \Sigma)$ at once.
2. After training, we trust $\cN(\mu^*, \Sigma^*)$ (the approximate
posterior) to represent a good set of plausible models (of infinite
cardinality) that generally work well for the training data. If
those models disagree on some sample $x$ (i.e.,
$K^{-1}\sum_k P(y \mid x, \theta^{(k)})$ has high uncertainty), then
$x$ is likely to be alien to these models. (The epistemic
uncertainty is high, the sample is likely to be OOD and from an
unseen region.)
::: information
Regularization Term Why is there no $\lambda$ term in the BBB objective
formulation to balance the effect of the two terms? We could add one; that
would be a more general formulation. Here, we did not augment the derived
formula with any hyperparameters. However, this balance is already
controlled to some extent by the prior variance (not exactly; check its
effect in the KL term). This is a nice gain from the probabilistic formulation.
:::
::: information
Choosing a Diagonal $\Sigma$ in BBB Choosing a diagonal covariance
matrix for our Gaussian approximation can be questionable if our true
posterior is elongated in some directions. This is illustrated in
Figure [4.26](#fig:elongated){reference-type="ref"
reference="fig:elongated"}.
We generally cannot know whether this will happen in advance without
extensive investigation in random directions. We force our variational
posterior to lie under the true posterior because we use the *reverse KL
divergence* as our objective (true posterior is the second argument of
KL), which "squishes" our approximate posterior into regions of the true
posterior with high density. With the *forward KL divergence*
(approximate posterior is the second argument of KL), the exact opposite
happens: we want to have high density with our approximate posterior
wherever the true posterior has high density.
:::
![Gaussian posterior approximation using the reverse Kullback-Leibler
divergence. The resulting approximate posterior only fits a small
high-density region of the true
posterior.](gfx/04_varpost.pdf){#fig:elongated}
::: information
BBB looks just like VAEs. What is different? These are called
variational because they use variational inference. They consider a
complex posterior (over an unobserved (not a training sample) variable
$\theta$ or $z$, intermediate latent values) and use a tractable family
$Q$ to approximate it. In VAEs we want to maximize $P(x \mid \theta)$,
and we approximate the intractable $P(z \mid x, \theta)$ with
$Q_\phi(z \mid x) = \cN(z \mid \eta_\phi(x), \Lambda_\phi(x))$. The
general idea is the same: estimate the posterior over the latent
variable given the training data. Build a KL distance between the true
posterior and the approximate posterior. (Eventually, we get a KL term
against the prior plus a data term. We train the parameters of the
approximate posterior. In
VAEs [@https://doi.org/10.48550/arxiv.1312.6114], we also train the
decoder $\theta$, not just the encoder $\phi$.) However, in VAEs, we
optimize the ELBO (evidence lower bound); here, we optimize an MC
approximation of the true objective. Another difference: In BBB, we are
performing variational inference over the parameters we have to train,
but in VAEs, we are doing inference over the intermediate outputs $z$,
not the parameters. In VAEs, we use MLE to learn the parameters $\theta$
and $\phi$. In BBB, our entire problem is about the variational
approximation of the true posterior.
Variational approximations happen in many contexts. As an analogy, we
can train a DNN for some problem, and it is always the same
story...Well, it is, but several things are different.
VAEs are from ICLR 2014. BBB is from ICML 2015. BBB actually cites the
VAE paper.
:::
### Weight Space
We recommend looking at loss landscape visualizations. Imagining these
when discussing Bayesian ML and loss landscapes, in general, makes the
topics a lot easier to interpret. It is nice to share these
visualizations in our heads. In particular, we refer to two videos, [The
Loss landscape](https://www.youtube.com/watch?v=aq3oA6jSGro) and [Loss
Landscape Explorer 1.1](https://www.youtube.com/watch?v=As9rW6wtrYk). We
can see a visualization of traveling on the loss landscape to a local
minimum. The latter video shows the loss landscape explorer 1.1, which
can explore the loss landscape live on real data.
The way these visualizations are created is discussed in the FAQ session
of the [webpage](https://losslandscape.com/faq/) of the authors.
::: center
"How to deal with so many dimensions? It is very challenging to
visualize a very large number of dimensions. If we want to understand
the shape of the loss landscapes, somehow we need to reduce the number
of dimensions. One of the ways in which we can do that is by using a
couple of random directions in space, random vectors that have the same
size of our weight vectors. Those two random directions compose a plane.
And that plane slices through the multidimensional space to reveal its
structure in 2 dimensions. If we then add a 3rd vertical dimension, the
loss value at each point in that plane, we can then visualize the
structure of the landscape in our familiar 3 dimensions. (Visualizing
the Loss Landscape of Neural Nets, Li et al.)" [@losslandscape]
:::
It is easy to make such landscape visualizations look nice by
"cheating": One can pick random directions until they get something that
is visually appealing, then report only these as cherry-picked results.
Of course, this is academic malpractice, but its possibility should
always be considered, especially when reviewing novel works.
### Training a Curve of an Infinite Number of Models
![Training loss surface of a Resnet-164 model on CIFAR-100. *All* models
on the obtained curves have low training loss. The training loss is an
$L_2$-regularized CE loss. Two different parametric curves are shown
after optimization. Figure taken
from [@https://doi.org/10.48550/arxiv.1802.10026].](gfx/c100_resnet_b_3.pdf){#fig:curve
width="\\textwidth"}
![Training loss surface of a Resnet-164 model on CIFAR-100. *All* models
on the obtained curves have low training loss. The training loss is an
$L_2$-regularized CE loss. Two different parametric curves are shown
after optimization. Figure taken
from [@https://doi.org/10.48550/arxiv.1802.10026].](gfx/c100_resnet_p_3.pdf){#fig:curve
width="\\textwidth"}
![Piecewise uniform distribution over a piecewise linear curve, treated
as the approximate posterior
$Q_\phi(\theta)$.](gfx/04_lineseg.pdf){#fig:lineseg
width="0.7\\linewidth"}
Based on loss landscape visualizations, we can find creative new ways to
train an infinite number of models. For example, we can [parameterize a
curve](https://arxiv.org/abs/1802.10026) [@https://doi.org/10.48550/arxiv.1802.10026]
between two trained models in the parameter space. This is illustrated
in Figure [4.28](#fig:curve){reference-type="ref"
reference="fig:curve"}. Previously, we fit a Gaussian around a point in
the parameter space (BBB). We might also think about training more
global connections between two, possibly faraway points in the parameter
space. We can get an x-y cut of the parameter space, where the training
loss values are indicated by colors. Here, x and y indicate two selected
axes from the parameter space. They are determined by the third trained
point of the curve.
Here, one trains not just a single $\theta$, but a continuous set of
$\theta$s along the curve. [^82] On the plots, we see an infinite number
of models that perform well on the training set. Along the curves, the
training loss is always very low. Perhaps all points along the curve are
good solutions for the training set. This is also Bayesian, as we are
training an infinite number of models according to a learned parametric
approximate posterior.
We train the curve above as follows. (This is the same story as before.)
1. Train two independent models $\theta_1$ and $\theta_2$ on different
seeds. They are fixed throughout and treated as constants. They are
the endpoints of our parametric curve.
2. We parameterize a curve via a third model $\phi$. We define the
curve via the line segments $\theta_1 - \phi$ and $\phi - \theta_2$.
$$\theta_\phi(t) = \begin{cases} 2(t\phi + (0.5 - t)\theta_1) & \text{if } t \in [0, 0.5) \\ 2((t - 0.5)\theta_2 + (1 - t)\phi) & \text{if } t \in [0.5, 1]\end{cases},$$
which is a bijection between $\theta_\phi(t)$ and $t$ (if
$\theta_1 \ne \theta_2$).
3. We model a piecewise uniform distribution over our parametric curve
embedded in a high-D space. This is shown in
Figure [4.29](#fig:lineseg){reference-type="ref"
reference="fig:lineseg"}.[^83]
4. At each iteration, sample one parameter from the piecewise uniform
distribution over the curve at a time. We sample
$t \sim \operatorname{Unif}[0, 1]$; then, $\theta_\phi(t)$ is a
sample of the approximate posterior $Q_\phi(\theta)$.
5. $\phi$ is optimized such that any model on the curve has low
training loss. The trained curve is supposed to be a subset of the
solution set of the training loss function. We use the
reparameterization trick to separate randomness and backprop to the
parameters of the distribution that describe the curve.
The optimization problem is as follows (which is typical BNN training).
$$\min_\phi \nE_{\theta \sim Q_\phi(\theta)}\left[-\frac{1}{N}\sum_n \log P(y_n \mid x_n, \theta)\right].$$
The objective function can be rewritten as follows using the
reparameterization trick:
$$\nE_{t \sim \operatorname{Unif}[0, 1]} \left[-\frac{1}{N}\sum_n \log P(y_n \mid x_n, \theta_\phi(t))\right]$$
with the curve defined above. The curve is piecewise linear in $t$ (not
differentiable at $t = 0.5$) but entirely linear in $\phi$ ($t$ is just
a fixed parameter then), so it is differentiable in $\phi$ everywhere.
It is also easy to see that the procedure is differentiable after
selecting $t$, as that is the only source of randomness. Sampling $t$
and obtaining the actual parameter $\theta_\phi(t)$ are well separated
by design. We do not have to use the reparameterization trick, it is
"already used".
![Sampling uniformly from the curve during training ensures that all
models on the curve have a low training loss. The 2D training loss
surface slice is plotted in which the parameterized curve resides. The
authors argue that the parameters in the middle of the curve tend to
*generalize* better than the endpoints (i.e., their test loss is lower
than those of the endpoints, for being embedded in the middle of a wider
basin of the loss surface). Base figure taken
from [@https://doi.org/10.48550/arxiv.1802.10026].](gfx/04_lowloss.pdf){#fig:lowloss
width="0.6\\linewidth"}
The training ensures that every point $\theta$ on the curve has low
training loss. This is empirically verified in
Figure [4.30](#fig:lowloss){reference-type="ref"
reference="fig:lowloss"}, where the training loss values are plotted.
All losses are below $\approx 0.11$ on the curve. A general observation
is that almost all pairs of independently trained models
$(\theta_1, \theta_2)$ for DNNs are connected through a third point
$\phi$ in a low-loss "highway" that we can easily find. This gives an
interesting intuition for the loss landscape: Most solutions in the DL
landscape are connected by some piecewise linear curve. This is not so
surprising: We have millions/billions of dimensions to choose from. We
can likely find a 2D cut of the loss in which there exists a parametric
curve parameterized by $\phi$ that connects the two endpoints with a low
training loss.
The NN has a vast capacity (many dimensions) to accommodate an infinite
number of solutions globally rather than around a certain point.
Previously, we have shown that it is possible to train an infinite set
of models around a specific point locally (Gaussian posterior). Here, we
are expanding that idea to global traversal of the parameter space. This
was the first work that showed that it is possible.
After training the model with the given formulation, we compute the BMA
based on the learned approximate posterior as $$\begin{aligned}
P(y \mid x, \cD) &= \int P(y \mid x, \theta)P(\theta \mid \cD)\ d\theta\\
&\approx \int P(y \mid x, \theta)Q_{\phi}(\theta)\ d\theta\\
&\overset{\mathrm{MC}}{\approx} \frac{1}{K} \sum_{k = 1}^K P(y \mid x, \theta_\phi(t^{(k)}))
\end{aligned}$$ where $t^{(k)} \sim \operatorname{Unif}[0, 1]$.
![Mode connectivity visualization. The direct path between the two local
minima contains high-loss parameter configurations as well. However, we
can find a line connecting the two where *all* configurations result in
low loss. Figure taken
from [@losslandscape].](gfx/04_loss.jpg){#fig:mode width="\\linewidth"}
A visualization of mode connectivity is given in
Figure [4.31](#fig:mode){reference-type="ref" reference="fig:mode"}. We
have two solutions that can be connected by some curve in the parameter
space. On the curve, *test accuracy is also nearly constant*. This has
strong implications for generalization.
![Negative log-likelihood of the diagonal training set and unbiased test
set with different labels. *Left.* On the training set, the found curve
of models has a low loss. One of the endpoints is a color-biased model,
the other is orientation-biased. Therefore, one can obtain a curve of
models that interpolates between two models with different biases.
*Middle.* When considering a test set with color labels, the
color-biased endpoint performs much better, as expected. However, there
are many models on the curve that also perform well. There is a
relatively quick shift between color-biased and orientation-biased
models on the curve. There are also more color-biased models (as the
blue area is larger). *Right.* On the same test set with orientation
labels, the orientation-biased endpoint performs well, and also a small
region of models on the curve (corresponding to the blue region). Base
figure taken
from [@https://doi.org/10.48550/arxiv.2110.03095].](gfx/04_shortcut.pdf){#fig:surprise
width="\\linewidth"}
::: information
Is this method Bayesian? This is not a purely Bayesian approach: The
true posterior $P(\theta \mid \cD)$ is approximated by the density
$Q_\phi(\theta)$, which is obtained by maximum likelihood (we maximize
the likelihood of the dataset with respect to $\phi$). So, we do not consider any prior
beliefs over the parameters, and indeed it is probably unlikely that the
posterior would be anything close to being a curve if we chose our prior
as something like a Gaussian. This work does not care about the prior.
It samples some initial models, trains them, fits the curve, and treats
it as a posterior approximation. It still has many nice properties and
allows interesting insights into the parameter space and the loss
surface.
However, Bayesian in this book refers to training infinitely many
models, not performing Bayesian inference using the prior + likelihood
formulation. For a true Bayesian, the prior matters a lot. For the
purpose of this book, it does not. We also see ensembling as a Bayesian
method. All we are doing is approximating the otherwise intractable true
posterior in various ways, sometimes taking an explicit prior into
account, sometimes not. This is a common interpretation in the field,
and it is hard to connect to any rigorous Bayesian theory.
:::
We are **not** using a variational approximation of the true posterior:
We do not have an explicit prior, and the training objective also does
not take any prior into account, as we are performing maximum likelihood
estimation over the third parameter of the curve.
::: information
Further Surprise We can find [further
surprises](https://arxiv.org/abs/2110.03095) [@https://doi.org/10.48550/arxiv.2110.03095]
when considering models biased to different cues. This is shown in
Figure [4.32](#fig:surprise){reference-type="ref"
reference="fig:surprise"}. Even heterogeneous pairs of models
$(\theta_1, \theta_2)$ can be connected with some curve, where
heterogeneous means that the two solutions are biased to different
attributes.
In the training data, color and orientation labels coincide; we have a
diagonal dataset (Section [2.7.1](#ssec:spurious){reference-type="ref"
reference="ssec:spurious"}). We can use either of the cues to get low
training loss. Here, $\theta_\mathrm{color}$ refers to a model biased to
color (i.e., the usual solution we get), and
$\theta_\mathrm{orientation}$ corresponds to a model biased to
orientation (which is an unusual solution). **Note**: We have two sets
of inputs: $X_\mathrm{train}$ and $X_\mathrm{test}$. However, for
$X_\mathrm{test}$, we consider two labeling schemes: one using color and one
using orientation as the task cue. Therefore, the loss landscapes are
different for these three datasets in total. The parameter x-y cut is
shared across these datasets.
The axes are chosen as follows. The starting point is two models with
different biases. We train a third model using the formulation above. We
obtain three points in a million-D space that determine a unique plane
(2D subspace) that contains all three models. The other dimensions are
hidden in the plots. The negative log-likelihood is plotted for all
models (parameter configurations) in this plane.
On the left, it is possible to connect these very different solutions
with a curve on the training set landscape: The loss for the training
data is very low for the entire curve of models, as for the training
set, it does not matter which cue our model chooses.
In the middle, when we consider the color test set and the *same* curve,
we have many models with a low loss, i.e., many models on the curve are
biased towards color (as the blue region is rather large). However, an
entire region of models that had low training loss suddenly has high
test loss (right, orange part of the middle plot): This shows that these
models learned correlations that are spurious under the color labeling scheme.
On the right, when we consider the orientation test set (i.e., we only
change the labels compared to the middle plot) and the *same* curve, we
have a lot fewer models on the curve with a low loss, i.e., models that
are biased towards orientation: The blue (low-loss) region is rather
small. The yellow region shows a transition from color-biased models to
orientation-biased models, and all color-biased models have a high loss
(red region) on the orientation task.
It is nice to see that the space of color-biased solutions is much
larger than that of the orientation-biased solutions. This probably
explains why if we simply train a model, it is more likely to get a
color-biased solution than solutions biased to other cues. This is a
volumetric POV for explaining why color is a more favored bias for the
models than other cues.
**Another example**: Frogs being the foreground cue and swamps being the
background cue. The training samples consist of frogs in swamps. The
middle plot would then correspond to pictures of swamps that do not
necessarily contain a frog (unbiased dataset). Here, looking at the
foreground does not solve the problem. Many more models are biased
toward the background than the foreground.
:::
### Stochastic Weight Averaging
We can exploit the randomness in SGD. This is another cheap source of
Bayesian ML. The method is called [Stochastic Weight
Averaging](https://arxiv.org/abs/1902.02476) [@https://doi.org/10.48550/arxiv.1902.02476]
(SWA). An informal overview is given in
Figure [4.33](#fig:sgdrand){reference-type="ref"
reference="fig:sgdrand"}. In SGD, we usually use an LR schedule. When
the LR is sufficiently reduced, solutions are not moving too much around
a certain point in space. We treat the set of points (models) towards
the end of training (i.e., when the model roughly converged and the
models are indeed plausible under the data) as samples from some
Gaussian (see Figure [4.35](#fig:sgdrand2){reference-type="ref"
reference="fig:sgdrand2"}). This is the whole idea behind SWA with a
Gaussian (SWAG).
![Informal overview of Stochastic Weight Averaging. We give an
approximate posterior by considering parameter configurations from the
later 25% of the epochs. Not all of these models have necessarily
converged. Figure taken
from [@andrewgw].](gfx/04_sgdrand.pdf){#fig:sgdrand
width="0.4\\linewidth"}
SGD thus inherently trains a large number of models. The SGD trajectory
is noisy because of the small batches. The training's final few
iterations (epochs) can be treated as samples from the approximate
posterior distribution. This is the MCMC point of view of training and
sampling, first introduced in "[Bayesian Learning via Stochastic
Gradient Langevin
Dynamics](https://www.stats.ox.ac.uk/~teh/research/compstats/WelTeh2011a.pdf)" [@welling2011bayesian],
published way before this paper, in 2011. SGLD is an MCMC method to
train and sample a posterior. The intuition is the same as what we
discuss here. We treat the current procedure as if we were MCMC sampling
the posterior (because of the noise from SGD) that is determined by the
loss landscape. (If we do not regularize, we only have an uninformative
prior, and the loss landscape is the negative log-likelihood.)
A visualization of SWAG is shown in
Figure [4.35](#fig:sgdrand2){reference-type="ref"
reference="fig:sgdrand2"}. Based on the mean and variance of the models
of the last couple of epochs, we give a Gaussian approximation. Now, we
do not use a variational approximation to get this Gaussian: We do not
minimize KL divergences.
![ "**\[Left\]:** Posterior joint density surface in the plane spanned
by eigenvectors of SWAG covariance matrix corresponding to the first and
second largest eigenvalues and **Right:** the third and fourth largest
eigenvalues. All plots are produced using PreResNet-164 on CIFAR-100.
The SWAG distribution projected onto these directions fits the geometry
of the posterior density remarkably
well." [@https://doi.org/10.48550/arxiv.1902.02476] Figure taken
from [@https://doi.org/10.48550/arxiv.1902.02476].
](gfx/c100_resnet110_swag_2d_01_big_font.pdf){#fig:sgdrand2
width="\\textwidth"}
![ "**\[Left\]:** Posterior joint density surface in the plane spanned
by eigenvectors of SWAG covariance matrix corresponding to the first and
second largest eigenvalues and **Right:** the third and fourth largest
eigenvalues. All plots are produced using PreResNet-164 on CIFAR-100.
The SWAG distribution projected onto these directions fits the geometry
of the posterior density remarkably
well." [@https://doi.org/10.48550/arxiv.1902.02476] Figure taken
from [@https://doi.org/10.48550/arxiv.1902.02476].
](gfx/c100_resnet110_swag_2d_23_big_font.pdf){#fig:sgdrand2
width="\\textwidth"}
The method's assumption is that the posterior is approximately
Gaussian:[^84] $$\begin{aligned}
Q(\theta) &\approx P(\theta \mid \cD)\\
Q(\theta) &= \cN(\theta \mid \mu(\cD), \Sigma(\cD)).
\end{aligned}$$
The Gaussian parameters are computed from the parameters of the last $L$
epochs (= iterations): $\theta_1 \dots, \theta_L$. $$\begin{aligned}
\mu(\cD) &= \frac{1}{L} \sum_l \theta_l\\
\Sigma(\cD) &= \frac{1}{L} \sum_l \theta_l \theta_l^\top - \left(\frac{1}{L} \sum_l \theta_l\right)\left(\frac{1}{L} \sum_l \theta_l\right)^\top.
\end{aligned}$$
#### SWAG is not so scalable.
The problem with the above formulation is the full empirical covariance
matrix: For a mid-sized network with a few million parameters, computing
and storing this matrix becomes infeasible.
*SWAG-Diag* uses a diagonal approximation of SWAG. The only difference
is how the covariance matrix is approximated. As expected from its name,
SWAG-Diag uses a diagonal approximation:
$$\Sigma(\cD) = \operatorname{diag}\left(\frac{1}{L}\sum_l \theta_l^2 - \left(\frac{1}{L} \sum_l \theta_l\right)^2\right),$$
where the squaring operations are element-wise.
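A sketch of the SWAG-Diag moment computation and of sampling from the resulting Gaussian might look as follows; collecting the flattened snapshots (e.g., with `parameters_to_vector`) once per epoch is assumed to happen in the training loop.

``` python
import torch
from torch.nn.utils import parameters_to_vector


def swag_diag_moments(thetas):
    # thetas: list of flattened parameter snapshots from the last L epochs,
    # e.g. parameters_to_vector(model.parameters()).detach().clone() per epoch.
    stacked = torch.stack(thetas)               # (L, num_params)
    mu = stacked.mean(dim=0)
    var = (stacked ** 2).mean(dim=0) - mu ** 2  # elementwise second moment minus squared mean
    return mu, var.clamp_min(1e-12)


def sample_swag_diag(mu, var):
    # One posterior sample theta ~ N(mu, diag(var)) for BMA at test time.
    return mu + var.sqrt() * torch.randn_like(mu)
```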
When using SWAG-Diag, we do not need to train $M$ different models (like
in an ensemble setup), nor do we need to calculate a full covariance
approximation (like vanilla SWAG). We only need to train normally and
give a Gaussian approximation based on the last few epochs. This is very
easy and comes at almost no cost. We can, e.g., do it on ImageNet, or
could do it for ChatGPT too. A comparison of SWAG and SWA with other
methods is shown in Figure [4.36](#fig:cal_curv){reference-type="ref"
reference="fig:cal_curv"}. SGD corresponds to the standard training of a
single model. Unlike a reliability diagram, the plots directly show the
deviation from the line of perfect calibration. Therefore, closer to 0
is better. SWAG-Diag is sadly very similar to SGD regarding the
reliability diagram -- SWAG is much better than SWAG-Diag, even on
ImageNet. It seems that SWAG-Diag only scales better computationally;
its calibration results do not keep up.
![ "Reliability diagrams for WideResNet28x10 on CIFAR-100 and transfer
task; ResNet-152 and DenseNet-161 on ImageNet. Confidence is the value
of the max softmax output. \[\...\] SWAG is able to substantially
improve calibration over standard training (SGD), as well as SWA.
Additionally, SWAG significantly outperforms temperature scaling for
transfer learning (CIFAR-10 to STL), where the target data are not from
the same distribution as the training
data." [@https://doi.org/10.48550/arxiv.1902.02476]. Figure taken
from [@https://doi.org/10.48550/arxiv.1902.02476].](gfx/calibration_curves.pdf){#fig:cal_curv
width="\\textwidth"}
### On the Principledness of Bayesian Approaches
Bayesian approaches look principled. They *are* principled, given that
lots of assumptions are actually true:
- We have a sensible prior that does not make learning infeasible
    (which happens, e.g., when the true model (parameter configuration) is
    outside the support of the prior) or inefficient (which happens, e.g.,
    when the true model is in the tail, so we need a huge dataset to have
    high mass at the true model in the approximate posterior).
- The posterior follows the assumed distribution (e.g., a Gaussian).
Of course, the posterior will seldom be truly Gaussian. This is a
huge assumption.[^85]
In high-dimensional parameter spaces (millions/billions), it is
challenging to guarantee those criteria. To ensure that our posterior is
concentrated around the true model, we need many samples (which is a
foundational problem, not a shortcoming of approximations). To recover
the true posterior, we need it to be in the approximate family. Even to
*verify* correctness, we would need many samples from the true posterior
(an exponentially scaling number in the number of dimensions),
especially for complex distributions. This is infeasible for deep
learning.
## Non-Bayesian Approaches to Epistemic Uncertainty: Measuring Distances in the Feature Space
We have seen that we can give epistemic uncertainty estimates (by
measuring, e.g., the variance of the predictions) when training an
infinite (or large) number of models. In principle, however, we do not
require training an infinite number of models. Let us remember our basic
requirement for epistemic uncertainty: $c(x)$ is expected to be low when
$x$ is away from seen examples (OOD). Hence, we can also try to estimate
epistemic uncertainty by measuring the distance between test sample $x$
and training samples in the feature space.
### Mahalanobis Distance
Let us discuss "[A Simple Unified Framework for Detecting
Out-of-Distribution Samples and Adversarial
Attacks](https://arxiv.org/abs/1807.03888)" [@https://doi.org/10.48550/arxiv.1807.03888].
We want to measure the closeness of a test sample $x$ to one of the
classes in the feature space for OOD detection. To give a feature
representation to each class, we consider the training samples in the
feature space (e.g., the penultimate layer of DNNs, with dimensionality
$\approx$ 1000). This is illustrated in
Figure [4.37](#fig:distance){reference-type="ref"
reference="fig:distance"}.
![Feature space representation of training samples, where classes are
encoded by color. *Left cross.* Test features are close to training
sample features of class 1 (one of the clusters). This is an
in-distribution (ID) test sample. *Right cross.* Test features are not
close to training sample features of any class (neither of the
clusters). This is an OOD test sample. Base figure taken
from [@https://doi.org/10.48550/arxiv.1807.03888].](gfx/04_dist.pdf){#fig:distance
width="\\linewidth"}
In Figure [4.37](#fig:distance){reference-type="ref"
reference="fig:distance"}, we computed the distance of our test sample
to all training samples. As we do not want to keep all training sample
features for future reference, we compute the mean and covariance for
each class in the feature space based on all samples in the training
set, then approximate each class by a heteroscedastic, non-diagonal
Gaussian: $$\begin{aligned}
\mu_k &= \frac{1}{N_k} \sum_{i: y_i = k} f(x_i)\\
\Sigma_k &= \frac{1}{N_k} \sum_{i: y_i = k} (f(x_i) - \mu_k)(f(x_i) - \mu_k)^\top.
\end{aligned}$$
The authors of [@https://doi.org/10.48550/arxiv.1807.03888] also
consider "tied cov" to simplify computations by unifying the covariance
across classes: $$\begin{aligned}
\Sigma &= \frac{1}{N} \sum_k N_k \Sigma_k\\
&= \frac{1}{N} \sum_k \sum_{i: y_i = k} (f(x_i) - \mu_k)(f(x_i) - \mu_k)^\top.
\end{aligned}$$ This is *not* the same as calculating the covariance
matrix for the entire dataset, as the individual class means are
preserved. This is the weighted average of all covariance matrices where
the weights are $N_k / N$. Every class has a different number of
samples. We then measure the **Mahalanobis distance** between a test
sample $x$ and the Gaussian for class $k$ as
$$M(x, k) = (f(x) - \mu_k)^\top \Sigma^{-1}(f(x) - \mu_k).$$
::: information
Interpretations of the Mahalanobis Distance There are two ways to think
about the Mahalanobis distance. (1) It is roughly the NLL of the test
sample given class $k$ (up to a constant). (2) It is the $L_2$ distance
between sample $x$ and the class mean, weighting every dimension by the
precision (inverse covariance) matrix. This is a distorted $L_2$
distance that weights directions with large precision (small variance)
more. Both interpretations are useful to keep in mind.
:::
Then we define the confidence measure $c(x)$ based on the smallest
Mahalanobis distance to a Gaussian: $$c(x) := - \min_k M(x, k).$$
According to this definition, $c(x)$ is low when $x$ is OOD (i.e.,
$\min_k M(x, k)$ is high) and analogously $c(x)$ is high when $x$ is ID
(i.e., $\min_k M(x, k)$ is low).
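A compact sketch of fitting the class Gaussians with a tied covariance and of evaluating $c(x)$ is given below; the feature matrix `feats`, the label vector `labels`, and the jitter `eps` are illustrative assumptions.

``` python
import torch


def fit_class_gaussians(feats, labels, num_classes, eps=1e-6):
    # feats: (N, D) training features f(x_i); labels: (N,) class indices.
    d = feats.shape[1]
    mus, sigma = [], torch.zeros(d, d)
    for k in range(num_classes):
        fk = feats[labels == k]
        mu_k = fk.mean(dim=0)
        mus.append(mu_k)
        centered = fk - mu_k
        sigma += centered.t() @ centered                 # accumulates N_k * Sigma_k
    sigma = sigma / feats.shape[0] + eps * torch.eye(d)  # tied covariance (+ jitter)
    return torch.stack(mus), torch.linalg.inv(sigma)     # class means, precision matrix


def mahalanobis_confidence(x_feat, mus, precision):
    diffs = mus - x_feat                         # (C, D)
    m = (diffs @ precision * diffs).sum(dim=-1)  # M(x, k) for every class k
    return -m.min()                              # c(x) = -min_k M(x, k)
```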
**Note**: This method was designed for detecting OOD samples. It did not
consider aleatoric uncertainty, which is also present in the estimates.
The field is now aware that it can also influence predictive
uncertainty. One could, e.g., measure aleatoric uncertainty as the ratio
of distances of the two closest distributions. When it is approximately
one, we have high aleatoric uncertainty, because we are split between
the classes.
#### Results of Mahalanobis Distance
To discuss the Mahalanobis distance's ability to detect OOD samples, we
consider Table [\[tab:mah\]](#tab:mah){reference-type="ref"
reference="tab:mah"}. This showcases the detection accuracy of OOD
samples under various metrics. AUPR in and out correspond to whether the
ID or OOD class is considered positive. The max-prob confidence measure
is not as well suited for OOD detection. The performance in detecting
OOD samples is considerably better for the Mahalanobis distance in the
feature space. This is, of course, just one example. One should not
conclude that the Mahalanobis distance is always a better confidence
measure than max-prob; it merely is in this particular setup.
### Other types of distances than Mahalanobis: RBF kernel
We discuss the work "[Uncertainty Estimation Using a Single Deep
Deterministic Neural
Network](https://arxiv.org/abs/2003.02037)" [@https://doi.org/10.48550/arxiv.2003.02037]
to highlight another distance-based uncertainty estimator.
Here, instead of computing the Mahalanobis distance to the class Gaussians,
we first compute the $L_2$ distance between the test sample and the centroid of
class $k$, $\mu_k = \frac{1}{N_k} \sum_{i: y_i = k} f(x_i)$:
$$\begin{aligned}
d(x, k) = \Vert f(x) - \mu_k \Vert_2^2.
\end{aligned}$$ Then, we compute the RBF kernel value for class $k$ as
$$K_k(f(x), \mu_k) = \exp\left(-\frac{d(x, k)}{2\sigma^2}\right)$$ where
$\sigma$ is a hyperparameter. This is a special case of the Mahalanobis
distance where the covariance is isotropic (hence the name "radial"
basis function), and we take the squared $L_2$ norm.
The kernel value has a nice property: $$K_k(f(x), \mu_k) \in (0, 1]$$
where higher values indicate greater similarity. This is more
interpretable than the Mahalanobis distance, and it also has a nice
interpretation as a probability. All $\sigma$ does is to control the
temperature of this distribution. Finally, we define our confidence
level as $$c(x) := \max_k K_k(f(x), \mu_k).$$ Low confidence indicates
an OOD sample: $c(x)$ can be interpreted as the probability of $x$ *not*
being OOD. Conveniently, $c(x) \in (0, 1]$, thus, we can apply a proper
scoring rule and train using the resulting criterion. In the derivation
below, we exclude the case of $K_k(f(x), \mu_k) = 1$ and also simplify
notation to just $K_k$. The negative log probability scoring rule for
the max-RBF similarity is given by $$\begin{aligned}
\label{eq:loss}
\cL = \begin{cases} - \log \max_k K_k & \text{if } Y_{\argmax_k K_k} = 1 \\ -\log(1 - \max_k K_k) & \text{if } Y_{\argmax_k K_k} = 0\end{cases}
\end{aligned}$$ where $Y$ is a one-hot (random) vector for the GT class.
As $Y$ is a one-hot vector, $Y_{\argmax_k K_k} = 1$ means that the
prediction is *correct*, whereas $Y_{\argmax_k K_k} = 0$ shows an
*incorrect* prediction.
When the prediction is correct, we gain $\log c(x)$ reward (or lose
$-\log c(x)$ reward). If we were very confident, we would gain the most.
This encourages the network to have a high $c(x)$, i.e., make the
feature representation of $X$ even closer to the current centroid. We
are optimizing correct predictive uncertainty estimation by $c(x)$. When
the prediction is incorrect, we gain $\log (1 - c(x))$ reward. We repel
the current centroid.
We upper bound Equation [\[eq:loss\]](#eq:loss){reference-type="ref"
reference="eq:loss"} with a familiar loss function, BCE. When
$Y_{\argmax_k K_k} = 1$ (upper branch), we write
$$-\log \max_k K_k = -\sum_k Y_k\log K_k$$ because $Y$ is a one-hot
vector. We also have
$$-\log (1 - \max_k K_k) = -\log \min_k (1 - K_k) = \max_k - \log(1 - K_k)$$
where we used for the last equality that $\log$ is monotonically
increasing. When $Y_{\argmax_k K_k} = 0$ (lower branch), this can be
bounded from above as $$\begin{aligned}
\max_k \underbrace{-\log(\underbrace{1 - K_k}_{\in (0, 1)})}_{\in (0, +\infty)} &\le \sum_{k: Y_k = 0} -\log(1 - K_k)\\
&= \sum_k -(1 - Y_k)\log(1 - K_k).
\end{aligned}$$ Thus, we finally have that $$\begin{aligned}
\cL &\le \begin{cases} \overbrace{-\sum_k Y_k \log K_k}^{> 0} & \text{if } Y_{\argmax_k K_k} = 1 \\ \underbrace{-\sum_k (1 - Y_k)\log(1 - K_k)}_{> 0} & \text{if } Y_{\argmax_k K_k} = 0\end{cases}\\
&\le -\sum_k \left(Y_k \log K_k + (1 - Y_k) \log(1 - K_k)\right).
\end{aligned}$$ The authors
of [@https://doi.org/10.48550/arxiv.2003.02037] optimize this
upper-bound proxy loss on a finite (deterministic) dataset
$\{(x_i, y_i)\}_{i = 1}^N$. The loss is the sum of BCEs of one-vs-rest
classifications where $K_k$ is our predicted probability of membership
of class $k$ for sample $x$. This advocates the use of this form of BCE
for optimizing our classifiers. This encourages correct predictive
uncertainty ($L = 1$) reports for $c(x) := \max_k K_k(f(x), \mu_k)$.
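A sketch of this one-vs-rest BCE objective over RBF similarities might look as follows; the length scale `sigma` and the centroid matrix `centroids` (shape $(C, D)$) are assumed inputs.

``` python
import torch
import torch.nn.functional as F


def rbf_similarities(feats, centroids, sigma):
    # feats: (B, D), centroids: (C, D); returns K_k(f(x), mu_k) in (0, 1], shape (B, C).
    d2 = ((feats.unsqueeze(1) - centroids.unsqueeze(0)) ** 2).sum(dim=-1)
    return torch.exp(-d2 / (2 * sigma ** 2))


def duq_loss(feats, targets, centroids, sigma, num_classes):
    K = rbf_similarities(feats, centroids, sigma)
    Y = F.one_hot(targets, num_classes).float()
    # Sum of one-vs-rest binary cross-entropies: the upper-bound proxy derived above.
    return F.binary_cross_entropy(K, Y, reduction="sum")
```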
A remaining problem is that the class centroids
$\mu_k = \frac{1}{N_k}\sum_{i: y_i = k}f(x_i)$ are continuously updated
during training. These are needed for all $K_k$ and for all $x$. Suppose
we recompute centroids every time we update our parameters. In that
case, we will have a very noisy training procedure, as the targets
(centroid means) are constantly moving, and we are trying to chase after
them for the right class for each sample. Also, recalculating these for
the entire dataset after every network update is infeasible. To solve
both, we use a moving average for more stable centroid estimation at
each iteration and more stable training: $$\begin{aligned}
N_k &\gets \gamma N_k + (1 - \gamma)n_k\\
m_k &\gets \gamma m_k + (1 - \gamma) \sum_{i \in \mathrm{minibatch}: y_i = k}f(x_i)\\
\mu_k &\gets \frac{m_k}{N_k}.
\end{aligned}$$ where
- $N_k$ is the "soft" number of samples per class in mini-batch: It is
the moving average of the number of samples per class in mini-batch.
This changes over iterations; we also need to smooth this out.
- $m_k$ is the moving average of the sum of class $k$ sample features.
- $\mu_k$ is the average feature location (centroid) for class $k$.
- $n_k$ is the number of samples per class in the current mini-batch.
- $\gamma \in [0.99, 0.999]$ corresponds to the momentum term in the
moving average. To make learning stable, it is chosen to be quite
high.
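A sketch of this exponential-moving-average centroid update per mini-batch is given below; the running statistics `N_ema` (shape $(C,)$) and `m_ema` (shape $(C, D)$) are assumed to be initialized beforehand.

``` python
import torch
import torch.nn.functional as F


@torch.no_grad()
def update_centroids(feats, targets, N_ema, m_ema, num_classes, gamma=0.999):
    # feats: (B, D) mini-batch features; N_ema: (C,), m_ema: (C, D) running statistics.
    one_hot = F.one_hot(targets, num_classes).float()     # (B, C)
    n_k = one_hot.sum(dim=0)                              # samples per class in this batch
    sum_k = one_hot.t() @ feats                           # (C, D) per-class feature sums
    N_ema.mul_(gamma).add_((1 - gamma) * n_k)
    m_ema.mul_(gamma).add_((1 - gamma) * sum_k)
    return m_ema / N_ema.clamp_min(1e-12).unsqueeze(-1)   # current centroids mu_k
```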
**Note**: When a training sample has high aleatoric uncertainty, it will
be positioned between likely centroids at the end of training. When a
training sample has low aleatoric uncertainty, it will be very closely
clustered to the correct class. When an OOD sample comes, it will have
low confidence. However, we can also get low confidence for samples with
high aleatoric uncertainty. We cannot distinguish these two cases based
on the confidence value. This work just attributes low confidence to
epistemic uncertainty.
#### Results of RBF Kernel
We first discuss Figure [4.38](#fig:rbf){reference-type="ref"
reference="fig:rbf"} that showcases qualitative results. The confidence
estimate successfully distinguishes the two sources of data (ID, OOD).
In distribution, the maximal kernel similarity[^86] is very high, and
samples are well clustered in the feature space. Out of distribution,
samples tend to have different maximal kernel similarities than one. We
qualitatively conclude that $c(x) = \max_k K_k(f(x), \mu_k)$ is a good
indicator of how to separate OOD samples from ID samples after training
the network. One can find the best separating threshold (or just report
AUROC or AUPR).
![Results of RBF kernel confidence estimation. In distribution, the
kernel similarities are all high, i.e., the mapped samples are
concentrated in high-density regions of the feature space. Out of
distribution, the similarities are mixed, meaning there are a variety of
samples "the model is not familiar with." ID dataset is CIFAR-10, OOD
dataset is SVHN. Figure taken
from [@https://doi.org/10.48550/arxiv.2003.02037].](gfx/04_res.png){#fig:rbf
width="0.5\\linewidth"}
We also discuss quantitative results shown in
Table [4.4](#tab:quant){reference-type="ref" reference="tab:quant"}.
Results are quantified using the AUROC score. ("Can we separate ID from
OOD based on $c(x)$ predictions?") DUQ corresponds to deterministic
uncertainty quantification using the confidence score
$c(x) = \max_k K_k(f(x), \mu_k)$. The name highlights that they do not
have to stochastically train multiple (or even an infinite number of)
models to obtain epistemic uncertainty estimates. LL ratio is a method
we do not discuss. 'Single model' denotes DUQ trained with
softmax-cross-entropy. It uses the same $c(x)$ formulation but is
trained with the usual softmax-cross-entropy loss. As shown in the
Table, the method gives good results after training with the proposed
objective (DUQ).
::: {#tab:quant}
Method AUROC
----------------------------- -------
DUQ 0.955
LL ratio (generative model) 0.994
Single model 0.843
5 - Deep Ensembles (ours) 0.861
5 - Deep Ensembles (ll) 0.839
Mahalanobis Distance (ll) 0.942
: AUROC results on FashionMNIST, with MNIST being the OOD set. DUQ
(using $c(x) = \max_k K_k(f(x), \mu_k)$) outperforms most
methods."Deep Ensembles is by Lakshminarayanan et al. (2017),
Mahalanobis Distance by Lee et al. (2018), LL ratio by Ren et al.
(2019). Results marked by (ll) are obtained from Ren et al. (2019),
(ours) is implemented using our architecture. Single model is our
architecture, but trained with softmax/cross
entropy." [@https://doi.org/10.48550/arxiv.2003.02037] Table taken
from [@https://doi.org/10.48550/arxiv.2003.02037].
:::
### Summary of Modeling Epistemic Uncertainty
We have seen two general ways of modeling epistemic uncertainty. In
Bayesian ML, we train a set of models simultaneously. We measure their
disagreement during inference through Bayesian model averaging (BMA). We
can also choose to measure distances in the feature space. In
particular, we can compute the distance to the closest class centroid in
the feature space to get a sense of how surprising an input sample is.
Both have been successfully applied to the problem of OOD detection,
which is a proxy task for epistemic uncertainty.
## Modeling Aleatoric Uncertainty
As we have seen, aleatoric uncertainty refers to "I do not know because
there are multiple plausible answers." This happens when true label $y$
is not a deterministic function of input $x$, as multiple possibilities
could be an answer for input $x$.
Below, we give a high-level overview of the ingredients we will use to
represent aleatoric uncertainty.
### Roadmap to Representing Aleatoric Uncertainty {#ssec:roadmap}
There are *two ingredients* that are used together for the recipe of
representing aleatoric uncertainty.
**Architecture.** Formulate a model architecture that accommodates
multiple possible outputs. We should prepare, e.g., a probabilistic
output where our model outputs the parameters of this output
distribution rather than a single prediction.
**Loss function.** Use a proper scoring rule that matches the predicted
output distribution to the one dictated by the dataset (examples are
discussed in
Sections [4.13.2](#ssec:au_classification){reference-type="ref"
reference="ssec:au_classification"}
and [\[ssec:au_regression\]](#ssec:au_regression){reference-type="ref"
reference="ssec:au_regression"}); this is often sufficient.
Let us follow our recipe and extend proper scoring rules to more generic
distributions, as this will allow us to recover truthful aleatoric
uncertainty estimates. We start with matching output distributions in
classification.
### Aleatoric Uncertainty In Classification {#ssec:au_classification}
As discussed previously, aleatoric uncertainty refers to the inherent
variability of the labels, i.e., the non-deterministic nature of the
data generating process. To *represent* aleatoric uncertainty, we would
like our model to output a *distribution* which is faithful to
$P(Y \mid X = x)$.
#### Proper Scoring Rules to the Rescue, Again
So far, our discussion centered around binary distributions, where we
tried to match a confidence value $c(x)$ to the true probability of an
event, such as $P(L = 1)$, where $L$ represents the correctness of
prediction. Here, both $c(x)$ and $P(L = 1)$ corresponded to the
parameters of respective Bernoulli distributions. By matching the
Bernoulli parameters, we were also matching the Bernoulli distributions.
To achieve this, we leveraged (strictly) proper scoring rules.
We now extend the notion of proper scoring rules to general discrete
distributions. In particular, we want to match a distribution $Q$ (a
categorical distribution encoded by a vector of probabilities) to the
true discrete distribution $P$. Let $y$ be a sample of distribution
$P(Y)$ -- for example, the GT class index. We then define the scoring
rule as a function $S(Q, y)$. Arguments are $Q$ (the predicted
distribution) and $y$ (a sample from true distribution $P(Y)$). This
scoring rule is *strictly proper* when the expected score
$\nE_{P(Y)} S(Q, Y)$ is maximized iff $Q \equiv P$ (i.e., when the
distributions match). $S$ may also be described as a function of $Q$'s
parameters (e.g., the parameter vector of a categorical distribution or
the parameter of a Bernoulli distribution) rather than $Q$ itself.
If we want, we can then further compress $Q$ into a scalar. The
aleatoric confidence can be, e.g., given by the max-prob
$\max_k Q(Y = k)$, or by the entropy of the predicted distribution,
$\nH(Q)$.
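For concreteness, here is a minimal sketch (ours, not from any referenced work) of compressing a predicted categorical distribution $Q$ into such scalars:

```python
import numpy as np

def max_prob_confidence(q):
    """Confidence as the largest predicted class probability."""
    return np.max(q)

def entropy_uncertainty(q, eps=1e-12):
    """Uncertainty as the entropy H(Q); lower entropy means higher confidence."""
    q = np.clip(q, eps, 1.0)
    return -np.sum(q * np.log(q))

q = np.array([0.7, 0.2, 0.1])      # predicted distribution Q
print(max_prob_confidence(q))      # 0.7
print(entropy_uncertainty(q))      # ~0.80 nats
```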
Let us discuss some popular proper scoring rules for matching predicted
categorical distributions, encoded by softmax outputs $f(x)$, to the GT
distributions with GT probabilities
$P(Y = y \mid X = x)\ \forall y \in \cY$ from which we can only sample.
#### Log Probability Scoring Rule
The log probability scoring rule (negative CE) for categorical
distributions is defined as
$$S(f, y) = \sum_k y_k \log f_k(x) = \log f_y(x),$$ where $y$ is the
true class.[^87] It can be shown that $S$ defined this way is a strictly
proper scoring rule, i.e. $$\nE_{P(Y)}S(f, Y)$$ is maximal iff
$$f_k(x) = P(Y = k \mid x)\ \forall k \in \{1, \dotsc, C\}.$$ This is
great news! Many DNNs already minimize a NLL loss of the form
$$\cL = -\sum_k y_k \log f_k(x) = -\log f_y(x),$$ which means they are
already matching their predictions to the aleatoric uncertainty of a
data source. Since it is a proper scoring rule, in the expectation of
$Y$, we encourage our DNN to predict $f(x)$ that correctly represents
the spread of $P(Y \mid X = x)$ in the training set.
#### Multi-Class Brier Scoring Rule
To match a probability vector encoding a categorical distribution to the
true distribution $P(Y \mid X = x)$, we can also use the Brier scoring
rule. Consider a predicted probability vector $f(x) \in [0, 1]^K$ with
$\sum_{k=1}^K f_k(x) = 1$ and a categorical random variable
$Y \in \{1, \dots, K\}$. The multi-class Brier scoring rule is defined
as $$S(f(x), y) = -(1 - f_y(x))^2 + f_y(x)^2 - \sum_{k=1}^K f_k(x)^2.$$
::: claim
The multi-class Brier score is a strictly proper scoring rule for
aleatoric uncertainty.
:::
::: proof
*Proof.* First, we rewrite $\nE_{P(Y \mid X = x)} S(f, Y)$ as
$$\begin{aligned}
\nE_{P(Y \mid X = x)} S(f, Y) &= \sum_{k=1}^K P(Y = k)\left[-(1 - f_k(x))^2 + f_k(x)^2 - \sum_{l=1}^K f_l(x)^2\right]\\
&= \sum_{k=1}^K P(Y = k) \left[-f_k(x)^2 + 2f_k - 1 + f_k(x)^2 - \sum_{l=1}^K f_l(x)^2\right]\\
&= -\sum_{k=1}^K \left[P(Y = k)(1 - 2f_k(x)) + \sum_{l=1}^K P(Y = k)f_l(x)^2\right]\\
&= -\sum_{k=1}^K P(Y = k)(1 - 2 f_k(x)) - \sum_{l=1}^K f_l(x)^2\\
&= -\sum_{k=1}^K \left[P(Y = k)(1 - 2f_k(x)) + f_k(x)^2\right]
\end{aligned}$$ for all $f \in \Delta^K$ which is the
($K - 1$)-dimensional probability simplex.
A necessary condition for the maximizer of the Brier scoring rule in
expectation is as follows.
$\forall r \in \{1, \dots, K\}, f \in \Delta^K$: $$\begin{aligned}
\frac{\partial}{\partial f_r}\left(-\sum_{k=1}^K \left[P(Y = k)(1 - 2f_k(x)) + f_k(x)^2\right]\right) &= -\sum_{k=1}^K \frac{\partial}{\partial f_r}\left[P(Y = k)(1 - 2f_k(x)) + f_k(x)^2\right]\\
&= -\frac{\partial}{\partial f_r}\left[P(Y = r)(1 - 2f_r(x)) + f_r(x)^2\right]\\
&= -(-2P(Y = r) + 2f_r(x)) \overset{!}{=} 0\\
&\iff f_r(x) = P(Y = r).
\end{aligned}$$ As
$\frac{\partial}{\partial f_r} \left(-[-2P(Y = r) + 2f_r(x)]\right) = -\left(0 + 2 \right) = -2 < 0\ \forall r \in \{1, \dots, K\}, f \in \Delta^K$,
the stationary point above, $f(x) \equiv P(Y \mid X = x)$, is the unique
maximizer of the multi-class Brier scoring rule's expectation. Therefore,
it is strictly proper. ◻
:::
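A quick numerical sanity check of this claim (our own illustration): sweep predicted distributions $f$ over a grid on the probability simplex and verify that the expected Brier score is maximized at $f = P$.

```python
import numpy as np

P = np.array([0.6, 0.3, 0.1])  # true categorical distribution

def expected_brier(f, P):
    # E_{Y~P}[ -(1 - f_Y)^2 + f_Y^2 - sum_k f_k^2 ]
    return sum(P[y] * (-(1 - f[y]) ** 2 + f[y] ** 2 - np.sum(f ** 2))
               for y in range(len(P)))

best_f, best_score = None, -np.inf
grid = np.linspace(0, 1, 101)
for a in grid:
    for b in grid:
        if a + b <= 1:
            f = np.array([a, b, 1 - a - b])
            s = expected_brier(f, P)
            if s > best_score:
                best_f, best_score = f, s
print(best_f)  # approximately [0.6, 0.3, 0.1] = P
```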
#### Using softmax with the NLL Loss
Let us discuss the most popular setup for classification that uses the
steps introduced in Section [4.13.1](#ssec:roadmap){reference-type="ref"
reference="ssec:roadmap"}.
Using a softmax output with the NLL loss is by far the most common
activation and loss function for (multi-class) classification problems.
Luckily, it is also designed to handle the aleatoric uncertainty in the
true $P(y \mid x)$ distribution, which is potentially multimodal
(according to humans). **Note**: NLL loss = CE loss = softmax CE loss =
log-likelihood loss = negative log probability for classification.
**Ingredient 1.** The softmax output $f(x)$ for input image $x$ has the
right dimensionality (number of classes) to represent any $P(y \mid x)$.
$f(x)$ outputs the parameter vector $p$ of the output categorical
distribution. Therefore, the architectural condition is satisfied. The
model is ready to represent aleatoric uncertainty.
**Ingredient 2.** Is the method also *encouraged* to represent the
*true* aleatoric uncertainty? We consider the loss function
$-\log f_Y(x)$. We have seen that, in expectation of $Y$, it guides the
model to produce the GT distribution $P(y \mid x)$.
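A minimal PyTorch sketch of the two ingredients; the architecture, input dimensionality, and data below are placeholders, not a specific model from the literature.

```python
import torch
import torch.nn as nn

K = 10  # number of classes
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, K))  # logits

x = torch.randn(8, 32)          # batch of inputs
y = torch.randint(0, K, (8,))   # GT labels, sampled from P(Y | X = x)

logits = model(x)
loss = nn.functional.cross_entropy(logits, y)  # NLL of softmax(f(x)) at y

# Ingredient 1: softmax turns the K logits into the parameter vector of a
# categorical distribution -- the model's estimate of P(Y | X = x).
probs = logits.softmax(dim=-1)
# Ingredient 2: the NLL (cross-entropy) loss is a strictly proper scoring rule.
loss.backward()
```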
#### Toy experiment with the NLL loss
Importantly, contrary to what many people suggest, our DNN is *not*
encouraged by the NLL loss to be overconfident (nearly one-hot) about
$\argmax_k P(Y = k \mid X = x)$. The loss encourages the model to produce
distributions $f(x)$ with variance when the true distribution also has a
non-zero variance. This is, of course, considering infinite data. For
finite datasets where the model usually does not see two labelings for a
single data point, it can arbitrarily overfit to the given labeling for
each datum (given sufficient expressivity). This makes the model *not*
predict the true aleatoric uncertainty for a sample (only the empirical
probability), which can result in the model being extremely
overconfident in one of the possible answers. This can possibly be
mitigated by adversarial training (enlarging the region where class $y$ is
predicted) or by regularizing the model based on how many data points we
have.
![Homoscedastic 2D class-wise Gaussian dataset in a binary
classification setting, used for the experiment in the
[notebook](https://colab.research.google.com/drive/1ao7oyRoye2uPnfk7NFhz5AujH-jAMxAd?usp=sharing).](gfx/04_data.pdf){#fig:dset
width="0.6\\linewidth"}
We provide a
[notebook](https://colab.research.google.com/drive/1ao7oyRoye2uPnfk7NFhz5AujH-jAMxAd?usp=sharing)
to clear up the possible misconception that the NLL loss encourages
overconfidence. The dataset is generated from two
homoscedastic 2D Gaussians with a small overlap near $(0, 0)$
(Figure [4.39](#fig:dset){reference-type="ref" reference="fig:dset"}).
The task is binary classification. Since we know the Gaussians that
generate each class, we can calculate the true probability that a sample
$x$ is of class 0: $$\begin{aligned}
P(Y = 0 \mid X = x) &= \frac{P(X = x \mid Y = 0)P(Y = 0)}{P(X = x \mid Y = 0)P(Y = 0)+P(X = x \mid Y = 1)P(Y = 1)}\\
&= \frac{P(X = x \mid Y = 0)}{P(X = x \mid Y = 0) + P(X = x \mid Y = 1)}\\
&= \frac{\cN(x \mid \mu_0, \Sigma_0)}{\cN(x \mid \mu_0, \Sigma_0) + \cN(x \mid \mu_1, \Sigma_1)}
\end{aligned}$$ where we assumed a uniform label prior. This is just the
ratio of the likelihood of $x$ being a part of class 0 and the total
likelihood of it being a part of any of the two. We visualize the
predicted ($f_0(x)$) and GT ($P(y \mid x)$) probabilities in
Figure [4.40](#fig:predgt){reference-type="ref" reference="fig:predgt"}.
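The GT posterior above is just a ratio of the two class-conditional Gaussian densities; a small sketch (with made-up means and a shared identity covariance, mirroring the homoscedastic setup) is given below.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical class-conditional Gaussians (homoscedastic, uniform label prior).
mu0, mu1 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
Sigma = np.eye(2)

def p_class0_given_x(x):
    l0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma)
    l1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma)
    return l0 / (l0 + l1)  # P(Y = 0 | X = x) under a uniform label prior

print(p_class0_given_x(np.array([0.0, 0.0])))    # 0.5 on the decision boundary
print(p_class0_given_x(np.array([-2.0, -2.0])))  # close to 1
```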
![Predicted and ground truth probabilities from the experiment in the
[notebook](https://colab.research.google.com/drive/1ao7oyRoye2uPnfk7NFhz5AujH-jAMxAd?usp=sharing).
The two probability maps are almost
indistinguishable.](gfx/04_res.pdf){#fig:predgt width="0.6\\linewidth"}
We have very high GT label certainty for the lower and upper triangles.
On the diagonal, the GT label certainties are close to 0.5, signaling
high aleatoric uncertainty. The question is: If we train with this data,
does a 2-layer DNN predict something close to this after applying
sigmoid? The model will observe mixed supervision near the class
boundary $x_1 + x_2 = 0$. (It is also not expressive enough to overfit
to the training set and produce incorrect aleatoric uncertainties. We
have enough data points.) Such mixed supervision and the NLL objective
result in the correct estimation of $P(y \mid x)$.
The model outputs closely resemble the true $P(Y = 0 \mid X = x)$ at
nearly all $x$ values. The model *can* learn correct aleatoric
uncertainty estimation (as supported by the theory of proper scoring
rules). We can see the pointwise difference between $f_0(x)$ and
$P(Y = 0 \mid X = x)$ in
Figure [\[fig:res2\]](#fig:res2){reference-type="ref"
reference="fig:res2"}.
# Evaluation and Scalability
## Benchmarks and Evaluation
In this section, we will see common pitfalls of evaluations in
trustworthy machine learning.
### Why do we do evaluation?
Evaluation enables the ranking of methods: it gives a one-dimensional axis
on which different methods occupy different positions. We can design new
methods that improve on previous ones according to the metric and thereby
advance the field. We often compare to prior state-of-the-art (SotA)
methods, but comparing a method's performance against human performance
often also makes sense.
Sometimes there is also a derived theoretical upper bound for
performance, either from previous works or our current work. When a
model goes over a theoretical upper bound, one has to explain how that
is possible. Either there is a bug in the evaluation, the upper bound is
flawed, or the model assumes a different set of ingredients than the
upper bound.
It is essential to talk about evaluation because it is hard to do it
right. There have been many cases in the literature where the evaluation
was wrong, and the field had to pay a huge price for that.
### What are the costs of wrong evaluation?
![The trend according to
papers](gfx/PaperClaimsOverTime.pdf){width="\\textwidth"}
![The trend according to
reality](gfx/RealityOverTime.pdf){width="\\textwidth"}
For example, we consider a [metric learning
benchmark](https://arxiv.org/abs/2003.08505). The "expectation vs.
reality" check is shown in
Figure [\[fig:claims\]](#fig:claims){reference-type="ref"
reference="fig:claims"}. We first discuss what the papers claim over
time. The colors correspond to different datasets for measuring metric
learning performance. The contrastive loss is the starting point of
metric learning methods that the later works built upon. This is the
standard method we would use to learn a deep metric representation
space. Over time, people have developed complicated tricks to improve
upon the baselines and the previous year's SotA method. Importantly,
*there is a clear upward trend*.
However, the actual reality is *much worse*. The paper unveils many
details of the unfair comparisons that lead to the distorted results
seen above. In particular, if we tune the hyperparameters super well for
contrastive learning and the recent (so-called) SotA methods and
consider a fair comparison of them, we get almost the same performance
among the methods.
**The costs of the wrong evaluation above are severe.**
For researchers, over four years of effort went into pursuing a flawed
evaluation protocol in which the set of ingredients is not unified across
methods (e.g., how much tuning effort is spent on previous works). We have
a false sense of improvement over time. This also translates to
opportunity cost: What if researchers had been satisfied with the
contrastive loss and worked on other "real" challenges instead of all
these complicated methods?
Practitioners need to select the loss function for their business
problems. They waste time looking into all these recent methods,
although the most straightforward solution (contrastive loss) probably
gives them a good result and requires much less human effort to get it
working. This leads to a misinformed selection of methods based on the
wrong ranking. They suffer the cost of neglecting a simple solution that
works equally well.
::: information
Similar "evaluation scandals" in many CV and ML tasks We consider a list
of similar cases in ML where poor evaluation wasted human effort and
money. Typically the papers unveiling the problems with the evaluations
tend to be very entertaining to read and interesting; thus, we recommend
reading them. They can also be very valuable for practitioners who want
an unbiased and correct evaluation of methods they can choose from.
- **Face detection**: Mathias "[Face Detection without Bells and
Whistles](https://link.springer.com/chapter/10.1007/978-3-319-10593-2_47)" [@mathias2014face].
ECCV'14.
- **Zero-shot learning**: Xian "[Zero-Shot Learning -- The Good, the
Bad and the
Ugly](https://openaccess.thecvf.com/content_cvpr_2017/html/Xian_Zero-Shot_Learning_-_CVPR_2017_paper.html)" [@xian2017zero].
CVPR'17.
- **Semi-supervised learning**: Oliver "[Realistic Evaluation of Deep
Semi-Supervised Learning
Algorithms](https://proceedings.neurips.cc/paper/2018/hash/c1fea270c48e8079d8ddf7d06d26ab52-Abstract.html)" [@oliver2018realistic].
NeurIPS'18.
- **Unsupervised disentanglement**: Locatello "[Challenging Common
Assumptions in the Unsupervised Learning of Disentangled
Representations](https://proceedings.mlr.press/v97/locatello19a.html)" [@locatello2019challenging].
ICML'19.
- **Image classification**: Recht "[Do ImageNet Classifiers Generalize
to
ImageNet?](http://proceedings.mlr.press/v97/recht19a.html)" [@https://doi.org/10.48550/arxiv.1902.10811]
ICML'19.
- **Scene text recognition**: Baek "[What is Wrong with Scene Text
Recognition Model Comparisons? Dataset and Model
Analysis](https://openaccess.thecvf.com/content_ICCV_2019/html/Baek_What_Is_Wrong_With_Scene_Text_Recognition_Model_Comparisons_Dataset_ICCV_2019_paper.html)" [@baek2019wrong].
ICCV'19.
- **Weakly-supervised object localization**: Choe "[Evaluating
Weakly-Supervised Object Localization Methods
Right](https://openaccess.thecvf.com/content_CVPR_2020/html/Choe_Evaluating_Weakly_Supervised_Object_Localization_Methods_Right_CVPR_2020_paper.html)" [@choe2020evaluating].
CVPR'20.
- **Deep metric learning**: Musgrave "[A Metric Learning Reality
Check](https://link.springer.com/chapter/10.1007/978-3-030-58595-2_41)" [@https://doi.org/10.48550/arxiv.2003.08505].
ECCV'20.
- **Natural language QA**: Lewis "[Question and Answer Test-Train
Overlap in Open-Domain Question Answering
Datasets](https://aclanthology.org/2021.eacl-main.86.pdf)" [@https://doi.org/10.48550/arxiv.2008.02637].
ArXiv'20.
These papers cover many domains in CV and NLP in general.
:::
### "Recipes" for Wrong Benchmark Evaluation
What are the typical patterns in wrong benchmarking/evaluation? We
provide an incomplete list of possible failure modes.
#### Everyone writes their own evaluation metric code.
Even if things are mathematically the same, when it comes to coding,
everyone has different ways of handling corner cases. There are
non-trivial code-level details in some evaluation metrics. For example,
for computing average precision (AP = AUPR), how should we handle
precision values for high-confidence bins where the threshold is very
high, and thus there are no positive predictions at all? In such cases
$$\operatorname{Precision}(p) = \frac{|\operatorname{TP}(p)|}{|\operatorname{TP}(p)| + |\operatorname{FP}(p)|} = \frac{0}{0}.$$
This is undefined. Some argue that it should be considered 0, some
decide to use 1, and others say it should be excluded from the integral
computation (for calculating the AP). There must be some agreement on
handling such cases in practice. What probably works best in these cases
is to have an evaluation server or a library for computing the metrics.
That way, we can ensure that all methods use the same implementations of
metrics.
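To make the ambiguity concrete, here is a small sketch (our own illustration, not any library's implementation) in which the same inputs yield different precision values at a high threshold, depending purely on the chosen convention:

```python
import numpy as np

def precision_at_threshold(scores, labels, threshold, empty_convention="skip"):
    """Precision over predictions with score >= threshold.
    If no sample is predicted positive, the result is 0/0 and depends on the convention."""
    predicted_pos = scores >= threshold
    if predicted_pos.sum() == 0:
        return {"zero": 0.0, "one": 1.0, "skip": np.nan}[empty_convention]
    tp = np.logical_and(predicted_pos, labels == 1).sum()
    return tp / predicted_pos.sum()

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 0, 1, 0])
# At a very high threshold there are no positive predictions at all:
for conv in ["zero", "one", "skip"]:
    print(conv, precision_at_threshold(scores, labels, 0.95, conv))
```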
#### Confounding multiple factors when comparing methods.
An example is shown in Figure [5.1](#fig:confound){reference-type="ref"
reference="fig:confound"}. Consider the paper "[Sampling Matters in Deep
Embedding
Learning](https://arxiv.org/abs/1706.07567)" [@https://doi.org/10.48550/arxiv.1706.07567].
They argue, based on a benchmark, that their novel loss function is what
brings them gains. But do the improvements really come from the loss
function? They do not disclose that the architectures used for training
with the respective losses were different. In particular, for training
with their loss, they used a more modern architecture (ResNet-50) than for
the others (GoogleNet and Inception-BN, which are archaic). It is then
naturally expected that a ResNet-50 performs better than a GoogleNet. By
confounding multiple factors, it becomes hard to rank the losses alone.
![Example of a scenario where some part of the setup stays hidden (the
used architecture) that would clarify an unfair comparison of methods.
The first column is not shown in the original paper, although it strongly
influences the results. Base figure taken
from [@https://doi.org/10.48550/arxiv.1706.07567].](gfx/05_confound.pdf){#fig:confound
width="0.8\\linewidth"}
#### Hiding extra resources needed to make improvements.
We mention the work "[What Is Wrong With Scene Text Recognition Model
Comparisons? Dataset and Model
Analysis](https://arxiv.org/abs/1904.01906)" [@https://doi.org/10.48550/arxiv.1904.01906].
If we only care about accuracy, we might be missing the other important
axis: computational cost and efficiency. When we only look at an
accuracy plot, we are more inclined to select the method with the
highest accuracy. However, if we also considered the inference time
(latency) or other computational costs, maybe we would want a different
method than the one with the highest accuracy (that is a bit less
accurate but much faster).
#### Training and test samples overlap.
This problem is illustrated in
Table [\[tab:overlap\]](#tab:overlap){reference-type="ref"
reference="tab:overlap"}. We consider the paper "[Question and Answer
Test-Train Overlap in Open-Domain Question Answering
Datasets](https://aclanthology.org/2021.eacl-main.86.pdf)" [@https://doi.org/10.48550/arxiv.2008.02637].
In this work, a general problem is highlighted where a fraction of the
test sets overlap with the training set for the natural language Q&A
task.
::: {#tab:overlap}
  Dataset             Answer overlap (%)   Question overlap (%)
  ------------------- -------------------- ----------------------
  Natural Questions   63.6                 32.5
  TriviaQA            71.7                 33.6
  WebQuestions        57.9                 27.5

  : Fraction of test answers and test questions that also appear in the
  training set of three open-domain Q&A
  benchmarks [@https://doi.org/10.48550/arxiv.2008.02637].
:::
Sometimes, our evaluation set is contaminated: We see many test samples
during training. The Natural Questions, TriviaQA, and WebQuestions
datasets are popular benchmarks for the Q&A task in NLP. It turns out
that for all three datasets, there is a $> 50\%$ answer overlap (up to
$70\%$!) with the test answers. By memorizing the training answers, it
becomes much easier for the model to produce a good answer at test time.
Questions are also overlapping quite a bit, as shown in
Table [\[tab:overlap2\]](#tab:overlap2){reference-type="ref"
reference="tab:overlap2"}. The authors show that the models solve the
task by memorizing rather than generalizing. Many models achieve $0\%$
accuracy for no overlap samples.
::: {#tab:overlap2}
  Model        Total   Question overlap   Answer overlap only   No overlap
  ------------ ------- ------------------ --------------------- ------------
  T5-11B+SSM   36.6    77.2               22.2                  9.4
  BART         26.5    67.6               10.2                  0.8
  Dense        26.7    69.4               7.0                   0.0
  TF-IDF       22.2    56.8               4.1                   0.0

  : Q&A accuracy on test subsets with different degrees of train-test
  overlap [@https://doi.org/10.48550/arxiv.2008.02637].
:::
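As a back-of-the-envelope illustration of the overlap analysis above (ours; the actual study uses answer normalization and fuzzier matching), measuring such an answer overlap could look like this:

```python
def answer_overlap_fraction(train_answers, test_answers):
    """Fraction of test answers that also appear (verbatim) among training answers."""
    train_set = {a.strip().lower() for a in train_answers}
    hits = sum(a.strip().lower() in train_set for a in test_answers)
    return hits / len(test_answers)

train = ["Paris", "1969", "Albert Einstein", "blue whale"]
test = ["paris", "Mount Everest", "1969"]
print(answer_overlap_fraction(train, test))  # 2/3
```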
#### Lack of validation set.
This problem is shown in
Figure [5.4](#fig:imagenet_overfit){reference-type="ref"
reference="fig:imagenet_overfit"}. Sometimes, there is no published
validation set. The CIFAR and ImageNet classification benchmarks lack
validation sets. To be precise, ImageNet has a validation set but not a
test set. Therefore, people use the validation set as a test set. This
brings many problems. When there is an improvement on the ImageNet
validation set benchmark, it is usually the pointwise samples in the
validation set that are addressed rather than the general image
classification task. There have been questions like "Are we solving
ImageNet or image classification?".
Another question for the same narrative is "[Do ImageNet Classifiers
Generalize to
ImageNet?](https://arxiv.org/abs/1902.10811)" [@https://doi.org/10.48550/arxiv.1902.10811].
In this case, the design choices and hyperparameter tuning are performed
over the test set, spoiling its measure of generalization. There is some
evidence that ImageNet classifiers do not generalize to ImageNet, and
they are overfitted to the test set. The same holds for CIFAR. The
authors of the referenced paper collected another ImageNet validation
set, following the same collection procedure. They found that compared
to the original ImageNet validation set, the version 2 validation set
accuracy is notably lower. If we plot the performances of individual
models on the original validation set against the corresponding
performances on the version 2 validation set as a scatter plot, it tends
to follow a line below the $x = y$ line, indicating that the models do
not seem to generalize well. The models' performance drop on new samples
from the same distribution is evidence that design choices have been
overfitted to the test set over time.
![Comparison of model accuracy on the original test sets vs. the new
test sets collected by the authors. Ideal reproducibility is the line of
identity -- the performances should not differ at all if the models are
not overfit. This is not the case: All models are overfit to both
CIFAR-10 and ImageNet. Figure taken
from [@https://doi.org/10.48550/arxiv.1902.10811].
](gfx/intro_plot_cifar10_without_legend.pdf){#fig:imagenet_overfit
width="\\linewidth"}
![Comparison of model accuracy on the original test sets vs. the new
test sets collected by the authors. Ideal reproducibility is the line of
identity -- the performances should not differ at all if the models are
not overfit. This is not the case: All models are overfit to both
CIFAR-10 and ImageNet. Figure taken
from [@https://doi.org/10.48550/arxiv.1902.10811].
](gfx/intro_plot_imagenet_without_legend.pdf){#fig:imagenet_overfit
width="\\linewidth"}
![Comparison of model accuracy on the original test sets vs. the new
test sets collected by the authors. Ideal reproducibility is the line of
identity -- the performances should not differ at all if the models are
not overfit. This is not the case: All models are overfit to both
CIFAR-10 and ImageNet. Figure taken
from [@https://doi.org/10.48550/arxiv.1902.10811].
](gfx/intro_plot_separate_legend_horizontal.pdf){#fig:imagenet_overfit
width=".75\\linewidth"}
::: information
Practical Pointers for Failure Modes When we start a new research
project in a particular field, how should we find the common failure
modes, and how can we avoid them in evaluation?
A good first thing to check is whether there is any paper about the fair
comparison of all the methods we are interested in / we want to improve
upon.
If not, we have three choices.
1. Write such a paper ourselves to "unify all the numbers". This takes
the most work, but it can be very rewarding.
2. Say that we trust the benchmark because we think there are not so
many complicated ingredients involved in the setup; therefore, there
is not so much room for the researchers to confound multiple factors
during evaluation, e.g., by introducing architectural changes. When
the task and the ingredients are both simple, we might want to trust
the benchmark.
3. Choose to stay skeptical and leave the field until someone performs
a trustworthy unified evaluation.
:::
It is crucial to do the evaluation right; otherwise, we are losing much
money, time, and research effort. There are currently many domains where
this is going sideways, as seen from the list above.
## Scalability
If we look at TML papers, TML is often studied with "toy" datasets.
These have the following properties:
- Low-dimensional data ($\le$ order of 1000 dims per sample).
- Small number of training samples ($\le$ order of 100k samples).
- Benefit: More extensive and precise labels are available per sample.
For example, we have all kinds of attributes labeled for the sample
(not just the task label but also other attribute labels like the
domain or bias label).
- They make quick evaluation possible.
- Controlled experiments are also possible.
- This kind of dataset accommodates complicated methods with many
hyperparameters. Typically we can bring in many of them and tune
them the right way to generate the best results here.
Example datasets used in OOD generalization can be seen in
Figure [2.19](#fig:domainbed){reference-type="ref"
reference="fig:domainbed"}. The real impact, however, comes from results
on large-scale datasets. These have the following properties:
- High-dimensional data ($\ge$ order of 10k dims per sample).
- Large number of training samples ($\ge$ order of 1M samples). These
days this is not that large-scale either; we can go up to 1B samples
if we have the resources.
- High-quality labels are dearer. Often a large portion of our data is
even unlabelled or very noisily labelled.
- The validation of an idea on large-scale datasets may take
days-weeks. We cannot validate hyperparameter settings super
frequently.
- It is hard to analyze the contributing factors. We do not have all
the labels we had for the toy dataset, and we also do not have the
time and resources to determine which kind of factor contributes to
the performance. It is hard to gain knowledge and insights from this
kind of data. Therefore, we need some simple methods without many
design choices (few hyperparameters).
An example large-scale dataset is the Open Images Dataset (V7),
illustrated in Figure [5.5](#fig:openv7){reference-type="ref"
reference="fig:openv7"}.
![Sample of the Open Images Dataset
(V7) [@OpenImages2].](gfx/05_openimages.png){#fig:openv7
width="\\linewidth"}
### Possible Roadmap to Scaling Up TML
There is a possibility to combine toy-ish data and real data to make an
impact. This is the method of scaling up from toy data to real data.
First, we work with toy data (e.g., MNIST). Here we have to be creative
and propose new ideas (potentially complicated methodology) through
quick experiments and tuning hyperparameters. Our goal is to understand
why things work.
Based on the insights from toy data and a set of candidate tools, we can
go to real data (scaling up). Here we have to identify and remove
unnecessary complexities based on knowledge from toy data and aim for
something simple. Based on the understanding obtained on toy datasets,
we make a good guess about what will work on real data.
The point here is that eventually, to scale up, we need something
simple. Of course, going simple is not always easy. We will see some
examples where simple wins. There are tons more on arXiv and Twitter.
### Simple Wins
#### OOD Generalization
Consider addressing the [OOD
generalization](https://arxiv.org/abs/2007.01434) [@DBLP:journals/corr/abs-2007-01434]
problem and the fair evaluation shown in
Figure [\[tab:lost\]](#tab:lost){reference-type="ref"
reference="tab:lost"}. Tuned fairly, ERM -- the simplest method -- is
not worse at all than other complicated methods. (ERM: Training on the
whole combined dataset without domain label.) We discussed DRO and DANN
in this book.
#### Loss Functions
We have a lot of [different variants of loss
functions](https://arxiv.org/abs/2212.12478) [@Brigato_2022] we can use
for training a classifier. An interesting contrast between a tuned and
an untuned baseline is shown in
Figure [\[fig:untuned\]](#fig:untuned){reference-type="ref"
reference="fig:untuned"}. Here, the baseline is vanilla CE. The red line
is what papers report as the performance of vanilla CE. They say their
method works better than vanilla CE. However, it does not. They just did
not tune the baseline properly. We should always tune baselines as well
as we possibly can and be completely clear about our methodology.
**Note**: It is true that nowadays, reviewers check evaluation setups much
more carefully than in 2017. However, one can still avoid being 100% clear
about how the evaluation was performed. Authors can say that they did the
tuning while hiding that it was done with, e.g., a tiny search window.
They can also, e.g., leave out weight decay from the baseline ("Who cares
about weight decay..."). Weight decay actually turns out to be quite
important: Papers that properly tune hyperparameters for fair comparison
tend to recognize its importance. We can see a mismatch between what
people generally believe and what is actually true. By not being 100%
descriptive, people can still get papers into conferences by claiming they
did the hyperparameter search while leaving out subtle but important
details (what they did not do right).
#### Weakly-Supervised Object Localization
[Weakly-Supervised Object
Localization](https://arxiv.org/abs/2007.04178) [@https://doi.org/10.48550/arxiv.2007.04178]
is another field that had such a scandal. In
Figure [5.6](#fig:camevaluation){reference-type="ref"
reference="fig:camevaluation"}, we see a table adapted from the authors'
work. The coverage of the paper's re-evaluation is extensive. CAM has
been the simplest method for WSOL for a while now. All the other papers
reported better results than CAM in general. However, when we do
everything correctly with the same set of ingredients, CAM is the best
method on average.
![Re-evaluation of various methods on WSOL. CAM, a method from 2016, is
still the best when tuned appropriately. Table adapted
from [@https://doi.org/10.48550/arxiv.2007.04178].](gfx/05_cam.pdf){#fig:camevaluation
width="\\linewidth"}
### One Right Way to Tune Hyperparameters
The rule of thumb we propose is random search.
1. Set a sensible range of hyperparameters by searching the exponential
space. From this heuristic, we could already see that, e.g.,
$10^{-20}$ for the learning rate is not sensible at all; no learning
happens there. We cut off parts of the parameter space that are not
sensible.
2. Once we have this search space defined by sensible ranges of 5-10
hyperparameters, we perform a random search with a fixed number of
iterations/samples (between different methods). Random search in
practice is excellent. The intuition for that: In practice, not all
parameters contribute equally to the performance. In particular,
some might not contribute to it at all, only a selection of them. In
that case, randomly searching the exponential grid is already good
enough because all irrelevant dimensions will not contribute anyway,
and all the search samples in this exponential grid are effectively
just searching in the relevant dimensions of the hyperparameter
space.
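A minimal sketch of step 2 with log-uniform sampling follows; the hyperparameter names, ranges, and the `train_and_evaluate` function are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
search_space = {                      # sensible exponential ranges (step 1)
    "lr":           (1e-5, 1e-1),
    "weight_decay": (1e-6, 1e-2),
    "dropout":      (0.0, 0.5),       # linear range
}

def sample_config(rng):
    cfg = {}
    for name, (lo, hi) in search_space.items():
        if name == "dropout":
            cfg[name] = rng.uniform(lo, hi)
        else:                          # log-uniform for scale-like hyperparameters
            cfg[name] = float(np.exp(rng.uniform(np.log(lo), np.log(hi))))
    return cfg

budget = 50                            # fixed number of samples
configs = [sample_config(rng) for _ in range(budget)]
# best = max(configs, key=lambda c: train_and_evaluate(c))   # hypothetical
```

The same budget (here 50 samples) should be used for every method being compared so that the comparison stays fair.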
Of course, if many hyperparameters contribute to the final result, we
might want to consider more principled techniques, e.g., Bayesian
Optimization. However, the authors have never used it for hyperparameter
search, only random search.
Suppose the viable hyperparameter regions are super non-convex and very
wiggly. In that case, this range-based approach might not work, as we
will probably lose a lot of reasonable solutions by cutting the space.
We hope this is not the case and the loss is close to unimodal in the
space of hyperparameters we are optimizing over. We also assume the
independence of the involved hyperparameters. (And that many of these do
not matter, actually.)
## Transition from "What" to "How"
Now let us consider future research ideas from the authors. We discuss
these from the perspective of going from the "What" to the "How" question
we have seen before. In ML 2.0 we learn $P(X, Z, Y)$ from
$(X, Y)$ ("What") data. If we look at the ingredients, there is a
historical artifact. For ML 1.0, we trained on $(X, Y)$ data, but for ML
2.0, we are still using the same ingredients for solving the derivative
problems. Is this right? Is this going to work? We argue that the answer
is likely no.
### Our Vision
![Different perspectives of different settings. $(X, Y)$ data correspond
to limited ingredients. The upper bound of performance using such data
is also very limited. Using $(X, Y, Z)$ opens much broader
perspectives.](gfx/05_vision.pdf){#fig:vision width="0.8\\linewidth"}
We can go simple in terms of the method, but what could be really
interesting in the future is to collect new types of datasets for
scalability and trustworthiness. We discuss
Figure [5.7](#fig:vision){reference-type="ref" reference="fig:vision"}.
The inner two ovals correspond to the *benchmarking* approach. This is
the typical approach we have been discussing so far. In a fixed
benchmark, everyone uses the same ingredients: an $(X, Y)$ dataset. If
we allow them to use more ingredients, it is no longer a benchmark. It
is unfair. (Although people sometimes still do exactly that.) The *goal*
is to compete for the highest accuracy by using the ingredients most
efficiently and smartly. We want to generate the maximal performance
from a limited set of ingredients. The *key contribution* is usually the
learning algorithm. This approach used to work well for "What" problems.
However, we think we should probably use new types of data (new
ingredients) that also involve $Z$. If we discuss with reviewers, we
learn that this "learning algorithm contribution + using the same
ingredients" is the default mode of thinking for many people.
The outer oval corresponds to the *data hunting* approach. We are still
doing some competition, but we are not confined to using the same
ingredients. Searching for the ingredient itself is part of the game.
When we allow people to use new ingredients, we invite creative new ways
to find cheap sources of information that could give us hints about the
$Z$ data from all kinds of places. Competitors are allowed to use other
ingredients: $(X, Y) + Z$. The *source of value* is the discovery of
new, efficient data sources. This is the future of addressing the "How"
task. This is also the general research direction the authors of this
book want to pursue in the future.
### Data as Compressed Human Knowledge
Data are a *compression of human knowledge*. When training an ML model,
there are typically two sources of human knowledge.
The first comes from the data and is embedded in the ML model through
training. There is a transition of the abstract concept of knowledge in
the real world (from annotators) into a dataset in the computational
domain. Usually, the dataset contains labels crowdsourced by some
annotators.
![The currently dominating types/sources of supervision. Annotators only
give "What" supervision through the labeling process. The "How" signal
is only supplied by ML engineers through the recipe of creating and
training the model.](gfx/05_compressed1a.pdf){#fig:compressed1
width="0.8\\linewidth"}
The second source of human knowledge comes from the validation loop.
There is a transition of knowledge of the ML engineer into the recipes
for training an ML model that they develop over time. A general overview
of this setup is shown in
Figure [5.8](#fig:compressed1){reference-type="ref"
reference="fig:compressed1"}. This is typically how people are
addressing "How" problems now: through the ML engineer's knowledge.
Through many validations, we can find the right setup and design to
achieve the "How" tasks. They use the same kind of $(X, Y)$ dataset and
rely on the ML engineers to encode business intentions, such as:
- "We need more transparency in the model."
- "We need more robustness."
- "We need better OOD generalization."
![A possible future paradigm for types/sources of supervision. Here, the
annotators also provide "How" supervision, which can lead to much more
robust models.](gfx/05_compressed2a.pdf){#fig:compressed2
width="0.8\\linewidth"}
We argue that in the future, we should also look for methods or datasets
for addressing the "How" problem *from the dataset side*. In the future,
"How" will probably also be handled through data collection. This is
illustrated in Figure [5.9](#fig:compressed2){reference-type="ref"
reference="fig:compressed2"}. We wish to not only collect "What"
supervision from the annotators but also information related to the
"How" task. This way, we obtain a new type of dataset that could be very
interesting to the community.
We will now specify two examples of "How" data: We will consider
*interventional data* and *additional supervision* on top of our
standard annotations. An illustration is given in
Figure [5.10](#fig:howdata){reference-type="ref"
reference="fig:howdata"}.
![Different types of "How" data. Interventional data specify the "How"
aspect by breaking spurious correlations that lead to the incorrect
selection of cues. Additional supervision provides explicit new
information to specify our needs more
thoroughly.](gfx/05_how.pdf){#fig:howdata width="0.6\\linewidth"}
### Interventional Data
![Example (input, attribution map) pair that highlights spurious
correlations between the label 'train' and the rails. Considering most
natural images, the model can get away with looking at the rails because
it is quite uncommon to see a train without rails. However, this choice
of cue is misspecified and does not lead to robust generalization. Base
figure taken
from [@https://doi.org/10.48550/arxiv.2203.03860].](gfx/05_railstrains.pdf){#fig:train
width="0.8\\linewidth"}
We will discuss the paper "[Weakly Supervised Semantic Segmentation
Using Out-of-Distribution
Data](https://arxiv.org/abs/2203.03860)" [@https://doi.org/10.48550/arxiv.2203.03860].
First, let us consider an example of the spurious correlation between
trains and rails. If we visualize where our model is looking for the
class 'train', we are probably going to get something like in
Figure [5.11](#fig:train){reference-type="ref" reference="fig:train"}.
The models often look a lot at the rail pixels. This is a well-known
problem. The reason is that if we collect data naturally arising from
the way people take pictures, then we will probably see many images
where the trains are on the rails. Models can recognize trains based on
rails, leveraging the spurious correlation. The "How" learned by the model
is wrong. The existence of spurious correlations already indicates that
interventional data do not arise naturally in such data. (If they
arose naturally, they would have been part of the training data already,
and there would not be any spurious correlation at all.) However, we did
not encode "How" requirements in our dataset. *Therefore, we cannot
expect our model to get it right.*
![Example hard-negative 'train' samples, containing no trains but
still including rails. Base figures taken
from [@https://doi.org/10.48550/arxiv.2203.03860].](gfx/05_hardneg.pdf){#fig:hardneg
width="0.6\\linewidth"}
One way to combat the problem of spurious correlation between rail and
train is to introduce interventional data. If we are more cautious when
collecting data, we can also collect hard-negative images (rail with no
train). This is illustrated in
Figure [5.12](#fig:hardneg){reference-type="ref"
reference="fig:hardneg"}. Hard-negative images are (1) hard because they
target spurious correlations, so the models employing such spurious
correlations will get them wrong; and (2) negative because there is no
train in the image. Here, we explicitly target the possible bias -- we
eliminate "rail" from a plausible set of cues for detecting trains.
![Addition of hard-negative samples for the classes 'car' and 'frog',
containing their corresponding biases but not the objects
themselves.](gfx/05_more_examples.pdf){#fig:morehardneg
width="0.8\\linewidth"}
More examples of interventional data are shown in
Figure [5.13](#fig:morehardneg){reference-type="ref"
reference="fig:morehardneg"}. Of course, as discussed, this kind of data
does not arise very often naturally; thus, there should be a way to go
and find them. We need a data crawling mechanism that supplies such
examples.
#### Efficiently Collecting an Interventional Dataset
![Possible procedure of collecting hard OOD samples. Figure taken
from [@https://doi.org/10.48550/arxiv.2203.03860].](gfx/05_collect.pdf){#fig:collect
width="0.7\\linewidth"}
Our task is to find hard-negative samples for the 'train' class in our
running example. We can do this as illustrated in
Figure [5.14](#fig:collect){reference-type="ref"
reference="fig:collect"}. The candidate dataset does not fully have to
be OOD; it just has to be a large dataset of images. Of course, the more
purely OOD the original dataset is, the more efficiently we can collect
relevant data from it. We compute $p(\text{train})$ by running all
images in the dataset through our classifier. We only keep images with a
'train' score above a certain threshold. This set will not look as nice
as in the figure in practice. There will be a lot of true positives as
well. For manual filtering, we need human labor (HITL). Humans are the
sources of hard-negative knowledge. There is no way to resolve such
spurious correlations without human knowledge. The name "hard OOD dataset"
is equivalent to "hard false positive dataset" and also to "hard negative
dataset."
We need to minimize our costs whenever we use human labor because it is
expensive. The cost depends on two dimensions:
1. **How long does it take a human to remove all the true positive
images?** This is very cheap, as the annotators do not have to draw
a bounding box/segmentation map or classify the image into 1k
classes. It is an easy binary decision (Y/N).
2. **How many hard-negative images are needed per class?** Not many at
all. Considering only one image per class, the mIoU (which is a
measure for telling how much spurious correlation we have, higher is
better) with foreground increases by 2%. Using 100 images per class,
the mIoU with foreground increases by 3%. We have poor mIoU without
any hard negatives. However, one hard-negative image per class
already helps a lot, apparently. If we take more hard negatives per
class, we get diminishing returns as the mIoU performance saturates.
Interventional data are, therefore, a cheap source of "How" information.
For the Pascal VOC dataset with 20 classes, we only need 20 new
hard-negative samples to improve mIoU quite a bit. This is a low-hanging
fruit for new types of data.
![Three-step fine-tuning procedure of ChatGPT. HITL is crucial for
aligned ML models. Figure taken
from [@chatgpt].](gfx/05_gpt.pdf){#fig:chatgpt width="\\linewidth"}
**Interventional data collection is gaining momentum now.** One example
is ChatGPT's fine-tuning, illustrated in
Figure [5.15](#fig:chatgpt){reference-type="ref"
reference="fig:chatgpt"}. The research field seems to return to the HITL
paradigm. This is good because it is the only way to solve this problem.
HITL is used for both InstructGPT and ChatGPT. These improve upon the
original GPT-3 in terms of the safety features precisely because they
also use HITL to fine-tune the models further. Researchers developing
these systems know that humans are the ultimate source of the "How"
information. We are also shifting the distribution (data or output) a
little bit to what humans would consider more appropriate/relevant as
answers during a chat. This introduces an intervention in the data
generation process; we use a novel data source for further training. And
this is what matters: There are all kinds of issues around LLMs, like
inappropriate outputs and jailbreak (making LLMs output inappropriate
things). Humans can teach LLMs "how to behave." Instead of web crawling,
one can use humans to generate samples. This improves trustworthiness
considerably. On the left of
Figure [5.15](#fig:chatgpt){reference-type="ref"
reference="fig:chatgpt"}, humans are used to generate possible answers
to questions. This is quite labor-heavy. On the right, humans are only
used to rank the outputs of models based on their preferences. This
ranking can be used for further fine-tuning with RLHF. This is less
labor-heavy and is quite scalable.
### Introducing Additional Supervision
[ImageNet annotation](https://www.youtube.com/watch?v=AAoFT9xjI58) is
performed as follows. First, annotators receive an object category or
concept at the top of a webpage. Then they have to click on images
containing the concept. Some images from the candidate image set are
selected, and some are not. (This is already a pre-filtered set of
images that might correspond to the concept.) When we do this, we obtain
a set of images for every selected concept. These are then used for
training the model.
![Additional, potentially useful meta-data from the annotation
procedure. *Blue:* Original annotation ImageNet data collectors have
considered so far. This is wasting a ton of auxiliary supervision.
*Red:* The annotation byproducts may be irrelevant and noisy, but we
should not throw them away, as they can also be informative. We want to
use them to improve our model (e.g., by obtaining new ingredients for
uncertainty estimation).](gfx/05_additional.pdf){#fig:additional
width="0.9\\linewidth"}
However, the action of annotation also contains valuable information in
terms of the mouse track, click location, time annotators took between
clicks, the full time needed to go through the set of images, and many
other factors. We can efficiently collect additional supervision. This
is shown in Figure [5.16](#fig:additional){reference-type="ref"
reference="fig:additional"}. Annotation byproducts can be leveraged in
several ways. The work of Han [@han2023neglected] gives a thorough
demonstration of how these can be used along with task supervision.
![Labeling process of OpenImages Localized Narratives. They try to
collect as much information as possible for every image. The human
annotator speaks out what they see in the image. As they describe every
object, they need to hover over the image part they are talking about.
For every word, we have a corresponding location in the image. They
record the mouse trajectory and voice (1-to-1 correspondence between
what they say and what they point to). Then, the voice recording is
transcribed into text. This results in huge captions compared to COCO.
Figure taken
from [@openimages].](gfx/05_localization.pdf){#fig:localization
width="0.7\\linewidth"}
There are also a lot of parallel efforts from other groups to obtain
additional supervision. One is OpenImages Localized Narratives,
illustrated in Figure [5.17](#fig:localization){reference-type="ref"
reference="fig:localization"}. This is way more information and
supervision compared to traditional image captioning or object
localization datasets. Multimodal annotations are rich in the "How"
information content in general. The annotation contains much new
information. We should consider how to best exploit this information for
the "How" problems. There is not much research on this yet; we are
fortunate to work on this now and make an impact.
### Method-centric vs. Dataset-centric Solutions
There are two general ways to solve problems, both with pros and cons,
detailed below.
**Method-centric solutions.** These have cheap initial costs. One can
use existing benchmarks and training sets. One just needs to devise a
clever new method (e.g., loss, architecture, optimizer, regularizer).
Typically we end up with highly complex methods because all simple
methods have been tried out already. For these complicated methods, we
need a lot of computational resources and human brain time. This
potentially has enormous costs. We usually do not consider brain time
cost as much compared to, e.g., annotation cost. Development is
expensive and requires many runs to validate hyperparameters.
Furthermore, such solutions are upper-limited by the information cap
defined by the benchmark. (What supervision do we have?) As such,
scaling up often fails. The complex tricks do not work anymore. We need
a lot of effort and experiments to prune down the method into something
simpler that scales well.
**Dataset-centric solutions.** These are relatively new and exciting
approaches. They have large initial costs: 10k - 10M EUR for a large-scale
dataset. A few thousand might be enough for a small-scale dataset, but it
becomes really meaningful when the budget goes up to 100k EUR. (This way, we can
obtain a larger dataset and/or better supervision.) Once built, it
brings huge utility to the public. (Everyone can use it to create new
methods; it has a huge impact.) One could also expect good
transferability of pre-trained models to other tasks. (We can pre-train
a model on the dataset and open source it: this is also a huge
contribution to the field. They can just download it without needing to
train it from scratch.) Notably, there is no information cap (only
creativity cap and budget cap). If we have more information available,
the method itself can be quite simple. We can just use a vanilla
loss/architecture, which will often work best (we have seen that simple
methods often work best). Such methods also scale better and are easier
to use, as they usually come with fewer hyperparameters.
We think that simple methods with new kinds of data will bring us the
biggest gain in the future.
[ChatGPT](https://chat.openai.com/chat) proposed the following closing
statement for the book: "Let us harness the power of machine learning to
make a difference. Let us make an impact through machine learning." We
could not agree more.
# Calculus Refresher
We consider a couple of exercises for calculating partial derivatives
with respect to vectors and matrices. One particularly useful object for
calculating partial derivatives is the Kronecker delta, a function of two
variables.
::: definition
Kronecker Delta
$$\delta_{ij} = \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j. \end{cases}$$
:::
Matrix multiplication, which is also important to be able to solve the
exercises later, is defined as follows.
::: definition
Matrix Multiplication Let
$A \in \nR^{m \times n}, B \in \nR^{n \times p}, C \in \nR^{m \times p}$.
$$C = AB \iff C_{ij} = \sum_{k = 1}^n A_{ik} B_{kj}\ \forall i \in \{1, \dots, m\}, j \in \{1, \dots, p\}.$$
:::
Let us consider the gradient operator.
::: definition
Gradient Let $f: \nR^n \rightarrow \nR$. Then
$$\nabla f: \nR^n \rightarrow \nR^n, \left(\nabla f\right)_i = \frac{\partial f}{\partial x_i}.$$
Sometimes, the argument is explicit: $\nabla_x f$. In $\nabla_x f(x)$,
the subscript indicates "which variable" and the argument indicates
"where to evaluate".
**Example**: $f: \nR^n \times \nR^m \times \nR^l \rightarrow \nR$. Then
$\nabla_z f: \nR^n \times \nR^m \times \nR^l \rightarrow \nR^l$. We
often abuse notation and use the variable name to indicate the position
of the argument which we take the gradient. Often this is clear from the
context.
The following notations are questionable. They are both abusing the
abuse of notation.
- $\nabla_z f(x + z, z^2, y)$. This notation is unclear. According to
general use, the $z$ in the subscript should refer to the position.
However, we also have an explicit variable $z$ that can be
confusing. One should either use different symbols as the arguments
and/or one should write everything down nicely using partial
derivatives. Combining the two, one might first declare that $f$ is
a function of variables (placeholders) $x'$, $y'$, and $z'$, and
then write
$$\restr{\frac{\partial f}{\partial x}}{x' = x + z, y' = z^2, z' = y}.$$
- $\nabla_{x + y + z} f(x + y + z, x)$. This notation is incorrect.
One should use the subscript to refer to the position, and again,
either use different symbols as the arguments or write everything
down using partial derivatives.
:::
Lastly, we present a simple rule for taking the partial derivative of a
tensor element with respect to another tensor element.
::: definition
Derivative of a Tensor Element with Respect to Another Element
$$\frac{\partial v_{i_1,\dots,i_n}}{\partial v_{j_1,\dots,j_n}} = \prod_{k = 1}^n\delta_{i_kj_k}.$$
:::
We are now ready to solve the first exercise.
::: task
Gradient of Squared $L_2$ Norm Show that for $x \in \nR^n$,
$$\nabla_x \Vert x \Vert^2 = 2x.$$
:::
$\forall i \in \{1, \dots, n\}$: $$\begin{aligned}
\left(\nabla_x \Vert x \Vert^2\right)_i &= \frac{\partial}{\partial x_i} \sum_{j = 1}^n x_j^2\\
&= \sum_{j = 1}^n \frac{\partial}{\partial x_i} x_j^2\\
&= \sum_{j = 1}^n \delta_{ij} 2x_j\\
&= 2x_i.
\end{aligned}$$
Let us define the trace operator for real matrices.
::: definition
Trace The trace of a square matrix $A \in \nR^{n \times n}, n \in \nN$
is defined as $$\operatorname{tr}(A) = \sum_{i = 1}^n a_{ii}.$$
:::
The second exercise is as follows.
::: task
Gradient of Trace of Matrix Multiplication Show that for
$A \in \nR^{n \times m}, B \in \nR^{m \times n}$,
$$\nabla_A \operatorname{tr}(AB) = B^\top.$$
:::
$\forall i \in \{1, \dots, n\}, j \in \{1, \dots, m\}$: $$\begin{aligned}
\left(\nabla_A \operatorname{tr}(AB)\right)_{ij} &= \frac{\partial}{\partial A_{ij}} \sum_{p = 1}^n (AB)_{pp}\\
&= \frac{\partial}{\partial A_{ij}} \sum_{p = 1}^n \sum_{q = 1}^m A_{pq} B_{qp}\\
&= \sum_{p = 1}^n \sum_{q = 1}^m \frac{\partial}{\partial A_{ij}} A_{pq} B_{qp}\\
&= \sum_{p = 1}^n \sum_{q = 1}^m \delta_{ip} \delta_{jq} B_{qp}\\
&= B_{ji}.
\end{aligned}$$
Our last exercise is to compute the gradient of a quadratic form.
::: task
Gradient of Quadratic Form Show that for
$x \in \nR^n, A \in \nR^{n \times n}$,
$$\nabla_x x^\top A x = (A + A^\top)x.$$
:::
$\forall i \in \{1, \dots, n\}$: $$\begin{aligned}
\left(\nabla_x x^\top A x\right)_i &= \frac{\partial}{\partial x_i} \sum_{p, q = 1}^n x_p A_{pq} x_q\\
&= \sum_{p, q = 1}^n \frac{\partial}{\partial x_i} \left(x_p A_{pq} x_q\right)\\
&= \sum_{p, q = 1}^n \delta_{ip} A_{pq} x_q + \sum_{p, q = 1}^n \delta_{iq} x_p A_{pq}\\
&= \sum_{q = 1}^n A_{iq} x_q + \sum_{p = 1}^n x_p A_{pi}\\
&= (Ax)_i + (A^\top x)_i\\
&= ((A + A^\top)x)_i.
\end{aligned}$$
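These three identities are easy to double-check numerically with finite differences; a short sketch using random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
x = rng.normal(size=n)
A = rng.normal(size=(n, n))
A_nm = rng.normal(size=(n, m))
B = rng.normal(size=(m, n))
eps = 1e-6

# Gradient of ||x||^2: finite differences vs. 2x.
g = np.array([(np.sum((x + eps * np.eye(n)[i]) ** 2) - np.sum(x ** 2)) / eps
              for i in range(n)])
print(np.allclose(g, 2 * x, atol=1e-4))

# Gradient of tr(AB) with respect to A is B^T.
G = np.zeros((n, m))
for i in range(n):
    for j in range(m):
        E = np.zeros((n, m)); E[i, j] = eps
        G[i, j] = (np.trace((A_nm + E) @ B) - np.trace(A_nm @ B)) / eps
print(np.allclose(G, B.T, atol=1e-4))

# Gradient of x^T A x is (A + A^T) x.
gq = np.array([((x + eps * np.eye(n)[i]) @ A @ (x + eps * np.eye(n)[i])
                - x @ A @ x) / eps for i in range(n)])
print(np.allclose(gq, (A + A.T) @ x, atol=1e-3))
```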
[^1]: COCO is collected from Flickr. ImageNet is partly also from Flickr
and other databases.
[^2]: For domain generalization
(Section [2.4.5](#ssec:domain){reference-type="ref"
reference="ssec:domain"}), we never get any annotations from
deployment in reality. We consider the deployment scenario as a
fictitious entity.
[^3]: The task labels are not used for moment matching, only to compute
the task loss.
[^4]: However, we can also come up with counterexamples. When task 1 is
to predict numbers 0-4 on MNIST and task 2 is to predict numbers
5-9, the domain stays the same, but the task changes.
[^5]: For task changes, we also need to change the output head in the
parametric case (e.g., linear probing). It is not needed for the
non-parametric case (kNN) and
CLIP [@https://doi.org/10.48550/arxiv.2103.00020]. In CLIP, we need
no information about the exact target task (zero-shot learning), but
we need an LLM. The information comes from large-scale pretraining.
[^6]: The output layer is always switched for the task accordingly.
Sometimes very shallow output heads are enough (e.g., linear
probing) if we have a strong backbone feature representation.
[^7]: Reducing the amount of information gained from evaluation helps in
not spoiling the test set too much. For example, we might use a
hidden server for benchmarking where only the ranking of submissions
is shown but not the exact results.
[^8]: We explicitly mention the used implementation of ResNet-50 because
there are [subtle
differences](https://stackoverflow.com/questions/67365237/imagenet-pretrained-resnet50-backbones-are-different-between-pytorch-and-tensorf)
between versions.
[^9]: The task label is needed to calculate the loss.
[^10]: This is a more general statement than only considering cross-bias
generalization -- whenever we are presented with a (nearly) diagonal
dataset, we need additional information, and this can happen in any
cross-domain setting.
[^11]: Unless there are a lot of unbiased samples. Then we simply do
ERM, and we basically have ID training.
[^12]: This is somewhat like a metric learning objective for HSIC.
[^13]: This is possible since the diagonal problem is highly ill-posed
and the problem admits a versatile set of solutions.
[^14]: HSIC could also be used as an independence criterion.
[^15]: If one of them is left unspecified, we are missing critical
ingredients.
[^16]: $S = \left\{y \in \nR^{H \times W \times 3} \middle| \Vert y \Vert_p \le \epsilon\right\}$.
[^17]: The projection *can* change this angle.
[^18]: This is a boundary between *intentions*. The method can be used
to construct adversarial examples that also correspond to plausible
OOD domains we might wish to generalize to.
[^19]: Warping refers to the pixel-wise displacements between the two
image meshes.
[^20]: Note, however, that BaRT [@8954476]
([2.15.13](#sssec:bart){reference-type="ref"
reference="sssec:bart"}) works because it has such a large
stochasticity internally.
[^21]: Naive iterative gradient-based optimization would not work, as
the gradients of the individual random transformations are simply
too noisy.
[^22]: This statement holds for arbitrary vectors $y \in \nR^n$.
[^23]: The central bank is directly in control of this through
determining official interest rate policies. Similarly, other policy
rates and asset purchases have a large effect on how prices develop.
[^24]: This is a highly recommended work for those working or wishing to
work in XAI.
[^25]: The field of Human-AI Interaction works on such methods. One
possible way of knowledge exchange is through textual discussions,
as seen in LLMs.
[^26]: Curiously, some explanation methods are also good at Weakly
Supervised Object Localization (WSOL), which aims to answer the
"Where is the object in the image?" question.
[^27]: We might not want to linearize the entire model. Partial
linearization is often used, e.g., in Grad-CAM
([3.5.17](#sssec:gradcam){reference-type="ref"
reference="sssec:gradcam"}) and TCAV
([3.5.13](#sssec:tcav){reference-type="ref"
reference="sssec:tcav"}).
[^28]: This is the smallest possible perturbation with bit depth $8$ --
a coarse approximation of the gradient.
[^29]: Strictly speaking, the linearization is not a simplification when
considering infinitesimal perturbations. However, such perturbations
are fictitious, and if one wants to obtain the *net* changes in the
network output, they have to consider small $\delta$ values that are
not exact anymore.
[^30]: In practice, we just choose a small $\delta$ value for the
tangent plane to stay faithful to the function. Another choice, as
we have seen before, is to consider
$\nE_z\left[\frac{\partial}{\partial x_i}f(x + z)\right]$ as the
attribution score for pixel $i$.
[^31]: Again, we can consider the attribution score with or without
$\delta$. If one includes it, one must keep it a very small number
in practice for the tangent plane to stay faithful to the function.
This measures the approximate absolute expected change in the
output. If one does not include it (this is the usual choice), the
score measures the *relative* expected change in the output.
[^32]: A black image baseline is used in the paper. According to the
authors, using a black image results in cleaner visualization of the
"edge" features than using random noise.
[^33]: Turning a feature off means that the features receive the
baseline value, which is not necessarily zero.
[^34]: The work considers image data. $P(x_i \mid x_{\setminus i})$:
distribution of feature $i$ given all the other features in the
image.
[^35]: Subset $z$ must include $i$, so the minimal size of $z$ is 1.
[^36]: If a function values a feature a lot, then that is also reflected
in the Shapley value.
[^37]: Note the low resolution. To overlay the score map on images,
further upscaling is needed.
[^38]: Nowadays, many people are using
Transformer-based [@https://doi.org/10.48550/arxiv.1706.03762]
baselines for doing semantic segmentation. The convolutional
baselines are a bit old-fashioned but are still widely used.
[^39]: In CALM, the intermediate feature map elements also do not have a
one-to-one correspondence to the input pixels. As we will see in
Section [3.5.19](#sssec:calm){reference-type="ref"
reference="sssec:calm"}, however, CALM resolves one of the many
problems CAM has (namely, the unintuitive normalization of the
attribution map).
[^40]: Unexpected, large gains.
[^41]: What we mean by "derivative" is not mathematical derivatives but
computations that are derived from the probability tensors.
[^42]: The paper back in 2017 was not rejected for just making
qualitative evaluations. The field has grown and matured a lot since
then -- today, it is always a requirement to provide proper
quantitative evaluations.
[^43]: The authors likely refer to WSOL performance. However, that is
just a coarse proxy for explainability methods and does not directly
measure the quality of explanations in any way.
[^44]: The model is, of course, sound to its own behavior. However, we
cannot treat a system as its own explanation. That kills the
purpose.
[^45]: Without question. Full stop.
[^46]: A simple linear classifier cannot perform better than random
guessing.
[^47]: This is understandable -- it is multiplying gradient-based
attribution with the pixel value differences between the image and
baseline (a black image -- MNIST). If we multiply the gradient-based
attribution with the image of this 0 number, we will see a 0 in the
attribution map.
[^48]: As we will see in
[3.7.8](#sssec:missingness_bias){reference-type="ref"
reference="sssec:missingness_bias"}, while this is generally true,
there are cases where we *introduce* information by encoding
missingness.
[^49]: The $L_2$-regularized image can still be very noisy, just a bit
less than the original because of the reduction in magnitude.
[^50]: This algorithm aims to find the optimal LR without
cross-validation through another GD algorithm. Optimal here means
good for generalization to the held-out validation set. For this, we
also need backpropagation through the optimization procedure.
[^51]: It is global because one can use this linearization for *any*
test sample.
[^52]: This is because the linearization the method admits is very
similar to the one of Integrated Gradients.
[^53]: Strictly speaking, IF considers the globally optimal parameter
configuration in the formulation.
[^54]: Interestingly, the FastIF paper considers the original IF
definition without flipping the sign.
[^55]: Suppose that a self-driving car killed a pedestrian. We need to
find out which data sample was responsible for the incorrect
(sequence of) predictions. Remove-and-retrain is not the end goal in
this case, we do not care about how well we approximate it or
whether we even approximate it at all.
[^56]: Assuming that the training set has many correctly labeled data
and a few mislabeled data points (i.e., there is no systematic
mislabeling).
[^57]: For example, we might accept the model's prediction when the
provided confidence estimate is above a certain tuned threshold.
[^58]: Self-driving and healthcare usually come in pairs when discussing
high-stake ML use cases.
[^59]: In many cases, we could equivalently say that we have aleatoric
uncertainty when the variance of $Y \mid X = x$ is non-zero.
However, if we want to be precise, we have to consider that variance
is undefined for *nominal/categorical* variables.
[^60]: For general variables, multimodality is perhaps the most extreme
case of aleatoric uncertainty. However, for discrete distributions
(corresponding to our categorical variable $Y \mid X = x$ here),
multimodality is synonymous with having multiple possibilities,
which is synonymous with having a non-zero entropy.
[^61]: This is not true for the handwriting case, where even if we see
the handwritten digits in real life, we might be unable to tell a
$1$ apart from a $7$.
[^62]: We will soon see the key difference between epistemic and
aleatoric uncertainty: epistemic uncertainty *can* be reduced to 0
with an infinite amount of data, sampled from the right distribution
$P(X)$ (considering underexplored regions, too).
[^63]: We still want to stay on the data manifold -- sampling from
underexplored regions that are very implausible is not useful.
[^64]: It could also be treated as a separate source of uncertainty when
considering a different definition of epistemic uncertainty.
[^65]: Most existing uncertainty quantification methods also do not
model misspecification as an additional source of uncertainty.
[^66]: This depends on what we consider a "model". If we consider the
models as the parameters, then this statement is subject to model
identifiability. For DNNs, because of weight space symmetries and
other factors, many models can correspond to the same function. If
we equate models to the functions, then this statement always holds.
[^67]: We emphasize that this only holds under the simplifying
assumptions we (and many other authors) make in this book; namely
that the generative model is contained in the effective function
space.
[^68]: New types of realistic OOD data (e.g., counterfactual data) did
not matter so much before, so they were not collected. This is
precisely the reason they *stayed* OOD. With the rising popularity
of the field of ML robustness, these samples also matter a lot
(refer back to OOD generalization), so we want to perform well on
these samples, too.
[^69]: For example, if we have two class-conditional Gaussians, we
necessarily have variance/uncertainty in the largely overlapping
region, but it reduces considerably outside of this region.
[^70]: These only approximate the true aleatoric and epistemic
uncertainties. Their faithfulness is subject to evaluation.
[^71]: Strictly speaking, the negative loss functions fulfill this
criterion, as scores are meant to be *maximized*.
[^72]: Accuracy is usually highly correlated with the negative loss.
However, not all calibration metrics have such a high correlation
with accuracy.
[^73]: Some people refer to scores even when lower is better. To keep
a unified overview in this book, we refer to scores when we wish to
'maximize' and to losses when we wish to 'minimize'. There is
a trivial correspondence between scores and losses when taking
reciprocals or negatives.
[^74]: The reader can easily convince themselves that the perplexity is
independent of the common base of the exponential and logarithm.
[^75]: If we do not take an expectation but still have mixed supervision
(different labels for the same input $x$), the lowest possible value
is, again, non-zero.
[^76]: The NLL loss and the multi-class Brier score are also strictly
proper for aleatoric uncertainty (i.e., the recovery of
$P(Y \mid X = x)$), as we will see in
Section [4.13.2](#ssec:au_classification){reference-type="ref"
reference="ssec:au_classification"}.
[^77]: Curious readers might find the phenomenon of benign overfitting
in the highly overparameterized regime interesting.
[^78]: ECE here is calculated with 15 bins. We can already see that
$M = 10$ is not consistently applied through papers, though it is a
popular choice.
[^79]: Here, we also need the model posterior we obtain to represent a
diverse set of plausible models.
[^80]: When considering the Dirac measure, one should write
$\int P(y \mid x, \theta) d\delta(\theta - \theta^{(m)})$, which is
a rigorous form of Lebesgue integration.
[^81]: More convincing arguments are that Gaussianity makes integrals
more tractable and that the $L_2$ regularization a Gaussian prior
imposes is widely known to work well.
[^82]: This is also of measure 0, just like the ensemble posterior
approximation.
[^83]: The two line segments are equally probable. Therefore, the
piecewise density values differ when $\phi$ is not equidistant to
the two parameters.
[^84]: The corresponding priors (of multiple experiments) are specified
in [@https://doi.org/10.48550/arxiv.1902.02476], e.g., $L_2$
regularization.
[^85]: Bayesians usually claim that at least they are open with their
assumptions. Frequentists *also use priors*, but implicitly, which
makes them less principled in the Bayesian sense.
[^86]: The x-axis label reads "kernel distance" but is actually the
kernel similarity. Distance is low when similarity is high.
[^87]: $y$ is often used to denote both a one-hot vector of a class and
the class label. This is just an abuse of notation.
# Convex Optimization: Algorithms and Complexity / Introduction {#intro}
The central objects of our study are convex functions and convex sets in
$\mathbb{R}^n$.
::: definition
A set $\mathcal{X}\subset \mathbb{R}^n$ is said to be convex if it
contains all of its segments, that is
$$\forall (x,y,\gamma) \in \mathcal{X}\times \mathcal{X}\times [0,1], \; (1-\gamma) x + \gamma y \in \mathcal{X}.$$
A function $f : \mathcal{X} \rightarrow \mathbb{R}$ is said to be convex
if it always lies below its chords, that is
$$\forall (x,y,\gamma) \in \mathcal{X}\times \mathcal{X}\times [0,1], \; f((1-\gamma) x + \gamma y) \leq (1-\gamma)f(x) + \gamma f(y) .$$
:::
We are interested in algorithms that take as input a convex set
$\mathcal{X}$ and a convex function $f$ and output an approximate
minimum of $f$ over $\mathcal{X}$. We write compactly the problem of
finding the minimum of $f$ over $\mathcal{X}$ as $$\begin{aligned}
& \mathrm{min.} \; f(x) \\
& \text{s.t.} \; x \in \mathcal{X}.
\end{aligned}$$ In the following we will make more precise how the set
of constraints $\mathcal{X}$ and the objective function $f$ are
specified to the algorithm. Before that we proceed to give a few
important examples of convex optimization problems in machine learning.
## Some convex optimization problems in machine learning {#sec:mlapps}
Many fundamental convex optimization problems in machine learning take
the following form: $$\label{eq:veryfirst}
\underset{x \in \mathbb{R}^n}{\mathrm{min.}} \; \sum_{i=1}^m f_i(x) + \lambda \mathcal{R}(x) ,$$
where the functions $f_1, \hdots, f_m, \mathcal{R}$ are convex and
$\lambda \geq 0$ is a fixed parameter. The interpretation is that
$f_i(x)$ represents the cost of using $x$ on the $i^{th}$ element of
some data set, and $\mathcal{R}(x)$ is a regularization term which
enforces some "simplicity" in $x$. We discuss now major instances of
[\[eq:veryfirst\]](#eq:veryfirst){reference-type="eqref"
reference="eq:veryfirst"}. In all cases one has a data set of the form
$(w_i, y_i) \in \mathbb{R}^n \times \mathcal{Y}, i=1, \hdots, m$ and the
cost function $f_i$ depends only on the pair $(w_i, y_i)$. We refer to
[@HTF01; @SS02; @SSS14] for more details on the origin of these
important problems. The mere objective of this section is to expose the
reader to a few concrete convex optimization problems which are
routinely solved.
In classification one has $\mathcal{Y}= \{-1,1\}$. Taking
$f_i(x) = \max(0, 1- y_i x^{\top} w_i)$ (the so-called hinge loss) and
$\mathcal{R}(x) = \|x\|_2^2$ one obtains the SVM problem. On the other
hand taking $f_i(x) = \log(1 + \exp(-y_i x^{\top} w_i) )$ (the logistic
loss) and again $\mathcal{R}(x) = \|x\|_2^2$ one obtains the
(regularized) logistic regression problem.
In regression one has $\mathcal{Y}= \mathbb{R}$. Taking
$f_i(x) = (x^{\top} w_i - y_i)^2$ and $\mathcal{R}(x) = 0$ one obtains
the vanilla least-squares problem which can be rewritten in vector
notation as
$$\underset{x \in \mathbb{R}^n}{\mathrm{min.}} \; \|W x - Y\|_2^2 ,$$
where $W \in \mathbb{R}^{m \times n}$ is the matrix with $w_i^{\top}$ on
the $i^{th}$ row and $Y = (y_1, \hdots, y_m)^{\top}$. With
$\mathcal{R}(x) = \|x\|_2^2$ one obtains the ridge regression problem,
while with $\mathcal{R}(x) = \|x\|_1$ this is the LASSO problem
[@Tib96].
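To make the generic objective above concrete, the sketch below (an illustration of ours, with placeholder names and toy conventions, not code from the text) evaluates $\sum_{i=1}^m f_i(x) + \lambda \mathcal{R}(x)$ for the hinge, logistic, and squared losses with either an $\ell_2$ or $\ell_1$ regularizer.

```python
# Illustrative evaluation of sum_i f_i(x) + lambda * R(x) for the losses
# discussed above; all function and variable names are placeholders.
import numpy as np

def objective(x, W, y, loss="hinge", reg="l2", lam=1.0):
    """W has the data points w_i as rows, y holds the labels/targets."""
    margins = W @ x
    if loss == "hinge":                         # SVM, y_i in {-1, +1}
        data_term = np.maximum(0.0, 1.0 - y * margins).sum()
    elif loss == "logistic":                    # logistic regression, y_i in {-1, +1}
        data_term = np.log1p(np.exp(-y * margins)).sum()
    else:                                       # squared loss, y_i real-valued
        data_term = ((margins - y) ** 2).sum()
    reg_term = np.dot(x, x) if reg == "l2" else np.abs(x).sum()
    return data_term + lam * reg_term
```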
Our last two examples are of a slightly different flavor. In particular
the design variable $x$ is now best viewed as a matrix, and thus we
denote it by a capital letter $X$. The sparse inverse covariance
estimation problem can be written as follows, given some empirical
covariance matrix $Y$, $$\begin{aligned}
& \mathrm{min.} \; \mathrm{Tr}(X Y) - \mathrm{logdet}(X) + \lambda \|X\|_1 \\
& \text{s.t.} \; X \in \mathbb{R}^{n \times n}, X^{\top} = X, X \succeq 0 .
\end{aligned}$$ Intuitively the above problem is simply a regularized
maximum likelihood estimator (under a Gaussian assumption).
Finally we introduce the convex version of the matrix completion
problem. Here our data set consists of observations of some of the
entries of an unknown matrix $Y$, and we want to "complete\" the
unobserved entries of $Y$ in such a way that the resulting matrix is
"simple\" (in the sense that it has low rank). After some massaging (see
[@CR09]) the (convex) matrix completion problem can be formulated as
follows: $$\begin{aligned}
& \mathrm{min.} \; \mathrm{Tr}(X) \\
& \text{s.t.} \; X \in \mathbb{R}^{n \times n}, X^{\top} = X, X \succeq 0, X_{i,j} = Y_{i,j} \; \text{for} \; (i,j) \in \Omega ,
\end{aligned}$$ where $\Omega \subset [n]^2$ and
$(Y_{i,j})_{(i,j) \in \Omega}$ are given.
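For readers who wish to experiment, one possible way to pose this problem is through a modeling library such as CVXPY; the sketch below is our own illustration with toy data (and the solver behind `solve()` must support semidefinite constraints), not something prescribed by the text.

```python
# Toy sketch of the convex matrix completion problem stated above, using
# CVXPY purely as an illustrative modeling tool.
import cvxpy as cp
import numpy as np

n = 5
Y = np.outer(np.arange(1, n + 1), np.arange(1, n + 1)).astype(float)  # rank-1 "ground truth"
Omega = [(0, 1), (1, 3), (2, 2), (4, 0)]                              # observed entries

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, j] == Y[i, j] for (i, j) in Omega]
problem = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
problem.solve()
print(np.round(X.value, 2))
```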
## Basic properties of convexity
A basic result about convex sets that we shall use extensively is the
Separation Theorem.
::: theorem
Let $\mathcal{X} \subset \mathbb{R}^n$ be a closed convex set, and
$x_0 \in \mathbb{R}^n \setminus \mathcal{X}$. Then, there exists
$w \in \mathbb{R}^n$ and $t \in \mathbb{R}$ such that
$$w^{\top} x_0 < t, \; \text{and} \; \forall x \in \mathcal{X}, w^{\top} x \geq t.$$
:::
Note that if $\mathcal{X}$ is not closed then one can only guarantee
that $w^{\top} x_0 \leq w^{\top} x, \forall x \in \mathcal{X}$ (and
$w \neq 0$). This immediately implies the Supporting Hyperplane Theorem
($\partial \mathcal{X}$ denotes the boundary of $\mathcal{X}$, that is
the closure without the interior):
::: theorem
Let $\mathcal{X} \subset \mathbb{R}^n$ be a convex set, and
$x_0 \in \partial \mathcal{X}$. Then, there exists
$w \in \mathbb{R}^n, w \neq 0$ such that
$$\forall x \in \mathcal{X}, w^{\top} x \geq w^{\top} x_0.$$
:::
We introduce now the key notion of *subgradients*.
::: definition
Let $\mathcal{X} \subset \mathbb{R}^n$, and
$f : \mathcal{X} \rightarrow \mathbb{R}$. Then $g \in \mathbb{R}^n$ is a
subgradient of $f$ at $x \in \mathcal{X}$ if for any $y \in \mathcal{X}$
one has $$f(x) - f(y) \leq g^{\top} (x - y) .$$ The set of subgradients
of $f$ at $x$ is denoted $\partial f (x)$.
:::
To put it differently, for any $x \in \mathcal{X}$ and
$g \in \partial f(x)$, $f$ is above the linear function
$y \mapsto f(x) + g^{\top} (y-x)$. The next result shows (essentially)
that convex functions always admit subgradients.
::: proposition
[]{#prop:existencesubgradients label="prop:existencesubgradients"} Let
$\mathcal{X} \subset \mathbb{R}^n$ be convex, and
$f : \mathcal{X} \rightarrow \mathbb{R}$. If
$\forall x \in \mathcal{X}, \partial f(x) \neq \emptyset$ then $f$ is
convex. Conversely if $f$ is convex then for any
$x \in \mathrm{int}(\mathcal{X}), \partial f(x) \neq \emptyset$.
Furthermore if $f$ is convex and differentiable at $x$ then
$\nabla f(x) \in \partial f(x)$.
:::
Before going to the proof we recall the definition of the epigraph of a
function $f : \mathcal{X} \rightarrow \mathbb{R}$:
$$\mathrm{epi}(f) = \{(x,t) \in \mathcal{X} \times \mathbb{R}: t \geq f(x) \} .$$
It is obvious that a function is convex if and only if its epigraph is a
convex set.
::: proof
*Proof.* The first claim is almost trivial: let
$g \in \partial f((1-\gamma) x + \gamma y)$, then by definition one has
$$\begin{aligned}
& & f((1-\gamma) x + \gamma y) \leq f(x) + \gamma g^{\top} (y - x) , \\
& & f((1-\gamma) x + \gamma y) \leq f(y) + (1-\gamma) g^{\top} (x - y) ,
\end{aligned}$$ which clearly shows that $f$ is convex by adding the two
(appropriately rescaled) inequalities.
Now let us prove that a convex function $f$ has subgradients in the
interior of $\mathcal{X}$. We build a subgradient by using a supporting
hyperplane to the epigraph of the function. Let $x \in \mathcal{X}$.
Then clearly $(x,f(x)) \in \partial \mathrm{epi}(f)$, and
$\mathrm{epi}(f)$ is a convex set. Thus by using the Supporting
Hyperplane Theorem, there exists
$(a,b) \in \mathbb{R}^n \times \mathbb{R}$ such that
$$\label{eq:supphyp}
a^{\top} x + b f(x) \geq a^{\top} y + b t, \forall (y,t) \in \mathrm{epi}(f) .$$
Clearly, by letting $t$ tend to infinity, one can see that $b \leq 0$.
Now let us assume that $x$ is in the interior of $\mathcal{X}$. Then for
$\varepsilon> 0$ small enough, $y=x + \varepsilon a \in \mathcal{X}$,
which implies that $b$ cannot be equal to $0$ (recall that if $b=0$ then
necessarily $a \neq 0$, which allows one to conclude by contradiction). Thus
rewriting [\[eq:supphyp\]](#eq:supphyp){reference-type="eqref"
reference="eq:supphyp"} for $t=f(y)$ one obtains
$$f(x) - f(y) \leq \frac{1}{|b|} a^{\top} (x - y) .$$ Thus
$a / |b| \in \partial f(x)$ which concludes the proof of the second
claim.
Finally let $f$ be a convex and differentiable function. Then by
definition: $$\begin{aligned}
f(y) & \geq & \frac{f((1-\gamma) x + \gamma y) - (1- \gamma) f(x)}{\gamma} \\
& = & f(x) + \frac{f(x + \gamma (y - x)) - f(x)}{\gamma} \\
& \underset{\gamma \to 0}{\to} & f(x) + \nabla f(x)^{\top} (y-x),
\end{aligned}$$ which shows that $\nabla f(x) \in \partial f(x)$. ◻
:::
In several cases of interest the set of constraints can have an empty
interior, in which case the above proposition does not yield any
information. However it is easy to replace $\mathrm{int}(\mathcal{X})$
by $\mathrm{ri}(\mathcal{X})$, the relative interior of $\mathcal{X}$,
which is defined as the interior of $\mathcal{X}$ when we view it as a
subset of the affine subspace it generates. Other notions of convex
analysis will prove to be useful in some parts of this text. In
particular the notion of *closed convex functions* is convenient to
exclude pathological cases: these are the convex functions with closed
epigraphs. Sometimes it is also useful to consider the extension of a
convex function $f: \mathcal{X}\rightarrow \mathbb{R}$ to a function
from $\mathbb{R}^n$ to $\overline{\mathbb{R}}$ by setting
$f(x)= + \infty$ for $x \not\in \mathcal{X}$. In convex analysis one
uses the term *proper convex function* to denote a convex function with
values in $\mathbb{R}\cup \{+\infty\}$ such that there exists
$x \in \mathbb{R}^n$ with $f(x) < +\infty$. **From now on all convex
functions will be closed, and if necessary we consider also their proper
extension.** We refer the reader to [@Roc70] for an extensive discussion
of these notions.
## Why convexity?
The key to the algorithmic success in minimizing convex functions is
that these functions exhibit a *local to global* phenomenon. We have
already seen one instance of this in Proposition
[\[prop:existencesubgradients\]](#prop:existencesubgradients){reference-type="ref"
reference="prop:existencesubgradients"}, where we showed that
$\nabla f(x) \in \partial f(x)$: the gradient $\nabla f(x)$ contains a
priori only local information about the function $f$ around $x$ while
the subdifferential $\partial f(x)$ gives a global information in the
form of a linear lower bound on the entire function. Another instance of
this local to global phenomenon is that local minima of convex functions
are in fact global minima:
::: proposition
Let $f$ be convex. If $x$ is a local minimum of $f$ then $x$ is a global
minimum of $f$. Furthermore this happens if and only if
$0 \in \partial f(x)$.
:::
::: proof
*Proof.* Clearly $0 \in \partial f(x)$ if and only if $x$ is a global
minimum of $f$. Now assume that $x$ is local minimum of $f$. Then for
$\gamma$ small enough one has for any $y$,
$$f(x) \leq f((1-\gamma) x + \gamma y) \leq (1-\gamma) f(x) + \gamma f(y) ,$$
which implies $f(x) \leq f(y)$ and thus $x$ is a global minimum of
$f$. ◻
:::
The nice behavior of convex functions will allow for very fast
algorithms to optimize them. This alone would not be sufficient to
justify the importance of this class of functions (after all constant
functions are pretty easy to optimize). However it turns out that
surprisingly many optimization problems admit a convex (re)formulation.
The excellent book [@BV04] describes in great detail the various
methods that one can employ to uncover the convex aspects of an
optimization problem. We will not repeat these arguments here, but we
have already seen that many famous machine learning problems (SVM, ridge
regression, logistic regression, LASSO, sparse covariance estimation,
and matrix completion) are formulated as convex problems.
We conclude this section with a simple extension of the optimality
condition "$0 \in \partial f(x)$" to the case of constrained
optimization. We state this result in the case of a differentiable
function for sake of simplicity.
::: proposition
[]{#prop:firstorder label="prop:firstorder"} Let $f$ be convex and
$\mathcal{X}$ a closed convex set on which $f$ is differentiable. Then
$$x^* \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X}} f(x) ,$$ if and
only if one has
$$\nabla f(x^*)^{\top}(x^*-y) \leq 0, \forall y \in \mathcal{X}.$$
:::
::: proof
*Proof.* The "if\" direction is trivial by using that a gradient is also
a subgradient. For the "only if\" direction it suffices to note that if
$\nabla f(x)^{\top} (y-x) < 0$, then $f$ is locally decreasing around
$x$ on the line to $y$ (simply consider $h(t) = f(x + t (y-x))$ and note
that $h'(0) = \nabla f(x)^{\top} (y-x)$). ◻
:::
## Black-box model {#sec:blackbox}
We now describe our first model of "input\" for the objective function
and the set of constraints. In the black-box model we assume that we
have unlimited computational resources, the set of constraints
$\mathcal{X}$ is known, and the objective function
$f: \mathcal{X}\rightarrow \mathbb{R}$ is unknown but can be accessed
through queries to *oracles*:
- A zeroth order oracle takes as input a point $x \in \mathcal{X}$ and
outputs the value of $f$ at $x$.
- A first order oracle takes as input a point $x \in \mathcal{X}$ and
outputs a subgradient of $f$ at $x$.
In this context we are interested in understanding the *oracle
complexity* of convex optimization, that is how many queries to the
oracles are necessary and sufficient to find an
$\varepsilon$-approximate minimum of a convex function. To show an upper
bound on the oracle complexity we need to propose an algorithm, while
lower bounds are obtained by information theoretic reasoning (we need to
argue that if the number of queries is "too small\" then we don't have
enough information about the function to identify an
$\varepsilon$-approximate solution).
From a mathematical point of view, the strength of the black-box model
is that it will allow us to derive a *complete* theory of convex
optimization, in the sense that we will obtain matching upper and lower
bounds on the oracle complexity for various subclasses of interesting
convex functions. While the model by itself does not limit our
computational resources (for instance any operation on the constraint
set $\mathcal{X}$ is allowed) we will of course pay special attention to
the algorithms' *computational complexity* (i.e., the number of
elementary operations that the algorithm needs to do). We will also be
interested in the situation where the set of constraints $\mathcal{X}$ is
unknown and can only be accessed through a *separation oracle*: given
$x \in \mathbb{R}^n$, it outputs either that $x$ is in $\mathcal{X}$, or
if $x \not\in \mathcal{X}$ then it outputs a separating hyperplane
between $x$ and $\mathcal{X}$.
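To fix ideas, the black-box access pattern can be phrased as a few plain functions. The sketch below is entirely our own illustration, instantiated for $f(x) = \|x\|_1$ with the Euclidean ball of radius $R$ as the constraint set; it is one possible rendering of the three oracles, not a definitive implementation.

```python
# Illustrative oracles for f(x) = ||x||_1 over the Euclidean ball of radius R.
import numpy as np

def zeroth_order_oracle(x):
    """Value of f at x."""
    return np.abs(x).sum()

def first_order_oracle(x):
    """A subgradient of f at x; sign(x) is one valid choice (0 at zero entries)."""
    return np.sign(x)

def separation_oracle(x, R=1.0):
    """None if x is feasible; otherwise (w, t) with w^T y <= t for all feasible y
    and w^T x > t, i.e., a separating hyperplane."""
    norm = np.linalg.norm(x)
    if norm <= R:
        return None
    w = x / norm
    return w, R
```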
The black-box model was essentially developed in the early days of
convex optimization (in the Seventies) with [@NY83] being still an
important reference for this theory (see also [@Nem95]). In recent
years this model and the corresponding algorithms have regained a lot of
popularity, essentially for two reasons:
- It is possible to develop algorithms with dimension-free oracle
complexity which is quite attractive for optimization problems in
very high dimension.
- Many algorithms developed in this model are robust to noise in the
output of the oracles. This is especially interesting for stochastic
optimization, and very relevant to machine learning applications. We
will explore this in detail in Chapter
[6](#rand){reference-type="ref" reference="rand"}.
Chapter [2](#finitedim){reference-type="ref" reference="finitedim"},
Chapter [3](#dimfree){reference-type="ref" reference="dimfree"} and
Chapter [4](#mirror){reference-type="ref" reference="mirror"} are
dedicated to the study of the black-box model (noisy oracles are
discussed in Chapter [6](#rand){reference-type="ref" reference="rand"}).
We do not cover the setting where only a zeroth order oracle is
available, also called derivative free optimization, and we refer to
[@CSV09; @ABM11] for further references on this.
## Structured optimization {#sec:structured}
The black-box model described in the previous section seems extremely
wasteful for the applications we discussed in Section
[1.1](#sec:mlapps){reference-type="ref" reference="sec:mlapps"}.
Consider for instance the LASSO objective:
$x \mapsto \|W x - y\|_2^2 + \|x\|_1$. We know this function *globally*,
and assuming that we can only make local queries through oracles seems
like an artificial constraint for the design of algorithms. Structured
optimization tries to address this observation. Ultimately one would
like to take into account the global structure of both $f$ and
$\mathcal{X}$ in order to propose the most efficient optimization
procedure. An extremely powerful hammer for this task is the class of
Interior Point Methods. We will describe this technique in Chapter
[5](#beyond){reference-type="ref" reference="beyond"} alongside
other more recent techniques such as FISTA or Mirror Prox.
We briefly describe now two classes of optimization problems for which
we will be able to exploit the structure very efficiently, these are the
LPs (Linear Programs) and SDPs (Semi-Definite Programs). [@BN01]
describe a more general class of Conic Programs but we will not go in
that direction here.
The class LP consists of problems where $f(x) = c^{\top} x$ for some
$c \in \mathbb{R}^n$, and
$\mathcal{X} = \{x \in \mathbb{R}^n : A x \leq b \}$ for some
$A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
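As an aside (not from the text), a small LP of this form can be handed to an off-the-shelf solver. The snippet below uses SciPy's `linprog` purely as an illustration; the data are arbitrary toy values.

```python
# Illustrative LP: minimize c^T x subject to A x <= b, solved with SciPy.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],     # -x1 <= 0  (i.e., x1 >= 0)
              [0.0, -1.0],     # -x2 <= 0
              [1.0, 1.0]])     #  x1 + x2 <= 1
b = np.array([0.0, 0.0, 1.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))  # free variables; only A x <= b
print(res.x)   # the minimizer is the vertex (0, 0)
```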
The class SDP consists of problems where the optimization variable is a
symmetric matrix $X \in \mathbb{R}^{n \times n}$. Let $\mathbb{S}^n$ be
the space of $n\times n$ symmetric matrices (respectively
$\mathbb{S}^n_+$ is the space of positive semi-definite matrices), and
let $\langle \cdot, \cdot \rangle$ be the Frobenius inner product
(recall that it can be written as
$\langle A, B \rangle = \mathrm{Tr}(A^{\top} B)$). In the class SDP the
problems are of the following form: $f(X) = \langle X, C \rangle$ for
some $C \in \mathbb{R}^{n \times n}$, and
$\mathcal{X} = \{X \in \mathbb{S}^n_+ : \langle X, A_i \rangle \leq b_i, i \in \{1, \hdots, m\} \}$
for some $A_1, \hdots, A_m \in \mathbb{R}^{n \times n}$ and
$b \in \mathbb{R}^m$. Note that the matrix completion problem described
in Section [1.1](#sec:mlapps){reference-type="ref"
reference="sec:mlapps"} is an example of an SDP.
## Overview of the results and disclaimer
The overarching aim of this monograph is to present the main complexity
theorems in convex optimization and the corresponding algorithms. We
focus on five major results in convex optimization which give the
overall structure of the text: the existence of efficient cutting-plane
methods with optimal oracle complexity (Chapter
[2](#finitedim){reference-type="ref" reference="finitedim"}), a complete
characterization of the relation between first order oracle complexity
and curvature in the objective function (Chapter
[3](#dimfree){reference-type="ref" reference="dimfree"}), first order
methods beyond Euclidean spaces (Chapter
[4](#mirror){reference-type="ref" reference="mirror"}), non-black box
methods (such as interior point methods) can give a quadratic
improvement in the number of iterations with respect to optimal
black-box methods (Chapter [5](#beyond){reference-type="ref"
reference="beyond"}), and finally noise robustness of first order
methods (Chapter [6](#rand){reference-type="ref" reference="rand"}).
Table [1.37](#table){reference-type="ref" reference="table"} can be used
as a quick reference to the results proved in Chapter
[2](#finitedim){reference-type="ref" reference="finitedim"} to Chapter
[5](#beyond){reference-type="ref" reference="beyond"}, as well as some
of the results of Chapter [6](#rand){reference-type="ref"
reference="rand"} (this last chapter is the most relevant to machine
learning but the results are also slightly more specific, which makes them
harder to summarize).
An important disclaimer is that the above selection leaves out methods
derived from duality arguments, as well as the two most popular research
avenues in convex optimization: (i) using convex optimization in
non-convex settings, and (ii) practical large-scale algorithms. Entire
books have been written on these topics, and new books have yet to be
written on the impressive collection of new results obtained for both
(i) and (ii) in the past five years.
A few of the blatant omissions regarding (i) include (a) the theory of
submodular optimization (see [@Bac13]), (b) convex relaxations of
combinatorial problems (a short example is given in Section
[6.6](#sec:convexrelaxation){reference-type="ref"
reference="sec:convexrelaxation"}), and (c) methods inspired from convex
optimization for non-convex problems such as low-rank matrix
factorization (see e.g. [@JNS13] and references therein), neural
networks optimization, etc.
With respect to (ii) the most glaring omissions include (a) heuristics
(the only heuristic briefly discussed here is the non-linear conjugate
gradient in Section [2.4](#sec:CG){reference-type="ref"
reference="sec:CG"}), (b) methods for distributed systems, and (c)
adaptivity to unknown parameters. Regarding (a) we refer to [@NW06]
where the most practical algorithms are discussed in great detail
(e.g., quasi-Newton methods such as BFGS and L-BFGS, primal-dual
interior point methods, etc.). The recent survey [@BPCPE11] discusses
the alternating direction method of multipliers (ADMM) which is a
popular method to address (b). Finally (c) is a subtle and important
issue. In the entire monograph the emphasis is on presenting the
algorithms and proofs in the simplest way, and thus for sake of
convenience we assume that the relevant parameters describing the
regularity and curvature of the objective function (Lipschitz constant,
smoothness constant, strong convexity parameter) are known and can be
used to tune the algorithm's own parameters. Line search is a powerful
technique to replace the knowledge of these parameters and it is heavily
used in practice, see again [@NW06]. We observe however that from a
theoretical point of view (c) is only a matter of logarithmic factors as
one can always run in parallel several copies of the algorithm with
different guesses for the values of the parameters[^1]. Overall the
attitude of this text with respect to (ii) is best summarized by a quote
of Thomas Cover: "theory is the first term in the Taylor series of
practice", [@Cov92].
**Notation.** We always denote by $x^*$ a point in $\mathcal{X}$ such
that $f(x^*) = \min_{x \in \mathcal{X}} f(x)$ (note that the
optimization problem under consideration will always be clear from the
context). In particular we always assume that $x^*$ exists. For a vector
$x \in \mathbb{R}^n$ we denote by $x(i)$ its $i^{th}$ coordinate. The
dual of a norm $\|\cdot\|$ (defined later) will be denoted either
$\|\cdot\|_*$ or $\|\cdot\|^*$ (depending on whether the norm already
comes with a subscript). Other notation is standard (e.g.,
$\mathrm{I}_n$ for the $n \times n$ identity matrix, $\succeq$ for the
positive semi-definite order on matrices, etc).
::: center
::: {#table}
| $f$ | Algorithm | Rate | \# Iter | Cost/iter |
|-----|-----------|------|---------|-----------|
| non-smooth | center of gravity | $\exp\left(-\frac{t}{n}\right)$ | $n \log\left(\frac{1}{\varepsilon}\right)$ | 1 $\nabla$, 1 $n$-dim $\int$ |
| non-smooth | ellipsoid method | $\frac{R}{r} \exp\left(-\frac{t}{n^2}\right)$ | $n^2 \log\left(\frac{R}{r \varepsilon}\right)$ | 1 $\nabla$, mat-vec $\times$ |
| non-smooth | Vaidya | $\frac{R n}{r} \exp\left(-\frac{t}{n}\right)$ | $n \log\left(\frac{R n}{r \varepsilon}\right)$ | 1 $\nabla$, mat-mat $\times$ |
| quadratic | CG | exact; $\exp\left(-\frac{t}{\kappa}\right)$ | $n$; $\kappa \log\left(\frac{1}{\varepsilon}\right)$ | 1 $\nabla$ |
| non-smooth, Lipschitz | PGD | $R L /\sqrt{t}$ | $R^2 L^2 /\varepsilon^2$ | 1 $\nabla$, 1 proj. |
| smooth | PGD | $\beta R^2 / t$ | $\beta R^2 /\varepsilon$ | 1 $\nabla$, 1 proj. |
| smooth | AGD | $\beta R^2 / t^2$ | $R \sqrt{\beta / \varepsilon}$ | 1 $\nabla$ |
| smooth (any norm) | FW | $\beta R^2 / t$ | $\beta R^2 /\varepsilon$ | 1 $\nabla$, 1 LP |
| strong. conv., Lipschitz | PGD | $L^2 / (\alpha t)$ | $L^2 / (\alpha \varepsilon)$ | 1 $\nabla$, 1 proj. |
| strong. conv., smooth | PGD | $R^2 \exp\left(-\frac{t}{\kappa}\right)$ | $\kappa \log\left(\frac{R^2}{\varepsilon}\right)$ | 1 $\nabla$, 1 proj. |
| strong. conv., smooth | AGD | $R^2 \exp\left(-\frac{t}{\sqrt{\kappa}}\right)$ | $\sqrt{\kappa} \log\left(\frac{R^2}{\varepsilon}\right)$ | 1 $\nabla$ |
| $f+g$, $f$ smooth, $g$ simple | FISTA | $\beta R^2 / t^2$ | $R \sqrt{\beta / \varepsilon}$ | 1 $\nabla$ of $f$, Prox of $g$ |
| $\underset{y \in \mathcal{Y}}{\max} \ \varphi(x,y)$, $\varphi$ smooth | SP-MP | $\beta R^2 / t$ | $\beta R^2 /\varepsilon$ | MD on $\mathcal{X}$, MD on $\mathcal{Y}$ |
| linear, $\mathcal{X}$ with $F$ $\nu$-self-conc. | IPM | $\nu \exp\left(- \frac{t}{\sqrt{\nu}}\right)$ | $\sqrt{\nu} \log\left(\frac{\nu}{\varepsilon}\right)$ | Newton step on $F$ |
| non-smooth | SGD | $B L /\sqrt{t}$ | $B^2 L^2 /\varepsilon^2$ | 1 stoch. $\nabla$, 1 proj. |
| non-smooth, strong. conv. | SGD | $B^2 / (\alpha t)$ | $B^2 / (\alpha \varepsilon)$ | 1 stoch. $\nabla$, 1 proj. |
| $f=\frac1{m} \sum f_i$, $f_i$ smooth, strong. conv. | SVRG | -- | $(m + \kappa) \log\left(\frac{1}{\varepsilon}\right)$ | 1 stoch. $\nabla$ |

: Summary of the results proved in Chapter
[2](#finitedim){reference-type="ref" reference="finitedim"} to Chapter
[5](#beyond){reference-type="ref" reference="beyond"} and some of the
results in Chapter [6](#rand){reference-type="ref" reference="rand"}.
:::
:::
# Convex optimization in finite dimension {#finitedim}
Let $\mathcal{X} \subset \mathbb{R}^n$ be a convex body (that is a
compact convex set with non-empty interior), and
$f : \mathcal{X} \rightarrow [-B,B]$ be a continuous and convex
function. Let $r, R>0$ be such that $\mathcal{X}$ is contained in a
Euclidean ball of radius $R$ (respectively it contains a Euclidean ball
of radius $r$). In this chapter we give several black-box algorithms to
solve $$\begin{aligned}
& \mathrm{min.} \; f(x) \\
& \text{s.t.} \; x \in \mathcal{X}.
\end{aligned}$$ As we will see these algorithms have an oracle
complexity which is linear (or quadratic) in the dimension, hence the
title of the chapter (in the next chapter the oracle complexity will be
*independent* of the dimension). An interesting feature of the methods
discussed here is that they only need a separation oracle for the
constraint set $\mathcal{X}$. In the literature such algorithms are
often referred to as *cutting plane methods*. In particular these
methods can be used to *find* a point $x \in \mathcal{X}$ given only a
separating oracle for $\mathcal{X}$ (this is also known as the
*feasibility problem*).
## The center of gravity method {#sec:gravity}
We consider the following simple iterative algorithm[^2]: let
$\mathcal{S}_1= \mathcal{X}$, and for $t \geq 1$ do the following:
1. Compute
$$c_t = \frac{1}{\mathrm{vol}(\mathcal{S}_t)} \int_{x \in \mathcal{S}_t} x dx .$$
2. Query the first order oracle at $c_t$ and obtain
$w_t \in \partial f (c_t)$. Let
$$\mathcal{S}_{t+1} = \mathcal{S}_t \cap \{x \in \mathbb{R}^n : (x-c_t)^{\top} w_t \leq 0\} .$$
If stopped after $t$ queries to the first order oracle then we use $t$
queries to a zeroth order oracle to output
$$x_t\in \mathop{\mathrm{argmin}}_{1 \leq r \leq t} f(c_r) .$$ This
procedure is known as the *center of gravity method*; it was discovered
independently on both sides of the Wall by [@Lev65] and [@New65].
::: theorem
[]{#th:centerofgravity label="th:centerofgravity"} The center of gravity
method satisfies
$$f(x_t) - \min_{x \in \mathcal{X}} f(x) \leq 2 B \left(1 - \frac1{e} \right)^{t/n} .$$
:::
Before proving this result a few comments are in order.
To attain an $\varepsilon$-optimal point the center of gravity method
requires $O( n \log (2 B / \varepsilon))$ queries to both the first and
zeroth order oracles. It can be shown that this is the best one can hope
for, in the sense that for $\varepsilon$ small enough one needs
$\Omega(n \log(1/ \varepsilon))$ calls to the oracle in order to find an
$\varepsilon$-optimal point, see [@NY83] for a formal proof.
The rate of convergence given by Theorem
[\[th:centerofgravity\]](#th:centerofgravity){reference-type="ref"
reference="th:centerofgravity"} is exponentially fast. In the
optimization literature this is called a *linear rate* as the
(estimated) error at iteration $t+1$ is linearly related to the error at
iteration $t$.
The last and most important comment concerns the computational
complexity of the method. It turns out that finding the center of
gravity $c_t$ is a very difficult problem by itself, and we do not have a
computationally efficient procedure to carry out this computation in
general. In Section [6.7](#sec:rwmethod){reference-type="ref"
reference="sec:rwmethod"} we will discuss a relatively recent (compared
to the 50-year-old center of gravity method!) randomized algorithm to
approximately compute the center of gravity. This will in turn give a
randomized center of gravity method which we will describe in detail.
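As a foretaste of such randomized approximations, the toy sketch below (entirely our own, in dimension 2 with $\mathcal{X}= [0,1]^2$ and a smooth quadratic objective) estimates each centroid by rejection sampling; it is meant only to illustrate the cut structure of the method, not to be an efficient implementation.

```python
# Toy center of gravity method in 2D: the centroid of S_t is approximated by
# rejection sampling over X = [0,1]^2; cuts are halfspaces (x - c_t)^T w_t <= 0.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.7, 0.2])               # minimizer of the toy objective

def f(x):
    return np.sum((x - target) ** 2)

def subgradient(x):                          # f is differentiable here
    return 2 * (x - target)

cuts, best = [], None
for t in range(15):
    samples = rng.random((20000, 2))         # uniform samples on X
    for (c, w) in cuts:                      # keep only points still in S_t
        samples = samples[(samples - c) @ w <= 0]
    if len(samples) < 100:                   # too few samples for a reliable centroid
        break
    c_t = samples.mean(axis=0)               # approximate center of gravity
    if best is None or f(c_t) < f(best):
        best = c_t
    cuts.append((c_t, subgradient(c_t)))

print(best)   # should approach (0.7, 0.2)
```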
We now turn to the proof of Theorem
[\[th:centerofgravity\]](#th:centerofgravity){reference-type="ref"
reference="th:centerofgravity"}. We will use the following elementary
result from convex geometry:
::: lemma
[]{#lem:Gru60 label="lem:Gru60"} Let $\mathcal{K}$ be a centered convex
set, i.e., $\int_{x \in \mathcal{K}} x dx = 0$, then for any
$w \in \mathbb{R}^n, w \neq 0$, one has
$$\mathrm{Vol} \left( \mathcal{K}\cap \{x \in \mathbb{R}^n : x^{\top} w \geq 0\} \right) \geq \frac{1}{e} \mathrm{Vol} (\mathcal{K}) .$$
:::
We now prove Theorem
[\[th:centerofgravity\]](#th:centerofgravity){reference-type="ref"
reference="th:centerofgravity"}.
::: proof
*Proof.* Let $x^*$ be such that
$f(x^*) = \min_{x \in \mathcal{X}} f(x)$. Since
$w_t \in \partial f(c_t)$ one has
$$f(c_t) - f(x) \leq w_t^{\top} (c_t - x) ,$$ and thus
$$\label{eq:centerofgravity1}
\mathcal{S}_{t} \setminus \mathcal{S}_{t+1} \subset \{x \in \mathcal{X}: (x-c_t)^{\top} w_t > 0\} \subset \{x \in \mathcal{X}: f(x) > f(c_t)\} ,$$
which clearly implies that one can never remove the optimal point from
our sets in consideration, that is $x^* \in \mathcal{S}_t$ for any $t$.
Without loss of generality we can assume that we always have
$w_t \neq 0$, for otherwise one would have $f(c_t) = f(x^*)$ which
immediately concludes the proof. Now using that $w_t \neq 0$ for any $t$
and Lemma [\[lem:Gru60\]](#lem:Gru60){reference-type="ref"
reference="lem:Gru60"} one clearly obtains
$$\mathrm{vol}(\mathcal{S}_{t+1}) \leq \left(1 - \frac1{e} \right)^t \mathrm{vol}(\mathcal{X}) .$$
For $\varepsilon\in [0,1]$, let
$\mathcal{X}_{\varepsilon} = \{(1-\varepsilon) x^* + \varepsilon x, x \in \mathcal{X}\}$.
Note that
$\mathrm{vol}(\mathcal{X}_{\varepsilon}) = \varepsilon^n \mathrm{vol}(\mathcal{X})$.
These volume computations show that for
$\varepsilon> \left(1 - \frac1{e} \right)^{t/n}$ one has
$\mathrm{vol}(\mathcal{X}_{\varepsilon}) > \mathrm{vol}(\mathcal{S}_{t+1})$.
In particular this implies that for
$\varepsilon> \left(1 - \frac1{e} \right)^{t/n}$, there must exist a
time $r \in \{1,\hdots, t\}$, and
$x_{\varepsilon} \in \mathcal{X}_{\varepsilon}$, such that
$x_{\varepsilon} \in \mathcal{S}_{r}$ and
$x_{\varepsilon} \not\in \mathcal{S}_{r+1}$. In particular by
[\[eq:centerofgravity1\]](#eq:centerofgravity1){reference-type="eqref"
reference="eq:centerofgravity1"} one has $f(c_r) < f(x_{\varepsilon})$.
On the other hand by convexity of $f$ one clearly has
$f(x_{\varepsilon}) \leq f(x^*) + 2 \varepsilon B$. This concludes the
proof. ◻
:::
## The ellipsoid method {#sec:ellipsoid}
Recall that an ellipsoid is a convex set of the form
$$\mathcal{E} = \{x \in \mathbb{R}^n : (x - c)^{\top} H^{-1} (x-c) \leq 1 \} ,$$
where $c \in \mathbb{R}^n$, and $H$ is a symmetric positive definite
matrix. Geometrically $c$ is the center of the ellipsoid, and the
semi-axes of $\mathcal{E}$ are given by the eigenvectors of $H$, with
lengths given by the square root of the corresponding eigenvalues.
We give now a simple geometric lemma, which is at the heart of the
ellipsoid method.
::: lemma
[]{#lem:geomellipsoid label="lem:geomellipsoid"} Let
$\mathcal{E}_0 = \{x \in \mathbb{R}^n : (x - c_0)^{\top} H_0^{-1} (x-c_0) \leq 1 \}$.
For any $w \in \mathbb{R}^n$, $w \neq 0$, there exists an ellipsoid
$\mathcal{E}$ such that
$$\mathcal{E} \supset \{x \in \mathcal{E}_0 : w^{\top} (x-c_0) \leq 0\} , \label{eq:ellipsoidlemma1}$$
and
$$\mathrm{vol}(\mathcal{E}) \leq \exp \left(- \frac{1}{2 n} \right) \mathrm{vol}(\mathcal{E}_0) . \label{eq:ellipsoidlemma2}$$
Furthermore for $n \geq 2$ one can take
$\mathcal{E}= \{x \in \mathbb{R}^n : (x - c)^{\top} H^{-1} (x-c) \leq 1 \}$
where $$\begin{aligned}
& c = c_0 - \frac{1}{n+1} \frac{H_0 w}{\sqrt{w^{\top} H_0 w}} , \label{eq:ellipsoidlemma3}\\
& H = \frac{n^2}{n^2-1} \left(H_0 - \frac{2}{n+1} \frac{H_0 w w^{\top} H_0}{w^{\top} H_0 w} \right) . \label{eq:ellipsoidlemma4}
\end{aligned}$$
:::
::: proof
*Proof.* For $n=1$ the result is obvious, in fact we even have
$\mathrm{vol}(\mathcal{E}) \leq \frac12 \mathrm{vol}(\mathcal{E}_0) .$
For $n \geq 2$ one can simply verify that the ellipsoid given by
[\[eq:ellipsoidlemma3\]](#eq:ellipsoidlemma3){reference-type="eqref"
reference="eq:ellipsoidlemma3"} and
[\[eq:ellipsoidlemma4\]](#eq:ellipsoidlemma4){reference-type="eqref"
reference="eq:ellipsoidlemma4"} satisfy the required properties
[\[eq:ellipsoidlemma1\]](#eq:ellipsoidlemma1){reference-type="eqref"
reference="eq:ellipsoidlemma1"} and
[\[eq:ellipsoidlemma2\]](#eq:ellipsoidlemma2){reference-type="eqref"
reference="eq:ellipsoidlemma2"}. Rather than bluntly doing these
computations we will show how to derive
[\[eq:ellipsoidlemma3\]](#eq:ellipsoidlemma3){reference-type="eqref"
reference="eq:ellipsoidlemma3"} and
[\[eq:ellipsoidlemma4\]](#eq:ellipsoidlemma4){reference-type="eqref"
reference="eq:ellipsoidlemma4"}. As a by-product this will also show
that the ellipsoid defined by
[\[eq:ellipsoidlemma3\]](#eq:ellipsoidlemma3){reference-type="eqref"
reference="eq:ellipsoidlemma3"} and
[\[eq:ellipsoidlemma4\]](#eq:ellipsoidlemma4){reference-type="eqref"
reference="eq:ellipsoidlemma4"} is the unique ellipsoid of minimal
volume that satisfies
[\[eq:ellipsoidlemma1\]](#eq:ellipsoidlemma1){reference-type="eqref"
reference="eq:ellipsoidlemma1"}. Let us first focus on the case where
$\mathcal{E}_0$ is the Euclidean ball
$\mathcal{B}= \{x \in \mathbb{R}^n : x^{\top} x \leq 1\}$. We
momentarily assume that $w$ is a unit norm vector.
By drawing a quick picture, one can see that it makes sense to look for an
ellipsoid $\mathcal{E}$ that would be centered at $c= - t w$, with
$t \in [0,1]$ (presumably $t$ will be small), and such that one
principal direction is $w$ (with inverse squared semi-axis $a>0$), and
the other principal directions are all orthogonal to $w$ (with the same
inverse squared semi-axes $b>0$). In other words we are looking for
$\mathcal{E}= \{x: (x - c)^{\top} H^{-1} (x-c) \leq 1 \}$ with
$$c = - t w, \; \text{and} \; H^{-1} = a w w^{\top} + b (\mathrm{I}_n - w w^{\top} ) .$$
Now we have to express the constraint that $\mathcal{E}$
should contain the half Euclidean ball
$\{x \in \mathcal{B}: x^{\top} w \leq 0\}$. Since we are also looking
for $\mathcal{E}$ to be as small as possible, it makes sense to ask for
$\mathcal{E}$ to \"touch\" the Euclidean ball, both at $x = - w$, and at
the equator $\partial \mathcal{B}\cap w^{\perp}$. The former condition
can be written as:
$$(- w - c)^{\top} H^{-1} (- w - c) = 1 \Leftrightarrow (t-1)^2 a = 1 ,$$
while the latter is expressed as:
$$\forall y \in \partial \mathcal{B}\cap w^{\perp}, (y - c)^{\top} H^{-1} (y - c) = 1 \Leftrightarrow b + t^2 a = 1 .$$
As one can see from the above two equations, we are still free to choose
any value for $t \in [0,1/2)$ (the fact that we need $t<1/2$ comes from
$b=1 - \left(\frac{t}{t-1}\right)^2>0$). Quite naturally we take the
value that minimizes the volume of the resulting ellipsoid. Note that
$$\frac{\mathrm{vol}(\mathcal{E})}{\mathrm{vol}(\mathcal{B})} = \frac{1}{\sqrt{a}} \left(\frac{1}{\sqrt{b}}\right)^{n-1} = \frac{1}{\sqrt{\frac{1}{(1-t)^2}\left(1 - \left(\frac{t}{1-t}\right)^2\right)^{n-1}}} = \frac{1}{\sqrt{f\left(\frac{1}{1-t}\right)}} ,$$
where $f(h) = h^2 (2 h - h^2)^{n-1}$. Elementary computations show that
the maximum of $f$ (on $[1,2]$) is attained at $h = 1+ \frac{1}{n}$
(which corresponds to $t=\frac{1}{n+1}$), and the value is
$$\left(1+\frac{1}{n}\right)^2 \left(1 - \frac{1}{n^2} \right)^{n-1} \geq \exp \left(\frac{1}{n} \right),$$
where the lower bound follows again from elementary computations. Thus
we showed that, for $\mathcal{E}_0 = \mathcal{B}$,
[\[eq:ellipsoidlemma1\]](#eq:ellipsoidlemma1){reference-type="eqref"
reference="eq:ellipsoidlemma1"} and
[\[eq:ellipsoidlemma2\]](#eq:ellipsoidlemma2){reference-type="eqref"
reference="eq:ellipsoidlemma2"} are satisfied with the ellipsoid given
by the set of points $x$ satisfying: $$\label{eq:ellipsoidlemma5}
\left(x + \frac{w/\|w\|_2}{n+1}\right)^{\top} \left(\frac{n^2-1}{n^2} \mathrm{I}_n + \frac{2(n+1)}{n^2} \frac{w w^{\top}}{\|w\|_2^2} \right) \left(x + \frac{w/\|w\|_2}{n+1} \right) \leq 1 .$$
We consider now an arbitrary ellipsoid
$\mathcal{E}_0 = \{x \in \mathbb{R}^n : (x - c_0)^{\top} H_0^{-1} (x-c_0) \leq 1 \}$.
Let $\Phi(x) = c_0 + H_0^{1/2} x$, then clearly
$\mathcal{E}_0 = \Phi(\mathcal{B})$ and
$\{x : w^{\top}(x - c_0) \leq 0\} = \Phi(\{x : (H_0^{1/2} w)^{\top} x \leq 0\})$.
Thus in this case the image by $\Phi$ of the ellipsoid given in
[\[eq:ellipsoidlemma5\]](#eq:ellipsoidlemma5){reference-type="eqref"
reference="eq:ellipsoidlemma5"} with $w$ replaced by $H_0^{1/2} w$ will
satisfy
[\[eq:ellipsoidlemma1\]](#eq:ellipsoidlemma1){reference-type="eqref"
reference="eq:ellipsoidlemma1"} and
[\[eq:ellipsoidlemma2\]](#eq:ellipsoidlemma2){reference-type="eqref"
reference="eq:ellipsoidlemma2"}. It is easy to see that this corresponds
to an ellipsoid defined by $$\begin{aligned}
& c = c_0 - \frac{1}{n+1} \frac{H_0 w}{\sqrt{w^{\top} H_0 w}} , \notag \\
& H^{-1} = \left(1 - \frac{1}{n^2}\right) H_0^{-1} + \frac{2(n+1)}{n^2} \frac{w w^{\top}}{w^{\top} H_0 w} . \label{eq:ellipsoidlemma6}
\end{aligned}$$ Applying the Sherman-Morrison formula to
[\[eq:ellipsoidlemma6\]](#eq:ellipsoidlemma6){reference-type="eqref"
reference="eq:ellipsoidlemma6"} one can recover
[\[eq:ellipsoidlemma4\]](#eq:ellipsoidlemma4){reference-type="eqref"
reference="eq:ellipsoidlemma4"} which concludes the proof. ◻
:::
We describe now the ellipsoid method, which only assumes a separation
oracle for the constraint set $\mathcal{X}$ (in particular it can be
used to solve the feasibility problem mentioned at the beginning of the
chapter). Let $\mathcal{E}_0$ be the Euclidean ball of radius $R$ that
contains $\mathcal{X}$, and let $c_0$ be its center. Denote also
$H_0=R^2 \mathrm{I}_n$. For $t \geq 0$ do the following:
1. If $c_t \not\in \mathcal{X}$ then call the separation oracle to
obtain a separating hyperplane $w_t \in \mathbb{R}^n$ such that
$\mathcal{X}\subset \{x : (x- c_t)^{\top} w_t \leq 0\}$, otherwise
call the first order oracle at $c_t$ to obtain
$w_t \in \partial f (c_t)$.
2. Let
$\mathcal{E}_{t+1} = \{x : (x - c_{t+1})^{\top} H_{t+1}^{-1} (x-c_{t+1}) \leq 1 \}$
be the ellipsoid given in Lemma
[\[lem:geomellipsoid\]](#lem:geomellipsoid){reference-type="ref"
reference="lem:geomellipsoid"} that contains
$\{x \in \mathcal{E}_t : (x- c_t)^{\top} w_t \leq 0\}$, that is
$$\begin{aligned}
& c_{t+1} = c_{t} - \frac{1}{n+1} \frac{H_t w}{\sqrt{w^{\top} H_t w}} ,\\
& H_{t+1} = \frac{n^2}{n^2-1} \left(H_t - \frac{2}{n+1} \frac{H_t w w^{\top} H_t}{w^{\top} H_t w} \right) .
\end{aligned}$$
If stopped after $t$ iterations and if
$\{c_1, \hdots, c_t\} \cap \mathcal{X}\neq \emptyset$, then we use the
zeroth order oracle to output
$$x_t\in \mathop{\mathrm{argmin}}_{c \in \{c_1, \hdots, c_t\} \cap \mathcal{X}} f(c) .$$
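For concreteness, here is a minimal numpy sketch of these iterations (our own illustration, assuming $n \geq 2$); `in_X`, `separation_oracle`, `subgrad` and `f` are hypothetical callables standing for the membership test, the separation oracle, the first order oracle and the zeroth order oracle. In a real implementation the membership test and the separating hyperplane would typically come from a single call to the separation oracle.

```python
import numpy as np

def ellipsoid_method(f, subgrad, in_X, separation_oracle, n, R, T):
    """Sketch of the ellipsoid method (n >= 2); the callables are hypothetical
    stand-ins for the oracles described in the text."""
    c = np.zeros(n)           # center of E_0, the Euclidean ball of radius R
    H = R ** 2 * np.eye(n)    # H_0 = R^2 I_n
    feasible = []             # the centers c_t that landed in X
    for _ in range(T):
        if in_X(c):
            feasible.append(c.copy())
            w = subgrad(c)                  # first order oracle at c_t
            if np.linalg.norm(w) == 0:      # c_t is already a minimizer
                break
        else:
            w = separation_oracle(c)        # X subset {x : (x - c)^T w <= 0}
        Hw = H @ w
        wHw = w @ Hw
        c = c - Hw / ((n + 1) * np.sqrt(wHw))
        H = n ** 2 / (n ** 2 - 1.0) * (H - 2.0 / (n + 1) * np.outer(Hw, Hw) / wHw)
    # zeroth order oracle: return the best feasible center visited, if any
    return min(feasible, key=f) if feasible else None
```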
The following rate of convergence can be proved with the exact same
argument as for Theorem
[\[th:centerofgravity\]](#th:centerofgravity){reference-type="ref"
reference="th:centerofgravity"} (observe that at step $t$ one can remove
a point in $\mathcal{X}$ from the current ellipsoid only if
$c_t \in \mathcal{X}$).
::: theorem
For $t \geq 2n^2 \log(R/r)$ the ellipsoid method satisfies
$\{c_1, \hdots, c_t\} \cap \mathcal{X}\neq \emptyset$ and
$$f(x_t) - \min_{x \in \mathcal{X}} f(x) \leq \frac{2 B R}{r} \exp\left( - \frac{t}{2 n^2}\right) .$$
:::
We observe that the oracle complexity of the ellipsoid method is much
worse than that of the center of gravity method: indeed the former needs
$O(n^2 \log(1/\varepsilon))$ calls to the oracles while the latter
requires only $O(n \log(1/\varepsilon))$ calls. However from a
computational point of view the situation is much better: in many cases
one can derive an efficient separation oracle, while the center of
gravity method is basically always intractable. This is for instance the
case in the context of LPs and SDPs: with the notation of Section
[1.5](#sec:structured){reference-type="ref" reference="sec:structured"}
the computational complexity of the separation oracle for LPs is
$O(m n)$ while for SDPs it is $O(\max(m,n) n^2)$ (we use the fact that
the spectral decomposition of a matrix can be done in $O(n^3)$
operations). This gives an overall complexity of
$O(\max(m,n) n^3 \log(1/\varepsilon))$ for LPs and
$O(\max(m,n^2) n^6 \log(1/\varepsilon))$ for SDPs. We note however that
the ellipsoid method is almost never used in practice, essentially
because the method is too rigid to exploit the potential easiness of
real problems (e.g., the volume decrease given by
[\[eq:ellipsoidlemma2\]](#eq:ellipsoidlemma2){reference-type="eqref"
reference="eq:ellipsoidlemma2"} is essentially always tight).
## Vaidya's cutting plane method
We focus here on the feasibility problem (it should be clear from the
previous sections how to adapt the argument for optimization). We have
seen that for the feasibility problem the center of gravity method has an $O(n)$
oracle complexity and unclear computational complexity (see Section
[6.7](#sec:rwmethod){reference-type="ref" reference="sec:rwmethod"} for
more on this), while the ellipsoid method has oracle complexity $O(n^2)$
and computational complexity $O(n^4)$. We describe here the beautiful
algorithm of [@Vai89; @Vai96] which has oracle complexity $O(n \log(n))$
and computational complexity $O(n^4)$, thus getting the best of both the
center of gravity and the ellipsoid method. In fact the computational
complexity can even be improved further, and the recent breakthrough
[@LSW15] shows that it can essentially (up to logarithmic factors) be
brought down to $O(n^3)$.
This section, while giving a fundamental algorithm, should probably be
skipped on a first reading. In particular we use several concepts from
the theory of interior point methods which are described in Section
[5.3](#sec:IPM){reference-type="ref" reference="sec:IPM"}.
### The volumetric barrier
Let $A \in \mathbb{R}^{m \times n}$ where the $i^{th}$ row is
$a_i \in \mathbb{R}^n$, and let $b \in \mathbb{R}^m$. We consider the
logarithmic barrier $F$ for the polytope
$\{x \in \mathbb{R}^n : A x > b\}$ defined by
$$F(x) = - \sum_{i=1}^m \log(a_i^{\top} x - b_i) .$$ We also consider
the volumetric barrier $v$ defined by
$$v(x) = \frac{1}{2} \mathrm{logdet}(\nabla^2 F(x) ) .$$ The intuition
is clear: $v(x)$ is equal to the logarithm of the inverse volume of the
Dikin ellipsoid (for the logarithmic barrier) at $x$. It will be useful
to spell out the hessian of the logarithmic barrier:
$$\nabla^2 F(x) = \sum_{i=1}^m \frac{a_i a_i^{\top}}{(a_i^{\top} x - b_i)^2} .$$
Introducing the leverage score
$$\sigma_i(x) = \frac{(\nabla^2 F(x) )^{-1}[a_i, a_i]}{(a_i^{\top} x - b_i)^2} ,$$
one can easily verify that $$\label{eq:gradvol}
\nabla v(x) = - \sum_{i=1}^m \sigma_i(x) \frac{a_i}{a_i^{\top} x - b_i} ,$$
and $$\label{eq:hessianvol}
\nabla^2 v(x) \succeq \sum_{i=1}^m \sigma_i(x) \frac{a_i a_i^{\top}}{(a_i^{\top} x - b_i)^2} =: Q(x) .$$
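To make these formulas concrete, here is a small numpy sketch (our own illustration, not part of the source material) that evaluates $\nabla^2 F(x)$, the leverage scores $\sigma_i(x)$ and $\nabla v(x)$ for a polytope $\{x \in \mathbb{R}^n : Ax > b\}$. A convenient sanity check is that the leverage scores always sum to $n$, since $\sum_i \sigma_i(x) = \mathrm{Tr}\big((\nabla^2 F(x))^{-1} \nabla^2 F(x)\big) = n$.

```python
import numpy as np

def log_barrier_hessian(A, b, x):
    """Hessian of F(x) = -sum_i log(a_i^T x - b_i) at a strictly feasible x."""
    s = A @ x - b                        # slacks a_i^T x - b_i (must be > 0)
    return (A / s[:, None] ** 2).T @ A   # sum_i a_i a_i^T / s_i^2

def leverage_scores(A, b, x):
    """sigma_i(x) = (nabla^2 F(x))^{-1}[a_i, a_i] / (a_i^T x - b_i)^2."""
    s = A @ x - b
    Hinv = np.linalg.inv(log_barrier_hessian(A, b, x))
    return np.einsum('ij,jk,ik->i', A, Hinv, A) / s ** 2

def volumetric_grad(A, b, x):
    """nabla v(x) = - sum_i sigma_i(x) a_i / (a_i^T x - b_i)."""
    s = A @ x - b
    sigma = leverage_scores(A, b, x)
    return -(A * (sigma / s)[:, None]).sum(axis=0)

# tiny example: the square (0,1)^2 written as {x : A x > b}
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([0., 0., -1., -1.])
x = np.array([0.3, 0.6])
print(leverage_scores(A, b, x).sum())   # the leverage scores sum to n = 2
```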
### Vaidya's algorithm
We fix $\varepsilon\leq 0.006$, a small constant to be specified later.
Vaidya's algorithm produces a sequence of pairs
$(A^{(t)}, b^{(t)}) \in \mathbb{R}^{m_t \times n} \times \mathbb{R}^{m_t}$
such that the corresponding polytope contains the convex set of
interest. The initial polytope defined by $(A^{(0)},b^{(0)})$ is a
simplex (in particular $m_0=n+1$). For $t\geq0$ we let $x_t$ be the
minimizer of the volumetric barrier $v_t$ of the polytope given by
$(A^{(t)}, b^{(t)})$, and $(\sigma_i^{(t)})_{i \in [m_t]}$ the leverage
scores (associated to $v_t$) at the point $x_t$. We also denote $F_t$
for the logarithmic barrier given by $(A^{(t)}, b^{(t)})$. The next
polytope $(A^{(t+1)}, b^{(t+1)})$ is defined by either adding or
removing a constraint to the current polytope:
1. If for some $i \in [m_t]$ one has
$\sigma_i^{(t)} = \min_{j \in [m_t]} \sigma_j^{(t)} < \varepsilon$,
then $(A^{(t+1)}, b^{(t+1)})$ is defined by removing the $i^{th}$
row in $(A^{(t)}, b^{(t)})$ (in particular $m_{t+1} = m_t - 1$).
2. Otherwise let $c^{(t)}$ be the vector given by the separation oracle
queried at $x_t$, and $\beta^{(t)} \in \mathbb{R}$ be chosen so that
$$\frac{(\nabla^2 F_t(x_t) )^{-1}[c^{(t)}, c^{(t)}]}{(x_t^{\top} c^{(t)} - \beta^{(t)})^2} = \frac{1}{5} \sqrt{\varepsilon} .$$
Then we define $(A^{(t+1)}, b^{(t+1)})$ by adding to
$(A^{(t)}, b^{(t)})$ the row given by $(c^{(t)}, \beta^{(t)})$ (in
particular $m_{t+1} = m_t + 1$).
It can be shown that the volumetric barrier is a self-concordant
barrier, and thus it can be efficiently minimized with Newton's method.
In fact it is enough to do *one step* of Newton's method on $v_t$
initialized at $x_{t-1}$, see [@Vai89; @Vai96] for more details on this.
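The add/remove rule can be sketched as follows (our own schematic illustration, not a full implementation: in particular it takes the point $x$ as input instead of maintaining it by the Newton step just mentioned). Here `separation_oracle` is a hypothetical callable, and we assume it is oriented so that the convex set of interest lies in $\{y : c^{\top} y \geq c^{\top} x\}$, so that the added constraint never cuts it off.

```python
import numpy as np

EPS = 0.006  # the constant epsilon (<= 0.006) fixed above

def vaidya_update(A, b, x, separation_oracle):
    """One polytope update of Vaidya's method at the (approximate) volumetric
    center x of the polytope {y : A y > b}."""
    s = A @ x - b
    Hinv = np.linalg.inv((A / s[:, None] ** 2).T @ A)       # (nabla^2 F_t(x))^{-1}
    sigma = np.einsum('ij,jk,ik->i', A, Hinv, A) / s ** 2   # leverage scores
    i = int(np.argmin(sigma))
    if sigma[i] < EPS:
        # case 1: drop the constraint with the smallest leverage score
        return np.delete(A, i, axis=0), np.delete(b, i)
    # case 2: add the cut (c, beta) with beta chosen so that
    # Hinv[c, c] / (c^T x - beta)^2 = sqrt(EPS) / 5
    c = separation_oracle(x)
    beta = c @ x - np.sqrt(5.0 * (c @ Hinv @ c) / np.sqrt(EPS))
    return np.vstack([A, c]), np.append(b, beta)
```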
### Analysis of Vaidya's method {#sec:analysis}
The construction of Vaidya's method is based on a precise understanding
of how the volumetric barrier changes when a constraint is added to or
removed from the polytope. This understanding is derived in Section
[2.3.4](#sec:constraintsvolumetric){reference-type="ref"
reference="sec:constraintsvolumetric"}. In particular we obtain the
following two key inequalities: If case 1 happens at iteration $t$ then
$$\label{eq:analysis1}
v_{t+1}(x_{t+1}) - v_t(x_t) \geq - \varepsilon,$$ while if case 2
happens then $$\label{eq:analysis2}
v_{t+1}(x_{t+1}) - v_t(x_t) \geq \frac{1}{20} \sqrt{\varepsilon} .$$ We
show now how these inequalities imply that Vaidya's method stops after
$O(n \log(n R/r))$ steps. First we claim that after $2t$ iterations,
case 2 must have happened at least $t-1$ times. Indeed suppose that at
iteration $2t-1$, case 2 has happened $t-2$ times; then $\nabla^2 F(x)$
is singular and the leverage scores are infinite, so case 2 must happen
at iteration $2t$. Combining this claim with the two inequalities above
we obtain:
$$v_{2t}(x_{2t}) \geq v_0(x_0) + \frac{t-1}{20} \sqrt{\varepsilon} - (t+1) \varepsilon\geq \frac{t}{50} \varepsilon- 1 +v_0(x_0) .$$
The key point now is to recall that by definition one has
$v(x) = - \log \mathrm{vol}(\mathcal{E}(x,1))$ where
$\mathcal{E}(x,r) = \{y : \nabla^2 F(x)[y-x,y-x] \leq r^2\}$ is the
Dikin ellipsoid centered at $x$ and of radius $r$. Moreover the
logarithmic barrier $F$ of a polytope with $m$ constraints is
$m$-self-concordant, which implies that the polytope is included in the
Dikin ellipsoid $\mathcal{E}(z, 2m)$ where $z$ is the minimizer of $F$
(see \[Theorem 4.2.6., [@Nes04]\]). The volume of $\mathcal{E}(z, 2m)$
is equal to $(2m)^n \exp(-v(z))$, which is thus always an upper bound on
the volume of the polytope. Combining this with the above display we
just proved that at iteration $2t$ the volume of the current polytope is
at most
$$\exp \left(n \log(2m_{2t}) + 1 - v_0(x_0) - \frac{t}{50} \varepsilon\right) .$$
Since $\mathcal{E}(x,1)$ is always included in the polytope we have that
$- v_0(x_0)$ is at most the logarithm of the volume of the initial
polytope which is $O(n \log(R))$. This clearly concludes the proof as
the procedure will necessarily stop when the volume is below
$\exp(n \log(r))$ (we also used the trivial bound $m_t \leq n+1+t$).
### Constraints and the volumetric barrier {#sec:constraintsvolumetric}
We want to understand the effect on the volumetric barrier of
addition/deletion of constraints to the polytope. Let
$c \in \mathbb{R}^n$, $\beta \in \mathbb{R}$, and consider the
logarithmic barrier $\widetilde{F}$ and the volumetric barrier
$\widetilde{v}$ corresponding to the matrix
$\widetilde{A}\in \mathbb{R}^{(m+1) \times n}$ and the vector
$\widetilde{b} \in \mathbb{R}^{m+1}$ which are respectively the
concatenation of $A$ and $c$, and the concatenation of $b$ and $\beta$.
Let $x^*$ and $\widetilde{x}^*$ be the minimizer of respectively $v$ and
$\widetilde{v}$. We recall the definition of leverage scores, for
$i \in [m+1]$, where $a_{m+1}=c$ and $b_{m+1}=\beta$,
$$\sigma_i(x) = \frac{(\nabla^2 F(x) )^{-1}[a_i, a_i]}{(a_i^{\top} x - b_i)^2}, \ \text{and} \ \widetilde{\sigma}_i(x) = \frac{(\nabla^2 \widetilde{F}(x) )^{-1}[a_i, a_i]}{(a_i^{\top} x - b_i)^2}.$$
The leverage scores $\sigma_i$ and $\widetilde{\sigma}_i$ are closely
related:
::: lemma
[]{#lem:V1 label="lem:V1"} One has for any $i \in [m+1]$,
$$\frac{\widetilde{\sigma}_{m+1}(x)}{1 - \widetilde{\sigma}_{m+1}(x)} \geq \sigma_i(x) \geq \widetilde{\sigma}_i(x) \geq (1-\sigma_{m+1}(x)) \sigma_i(x) .$$
:::
::: proof
*Proof.* First we observe that by Sherman-Morrison's formula
$(A+uv^{\top})^{-1} = A^{-1} - \frac{A^{-1} u v^{\top} A^{-1}}{1+A^{-1}[u,v]}$
one has $$\label{eq:SM}
(\nabla^2 \widetilde{F}(x))^{-1} = (\nabla^2 F(x))^{-1} - \frac{(\nabla^2 F(x))^{-1} c c^{\top} (\nabla^2 F(x))^{-1}}{(c^{\top} x - \beta)^2 + (\nabla^2 F(x))^{-1}[c,c]} .$$
This immediately proves $\widetilde{\sigma}_i(x) \leq \sigma_i(x)$. It
also implies the inequality
$\widetilde{\sigma}_i(x) \geq (1-\sigma_{m+1}(x)) \sigma_i(x)$ thanks to
the following fact:
$A - \frac{A u u^{\top} A}{1+A[u,u]} \succeq (1-A[u,u]) A$. For the last
inequality we use that
$A + \frac{A u u^{\top} A}{1+A[u,u]} \preceq \frac{1}{1-A[u,u]} A$
together with
$$(\nabla^2 {F}(x))^{-1} = (\nabla^2 \widetilde{F}(x))^{-1} + \frac{(\nabla^2 \widetilde{F}(x))^{-1} c c^{\top} (\nabla^2 \widetilde{F}(x))^{-1}}{(c^{\top} x - \beta)^2 - (\nabla^2 \widetilde{F}(x))^{-1}[c,c]} .$$ ◻
:::
We now assume the following key result, which was first proven by
Vaidya. To put the statement in context recall that for a
self-concordant barrier $f$ the suboptimality gap $f(x) - \min f$ is
intimately related to the Newton decrement
$\|\nabla f(x) \|_{(\nabla^2 f(x))^{-1}}$. Vaidya's inequality gives a
similar claim for the volumetric barrier. We use the version given in
\[Theorem 2.6, [@Ans98]\] which has slightly better numerical constants
than the original bound. Recall also the definition of $Q$ from
[\[eq:hessianvol\]](#eq:hessianvol){reference-type="eqref"
reference="eq:hessianvol"}.
::: theorem
[]{#th:V0 label="th:V0"} Let $\lambda(x) = \|\nabla v(x) \|_{Q(x)^{-1}}$
be an approximate Newton decrement,
$\varepsilon= \min_{i \in [m]} \sigma_i(x)$, and assume that
$\lambda(x)^2 \leq \frac{2 \sqrt{\varepsilon} - \varepsilon}{36}$. Then
$$v(x) - v(x^*) \leq 2 \lambda(x)^2 .$$
:::
We also denote $\widetilde{\lambda}$ for the approximate Newton
decrement of $\widetilde{v}$. The goal for the rest of the section is to
prove the following theorem which gives the precise understanding of the
volumetric barrier we were looking for.
::: theorem
[]{#th:V1 label="th:V1"} Let
$\varepsilon:= \min_{i \in [m]} \sigma_i(x^*)$,
$\delta := \sigma_{m+1}(x^*) / \sqrt{\varepsilon}$ and assume that
$\frac{\left(\delta \sqrt{\varepsilon} + \sqrt{\delta^{3} \sqrt{\varepsilon}}\right)^2}{1- \delta \sqrt{\varepsilon}} < \frac{2 \sqrt{\varepsilon} - \varepsilon}{36}$.
Then one has $$\label{eq:thV11}
\widetilde{v}(\widetilde{x}^*) - v(x^*) \geq \frac{1}{2} \log(1+\delta \sqrt{\varepsilon}) - 2 \frac{\left(\delta \sqrt{\varepsilon} + \sqrt{\delta^{3} \sqrt{\varepsilon}}\right)^2}{1- \delta \sqrt{\varepsilon}} .$$
On the other hand assuming that
$\widetilde{\sigma}_{m+1}(\widetilde{x}^*) = \min_{i \in [m+1]} \widetilde{\sigma}_{i}(\widetilde{x}^*) =: \varepsilon$
and that $\varepsilon\leq 1/4$, one has $$\label{eq:thV12}
\widetilde{v}(\widetilde{x}^*) - v(x^*) \leq - \frac{1}{2} \log(1 - \varepsilon) + \frac{8 \varepsilon^2}{(1-\varepsilon)^2}.$$
:::
Before going into the proof let us see briefly how Theorem
[\[th:V1\]](#th:V1){reference-type="ref" reference="th:V1"} gives the two
inequalities stated at the beginning of Section
[2.3.3](#sec:analysis){reference-type="ref" reference="sec:analysis"}.
To prove [\[eq:analysis2\]](#eq:analysis2){reference-type="eqref"
reference="eq:analysis2"} we use
[\[eq:thV11\]](#eq:thV11){reference-type="eqref" reference="eq:thV11"}
with $\delta=1/5$ and $\varepsilon\leq 0.006$, and we observe that in
this case the right hand side of
[\[eq:thV11\]](#eq:thV11){reference-type="eqref" reference="eq:thV11"}
is lower bounded by $\frac{1}{20} \sqrt{\varepsilon}$. On the other hand
to prove [\[eq:analysis1\]](#eq:analysis1){reference-type="eqref"
reference="eq:analysis1"} we use
[\[eq:thV12\]](#eq:thV12){reference-type="eqref" reference="eq:thV12"},
and we observe that for $\varepsilon\leq 0.006$ the right hand side of
[\[eq:thV12\]](#eq:thV12){reference-type="eqref" reference="eq:thV12"}
is upper bounded by $\varepsilon$.
::: proof
*Proof.* We start with the proof of
[\[eq:thV11\]](#eq:thV11){reference-type="eqref" reference="eq:thV11"}.
First observe that by factoring $(\nabla^2 F(x))^{1/2}$ on the left and
on the right of $\nabla^2 \widetilde{F}(x)$ one obtains
$$\begin{aligned}
& \mathrm{det}(\nabla^2 \widetilde{F}(x)) \\
& = \mathrm{det}\left(\nabla^2 {F}(x) + \frac{cc^{\top}}{(c^{\top} x- \beta)^2} \right) \\
& = \mathrm{det}(\nabla^2 {F}(x)) \mathrm{det}\left(\mathrm{I}_n + \frac{(\nabla^2 {F}(x))^{-1/2} c c^{\top} (\nabla^2 {F}(x))^{-1/2}}{(c^{\top} x- \beta)^2}\right) \\
& = \mathrm{det}(\nabla^2 {F}(x)) (1+\sigma_{m+1}(x)) ,
\end{aligned}$$ and thus
$$\widetilde{v}(x) = v(x) + \frac{1}{2} \log(1+ \sigma_{m+1}(x)) .$$ In
particular we have
$$\widetilde{v}(\widetilde{x}^*) - v(x^*) = \frac{1}{2} \log(1+ \sigma_{m+1}(x^*)) - (\widetilde{v}(x^*) - \widetilde{v}(\widetilde{x}^*)) .$$
To bound the suboptimality gap of $x^*$ in $\widetilde{v}$ we will
invoke Theorem [\[th:V0\]](#th:V0){reference-type="ref"
reference="th:V0"} and thus we have to upper bound the approximate
Newton decrement $\widetilde{\lambda}$. Using
\[[\[eq:V21\]](#eq:V21){reference-type="eqref" reference="eq:V21"},
Lemma [\[lem:V2\]](#lem:V2){reference-type="ref" reference="lem:V2"}\]
below one has
$$\widetilde{\lambda} (x^*)^2 \leq \frac{\left(\sigma_{m+1}(x^*) + \sqrt{\frac{\sigma_{m+1}^3(x^*)}{\min_{i \in [m]} \sigma_i(x^*)}}\right)^2}{1-\sigma_{m+1}(x^*)} = \frac{\left(\delta \sqrt{\varepsilon} + \sqrt{\delta^{3} \sqrt{\varepsilon}}\right)^2}{1- \delta \sqrt{\varepsilon}} .$$
This concludes the proof of
[\[eq:thV11\]](#eq:thV11){reference-type="eqref" reference="eq:thV11"}.
We now turn to the proof of
[\[eq:thV12\]](#eq:thV12){reference-type="eqref" reference="eq:thV12"}.
Following the same steps as above we immediately obtain
$$\begin{aligned}
\widetilde{v}(\widetilde{x}^*) - v(x^*) & = & \widetilde{v}(\widetilde{x}^*) - v(\widetilde{x}^*)+v(\widetilde{x}^*)- v(x^*) \\
& = & - \frac{1}{2} \log(1 - \widetilde{\sigma}_{m+1}(\widetilde{x}^*)) + v(\widetilde{x}^*)- v(x^*).
\end{aligned}$$ To invoke Theorem
[\[th:V0\]](#th:V0){reference-type="ref" reference="th:V0"} it remains
to upper bound $\lambda(\widetilde{x}^*)$. Using
\[[\[eq:V22\]](#eq:V22){reference-type="eqref" reference="eq:V22"},
Lemma [\[lem:V2\]](#lem:V2){reference-type="ref" reference="lem:V2"}\]
below one has
$$\lambda(\widetilde{x}^*) \leq \frac{2 \ \widetilde{\sigma}_{m+1}(\widetilde{x}^*)}{1 - \widetilde{\sigma}_{m+1}(\widetilde{x}^*)} .$$
We can apply Theorem [\[th:V0\]](#th:V0){reference-type="ref"
reference="th:V0"} since the assumption $\varepsilon\leq 1/4$ implies
that
$\left(\frac{2 \varepsilon}{1-\varepsilon}\right)^2 \leq \frac{2 \sqrt{\varepsilon} - \varepsilon}{36}$.
This concludes the proof of
[\[eq:thV12\]](#eq:thV12){reference-type="eqref"
reference="eq:thV12"}. ◻
:::
::: lemma
[]{#lem:V2 label="lem:V2"} One has $$\label{eq:V21}
\sqrt{1- \sigma_{m+1}(x)} \ \widetilde{\lambda} (x) \leq \|\nabla {v}(x)\|_{Q(x)^{-1}} + \sigma_{m+1}(x) + \sqrt{\frac{\sigma_{m+1}^3(x)}{\min_{i \in [m]} \sigma_i(x)}} .$$
Furthermore if
$\widetilde{\sigma}_{m+1}(x) = \min_{i \in [m+1]} \widetilde{\sigma}_{i}(x)$
then one also has $$\label{eq:V22}
\lambda(x) \leq \|\nabla \widetilde{v}(x)\|_{Q(x)^{-1}} + \frac{2 \ \widetilde{\sigma}_{m+1}(x)}{1 - \widetilde{\sigma}_{m+1}(x)} .$$
:::
::: proof
*Proof.* We start with the proof of
[\[eq:V21\]](#eq:V21){reference-type="eqref" reference="eq:V21"}. First
observe that by Lemma [\[lem:V1\]](#lem:V1){reference-type="ref"
reference="lem:V1"} one has
$\widetilde{Q}(x) \succeq (1-\sigma_{m+1}(x)) Q(x)$ and thus by
definition of the Newton decrement
$$\widetilde{\lambda} (x) = \|\nabla \widetilde{v}(x)\|_{\widetilde{Q}(x)^{-1}} \leq \frac{\|\nabla \widetilde{v}(x)\|_{Q(x)^{-1}}}{\sqrt{1-\sigma_{m+1}(x)}} .$$
Next observe that (recall
[\[eq:gradvol\]](#eq:gradvol){reference-type="eqref"
reference="eq:gradvol"})
$$\nabla \widetilde{v}(x) = \nabla v(x) + \sum_{i=1}^m ({\sigma}_i(x) - \widetilde{\sigma}_i(x)) \frac{a_i}{a_i^{\top} x - b_i} - \widetilde{\sigma}_{m+1}(x) \frac{c}{c^{\top} x - \beta} .$$
We now use that
$Q(x) \succeq (\min_{i \in [m]} \sigma_i(x)) \nabla^2 F(x)$ to obtain
$$\left \| \widetilde{\sigma}_{m+1}(x) \frac{c}{c^{\top} x - \beta} \right\|_{Q(x)^{-1}}^2 \leq \frac{\widetilde{\sigma}_{m+1}^2(x) \sigma_{m+1}(x)}{\min_{i \in [m]} \sigma_i(x)} .$$
By Lemma [\[lem:V1\]](#lem:V1){reference-type="ref" reference="lem:V1"}
one has $\widetilde{\sigma}_{m+1}(x) \leq {\sigma}_{m+1}(x)$ and thus we
see that it only remains to prove
$$\left\|\sum_{i=1}^m ({\sigma}_i(x) - \widetilde{\sigma}_i(x)) \frac{a_i}{a_i^{\top}x - b_i} \right\|_{Q(x)^{-1}}^2 \leq \sigma_{m+1}^2(x) .$$
The above inequality follows from a beautiful calculation of Vaidya (see
\[Lemma 12, [@Vai96]\]), starting from the identity
$$\sigma_i(x) - \widetilde{\sigma}_i(x) = \frac{((\nabla^2 F(x))^{-1}[a_i,c])^2}{((c^{\top} x - \beta)^2 + (\nabla^2 F(x))^{-1}[c,c])(a_i^{\top} x - b_i)^2} ,$$
which itself follows from [\[eq:SM\]](#eq:SM){reference-type="eqref"
reference="eq:SM"}.
We now turn to the proof of [\[eq:V22\]](#eq:V22){reference-type="eqref"
reference="eq:V22"}. Following the same steps as above we immediately
obtain
$$\lambda(x) = \|\nabla v(x)\|_{Q(x)^{-1}} \leq \|\nabla \widetilde{v}(x)\|_{Q(x)^{-1}} + \sigma_{m+1}(x) + \sqrt{\frac{\widetilde{\sigma}_{m+1}^2(x) \sigma_{m+1}(x)}{\min_{i \in [m]} \sigma_i(x)}} .$$
Using Lemma [\[lem:V1\]](#lem:V1){reference-type="ref"
reference="lem:V1"} together with the assumption
$\widetilde{\sigma}_{m+1}(x) = \min_{i \in [m+1]} \widetilde{\sigma}_{i}(x)$
yields [\[eq:V22\]](#eq:V22){reference-type="eqref" reference="eq:V22"},
thus concluding the proof. ◻
:::
## Conjugate gradient {#sec:CG}
We conclude this chapter with the special case of unconstrained
optimization of a convex quadratic function
$f(x) = \frac12 x^{\top} A x - b^{\top} x$, where
$A \in \mathbb{R}^{n \times n}$ is a positive definite matrix and
$b \in \mathbb{R}^n$. This problem, of paramount importance in practice
(it is equivalent to solving the linear system $Ax = b$), admits a
simple first-order black-box procedure which attains the *exact* optimum
$x^*$ in at most $n$ steps. This method, called the *conjugate
gradient*, is described and analyzed below. What is written below is
taken from \[Chapter 5, [@NW06]\].
Let $\langle \cdot , \cdot\rangle_A$ be the inner product on
$\mathbb{R}^n$ defined by the positive definite matrix $A$, that is
$\langle x, y\rangle_A = x^{\top} A y$ (we also denote by $\|\cdot\|_A$
the corresponding norm). For sake of clarity we denote here
$\langle \cdot , \cdot\rangle$ for the standard inner product in
$\mathbb{R}^n$. Given an orthogonal set $\{p_0, \hdots, p_{n-1}\}$ for
$\langle \cdot , \cdot \rangle_A$ we will minimize $f$ by sequentially
minimizing it along the directions given by this orthogonal set. That
is, given $x_0 \in \mathbb{R}^n$, for $t \geq 0$ let $$\label{eq:CG1}
x_{t+1} := \mathop{\mathrm{argmin}}_{x \in \{x_t + \lambda p_t, \ \lambda \in \mathbb{R}\}} f(x) .$$
Equivalently one can write $$\label{eq:CG2}
x_{t+1} = x_t - \langle \nabla f(x_t) , p_t \rangle \frac{p_t}{\|p_t\|_A^2} .$$
The latter identity follows by differentiating
$\lambda \mapsto f(x_t + \lambda p_t)$, and using that
$\nabla f(x) = A x - b$. We also make an observation that will be useful
later, namely that $x_{t+1}$ is the minimizer of $f$ on
$x_0 + \mathrm{span}\{p_0, \hdots, p_t\}$, or equivalently
$$\label{eq:CG3prime}
\langle \nabla f(x_{t+1}), p_i \rangle = 0, \forall \ 0 \leq i \leq t.$$
Equation [\[eq:CG3prime\]](#eq:CG3prime){reference-type="eqref"
reference="eq:CG3prime"} is true by construction for $i=t$, and for
$i \leq t-1$ it follows by induction, assuming
[\[eq:CG3prime\]](#eq:CG3prime){reference-type="eqref"
reference="eq:CG3prime"} at $t=1$ and using the following formula:
$$\label{eq:CG3}
\nabla f(x_{t+1}) = \nabla f(x_{t}) - \langle \nabla f(x_{t}) , p_{t} \rangle \frac{A p_{t}}{\|p_t\|_A^2} .$$
We now claim that
$x_n = x^* = \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} f(x)$. It
suffices to show that
$\langle x_n -x_0 , p_t \rangle_A = \langle x^*-x_0 , p_t \rangle_A$ for
any $t\in \{0,\hdots,n-1\}$. Note that
$x_n - x_0 = - \sum_{t=0}^{n-1} \langle \nabla f(x_t), p_t \rangle \frac{p_t}{\|p_t\|_A^2}$,
and thus using that $x^* = A^{-1} b$, $$\begin{aligned}
\langle x_n -x_0 , p_t \rangle_A = - \langle \nabla f(x_t) , p_t \rangle = \langle b - A x_t , p_t \rangle & = & \langle x^* - x_t, p_t \rangle_A \\
& = & \langle x^* - x_0, p_t \rangle_A ,
\end{aligned}$$ which concludes the proof of $x_n = x^*$.
In order to have a proper black-box method it remains to describe how to
build iteratively the orthogonal set $\{p_0, \hdots, p_{n-1}\}$ based
only on gradient evaluations of $f$. A natural guess to obtain a set of
orthogonal directions (w.r.t. $\langle \cdot , \cdot \rangle_A$) is to
take $p_0 = \nabla f(x_0)$ and for $t \geq 1$, $$\label{eq:CG4}
p_t = \nabla f(x_t) - \langle \nabla f(x_t), p_{t-1} \rangle_A \ \frac{p_{t-1}}{\|p_{t-1}\|^2_A} .$$
Let us first verify by induction on $t \in [n-1]$ that for any
$i \in \{0,\hdots,t-2\}$, $\langle p_t, p_{i}\rangle_A = 0$ (observe
that for $i=t-1$ this is true by construction of $p_t$). Using the
induction hypothesis one can see that it is enough to show
$\langle \nabla f(x_t), p_i \rangle_A = 0$ for any
$i \in \{0, \hdots, t-2\}$, which we prove now. First observe that by
induction one easily obtains
$A p_i \in \mathrm{span}\{p_0, \hdots, p_{i+1}\}$ from
[\[eq:CG3\]](#eq:CG3){reference-type="eqref" reference="eq:CG3"} and
[\[eq:CG4\]](#eq:CG4){reference-type="eqref" reference="eq:CG4"}. Using
this fact together with
$\langle \nabla f(x_t), p_i \rangle_A = \langle \nabla f(x_t), A p_i \rangle$
and [\[eq:CG3prime\]](#eq:CG3prime){reference-type="eqref"
reference="eq:CG3prime"} thus concludes the proof of orthogonality of
the set $\{p_0, \hdots, p_{n-1}\}$.
We still have to show that [\[eq:CG4\]](#eq:CG4){reference-type="eqref"
reference="eq:CG4"} can be written by making only reference to the
gradients of $f$ at previous points. Recall that $x_{t+1}$ is the
minimizer of $f$ on $x_0 + \mathrm{span}\{p_0, \hdots, p_t\}$, and thus
given the form of $p_t$ we also have that $x_{t+1}$ is the minimizer of
$f$ on $x_0 + \mathrm{span}\{\nabla f(x_0), \hdots, \nabla f(x_t)\}$ (in
some sense the conjugate gradient is the *optimal* first order method
for convex quadratic functions). In particular one has
$\langle \nabla f(x_{t+1}) , \nabla f(x_t) \rangle = 0$. This fact,
together with the orthogonality of the set $\{p_t\}$ and
[\[eq:CG3\]](#eq:CG3){reference-type="eqref" reference="eq:CG3"}, imply
that
$$\frac{\langle \nabla f(x_{t+1}) , p_{t} \rangle_A}{\|p_t\|_A^2} = \langle \nabla f(x_{t+1}) , \frac{A p_{t}}{\|p_t\|_A^2} \rangle = - \frac{\langle \nabla f(x_{t+1}) , \nabla f(x_{t+1}) \rangle}{\langle \nabla f(x_{t}) , p_t \rangle} .$$
Furthermore using the definition
[\[eq:CG4\]](#eq:CG4){reference-type="eqref" reference="eq:CG4"} and
$\langle \nabla f(x_t) , p_{t-1} \rangle = 0$ one also has
$$\langle \nabla f(x_t), p_t \rangle = \langle \nabla f(x_t) , \nabla f(x_t) \rangle .$$
Thus we arrive at the following rewriting of the (linear) conjugate
gradient algorithm, where we recall that $x_0$ is some fixed starting
point and $p_0 = \nabla f(x_0)$, $$\begin{aligned}
x_{t+1} & = & \mathop{\mathrm{argmin}}_{x \in \left\{x_t + \lambda p_t, \ \lambda \in \mathbb{R}\right\}} f(x) , \label{eq:CG5} \\
p_{t+1} & = & \nabla f(x_{t+1}) + \frac{\langle \nabla f(x_{t+1}) , \nabla f(x_{t+1}) \rangle}{\langle \nabla f(x_{t}) , \nabla f(x_t) \rangle} p_t . \label{eq:CG6}
\end{aligned}$$ Observe that the algorithm defined by
[\[eq:CG5\]](#eq:CG5){reference-type="eqref" reference="eq:CG5"} and
[\[eq:CG6\]](#eq:CG6){reference-type="eqref" reference="eq:CG6"} makes
sense for an arbitrary convex function, in which case it is called the
*non-linear conjugate gradient*. There are many variants of the
non-linear conjugate gradient, and the above form is known as the
Fletcher-Reeves method. Another popular version in practice is the
Polak-Ribière method which is based on the fact that for the general
non-quadratic case one does not necessarily have
$\langle \nabla f(x_{t+1}) , \nabla f(x_t) \rangle = 0$, and thus one
replaces [\[eq:CG6\]](#eq:CG6){reference-type="eqref"
reference="eq:CG6"} by
$$p_{t+1} = \nabla f(x_{t+1}) + \frac{\langle \nabla f(x_{t+1}) - \nabla f(x_t), \nabla f(x_{t+1}) \rangle}{\langle \nabla f(x_{t}) , \nabla f(x_t) \rangle} p_t .$$
We refer to [@NW06] for more details about these algorithms, as well as
for advice on how to deal with the line search in
[\[eq:CG5\]](#eq:CG5){reference-type="eqref" reference="eq:CG5"}.
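For concreteness, here is a minimal numpy implementation of the linear conjugate gradient (our own sketch, not taken from the cited references), using the closed form of the exact line search for a quadratic together with the direction update derived above:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Linear CG for f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite:
    exact line search along p_t, then the Fletcher-Reeves style direction update."""
    x = x0.copy()
    g = A @ x - b          # gradient of f at x
    p = g.copy()           # p_0 = grad f(x_0)
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        Ap = A @ p
        step = (g @ p) / (p @ Ap)      # minimizer of f along x - lambda * p
        x = x - step * p
        g_new = A @ x - b
        p = g_new + (g_new @ g_new) / (g @ g) * p   # direction update
        g = g_new
    return x

# sanity check on a random positive definite system: x_n should solve A x = b
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(A @ conjugate_gradient(A, b, np.zeros(5)), b))
```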
Finally we also note that the linear conjugate gradient method can often
attain an approximate solution in much fewer than $n$ steps. More
precisely, denoting $\kappa$ for the condition number of $A$ (that is
the ratio of the largest eigenvalue to the smallest eigenvalue of $A$),
one can show that linear conjugate gradient attains an $\varepsilon$
optimal point in a number of iterations of order
$\sqrt{\kappa} \log(1/\varepsilon)$. The next chapter will demystify
this convergence rate, and in particular we will see that (i) this is
the optimal rate among first order methods, and (ii) there is a way to
generalize this rate to non-quadratic convex functions (though the
algorithm will have to be modified).
# Dimension-free convex optimization {#dimfree}
We investigate here variants of the *gradient descent* scheme. This
iterative algorithm, which can be traced back to [@Cau47], is the
simplest strategy to minimize a differentiable function $f$ on
$\mathbb{R}^n$. Starting at some initial point $x_1 \in \mathbb{R}^n$ it
iterates the following equation: $$\label{eq:Cau47}
x_{t+1} = x_t - \eta \nabla f(x_t) ,$$ where $\eta > 0$ is a fixed
step-size parameter. The rationale behind
[\[eq:Cau47\]](#eq:Cau47){reference-type="eqref" reference="eq:Cau47"}
is to make a small step in the direction that minimizes the local first
order Taylor approximation of $f$ (also known as the steepest descent
direction).
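In code the scheme is a one-line loop; the sketch below (ours) runs it on a toy quadratic simply to make the update concrete.

```python
import numpy as np

def gradient_descent(grad, x1, eta, T):
    """Plain gradient descent: x_{t+1} = x_t - eta * grad(x_t)."""
    x = np.array(x1, dtype=float)
    for _ in range(T):
        x = x - eta * grad(x)
    return x

# example: minimize f(x) = 0.5 ||x - c||^2, whose gradient is x - c
c = np.array([1.0, -2.0, 3.0])
print(gradient_descent(lambda x: x - c, np.zeros(3), eta=0.5, T=100))  # approaches c
```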
As we shall see, methods of the type
[\[eq:Cau47\]](#eq:Cau47){reference-type="eqref" reference="eq:Cau47"}
can obtain an oracle complexity *independent of the dimension*[^3]. This
feature makes them particularly attractive for optimization in very high
dimension.
Apart from Section [3.3](#sec:FW){reference-type="ref"
reference="sec:FW"}, in this chapter $\|\cdot\|$ denotes the Euclidean
norm. The set of constraints $\mathcal{X}\subset \mathbb{R}^n$ is
assumed to be compact and convex. We define the projection operator
$\Pi_{\mathcal{X}}$ on $\mathcal{X}$ by
$$\Pi_{\mathcal{X}}(x) = \mathop{\mathrm{argmin}}_{y \in \mathcal{X}} \|x - y\| .$$
The following lemma will prove to be useful in our study. It is an easy
corollary of Proposition
[\[prop:firstorder\]](#prop:firstorder){reference-type="ref"
reference="prop:firstorder"}, see also Figure
[\[fig:pythagore\]](#fig:pythagore){reference-type="ref"
reference="fig:pythagore"}.
::: lemma
[]{#lem:todonow label="lem:todonow"} Let $x \in \mathcal{X}$ and
$y \in \mathbb{R}^n$, then
$$(\Pi_{\mathcal{X}}(y) - x)^{\top} (\Pi_{\mathcal{X}}(y) - y) \leq 0 ,$$
which also implies
$\|\Pi_{\mathcal{X}}(y) - x\|^2 + \|y - \Pi_{\mathcal{X}}(y)\|^2 \leq \|y - x\|^2$.
:::
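As a quick numerical sanity check (ours), the two inequalities of the lemma can be verified on random points in the special case where $\mathcal{X}$ is the Euclidean unit ball, for which the projection has a closed form:

```python
import numpy as np

def project_ball(y, R=1.0):
    """Euclidean projection onto the ball of radius R centered at the origin."""
    nrm = np.linalg.norm(y)
    return y if nrm <= R else R * y / nrm

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(5)
    x /= max(1.0, np.linalg.norm(x))      # a point x in X (the unit ball)
    y = 3.0 * rng.standard_normal(5)      # an arbitrary point y in R^n
    p = project_ball(y)
    assert (p - x) @ (p - y) <= 1e-12
    assert (p - x) @ (p - x) + (y - p) @ (y - p) <= (y - x) @ (y - x) + 1e-12
```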
Unless specified otherwise all the proofs in this chapter are taken from
[@Nes04] (with slight simplification in some cases).
## Projected subgradient descent for Lipschitz functions {#sec:psgd}
In this section we assume that $\mathcal{X}$ is contained in an
Euclidean ball centered at $x_1 \in \mathcal{X}$ and of radius $R$.
Furthermore we assume that $f$ is such that for any $x \in \mathcal{X}$
and any $g \in \partial f(x)$ (we assume
$\partial f(x) \neq \emptyset$), one has $\|g\| \leq L$. Note that by
the subgradient inequality and Cauchy-Schwarz this implies that $f$ is
$L$-Lipschitz on $\mathcal{X}$, that is $|f(x) - f(y)| \leq L \|x-y\|$.
In this context we make two modifications to the basic gradient descent
[\[eq:Cau47\]](#eq:Cau47){reference-type="eqref" reference="eq:Cau47"}.
First, obviously, we replace the gradient $\nabla f(x)$ (which may not
exist) by a subgradient $g \in \partial f(x)$. Secondly, and more
importantly, we make sure that the updated point lies in $\mathcal{X}$
by projecting back (if necessary) onto it. This gives the *projected
subgradient descent* algorithm[^4] which iterates the following
equations for $t \geq 1$: $$\begin{aligned}
& y_{t+1} = x_t - \eta g_t , \ \text{where} \ g_t \in \partial f(x_t) , \label{eq:PGD1}\\
& x_{t+1} = \Pi_{\mathcal{X}}(y_{t+1}) . \label{eq:PGD2}
\end{aligned}$$ This procedure is illustrated in Figure
[\[fig:pgd\]](#fig:pgd){reference-type="ref" reference="fig:pgd"}. We
prove now a rate of convergence for this method under the above
assumptions.
::: theorem
[]{#th:pgd label="th:pgd"} The projected subgradient descent method with
$\eta = \frac{R}{L \sqrt{t}}$ satisfies
$$f\left(\frac{1}{t} \sum_{s=1}^t x_s\right) - f(x^*) \leq \frac{R L}{\sqrt{t}} .$$
:::
::: proof
*Proof.* Using the definition of subgradients, the definition of the
method, and the elementary identity
$2 a^{\top} b = \|a\|^2 + \|b\|^2 - \|a-b\|^2$, one obtains
$$\begin{aligned}
f(x_s) - f(x^*) & \leq & g_s^{\top} (x_s - x^*) \\
& = & \frac{1}{\eta} (x_s - y_{s+1})^{\top} (x_s - x^*) \\
& = & \frac{1}{2 \eta} \left(\|x_s - x^*\|^2 + \|x_s - y_{s+1}\|^2 - \|y_{s+1} - x^*\|^2\right) \\
& = & \frac{1}{2 \eta} \left(\|x_s - x^*\|^2 - \|y_{s+1} - x^*\|^2\right) + \frac{\eta}{2} \|g_s\|^2.
\end{aligned}$$ Now note that $\|g_s\| \leq L$, and furthermore by Lemma
[\[lem:todonow\]](#lem:todonow){reference-type="ref"
reference="lem:todonow"} $$\|y_{s+1} - x^*\| \geq \|x_{s+1} - x^*\| .$$
Summing the resulting inequality over $s$, and using that
$\|x_1 - x^*\| \leq R$ yields
$$\sum_{s=1}^t \left( f(x_s) - f(x^*) \right) \leq \frac{R^2}{2 \eta} + \frac{\eta L^2 t}{2} .$$
Plugging in the value of $\eta$ directly gives the statement (recall
that by convexity
$f((1/t) \sum_{s=1}^t x_s) \leq \frac1{t} \sum_{s=1}^t f(x_s)$). ◻
:::
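For concreteness, here is a minimal sketch (ours) of the method with the step size of the theorem, in the special case where $\mathcal{X}$ is itself the Euclidean ball of radius $R$ centered at $x_1$, so that the projection step has a closed form; `subgrad` is a hypothetical callable returning an element of $\partial f(x)$.

```python
import numpy as np

def project_ball(y, center, R):
    """Euclidean projection onto the ball of radius R centered at `center`."""
    d = y - center
    nrm = np.linalg.norm(d)
    return y if nrm <= R else center + R * d / nrm

def projected_subgradient(subgrad, x1, R, L, T):
    """Projected subgradient descent with eta = R / (L sqrt(T)), taking X to be
    the Euclidean ball of radius R centered at x1; returns the average of the
    iterates x_1, ..., x_T, which is what the theorem controls."""
    eta = R / (L * np.sqrt(T))
    x = np.array(x1, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(T):
        avg += x / T
        x = project_ball(x - eta * subgrad(x), x1, R)
    return avg

# example: f(x) = ||x||_1 (so L = sqrt(n) in the Euclidean norm), with X the unit
# ball centered at x1 = ones/sqrt(n); the minimizer is the origin, on the boundary
n = 5
x_bar = projected_subgradient(np.sign, np.ones(n) / np.sqrt(n), R=1.0, L=np.sqrt(n), T=10000)
print(np.linalg.norm(x_bar))   # small, as predicted by the R L / sqrt(T) bound
```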
We will show in Section [3.5](#sec:chap3LB){reference-type="ref"
reference="sec:chap3LB"} that the rate given in Theorem
[\[th:pgd\]](#th:pgd){reference-type="ref" reference="th:pgd"} is
unimprovable from a black-box perspective. Thus to reach an
$\varepsilon$-optimal point one needs $\Theta(1/\varepsilon^2)$ calls to
the oracle. In some sense this is an astonishing result as this
complexity is independent[^5] of the ambient dimension $n$. On the other
hand this is also quite disappointing compared to the scaling in
$\log(1/\varepsilon)$ of the center of gravity and ellipsoid method of
Chapter [2](#finitedim){reference-type="ref" reference="finitedim"}. To
put it differently with gradient descent one could hope to reach a
reasonable accuracy in very high dimension, while with the ellipsoid
method one can reach very high accuracy in reasonably small dimension. A
major task in the following sections will be to explore more restrictive
assumptions on the function to be optimized in order to have the best of
both worlds, that is an oracle complexity independent of the dimension
and with a scaling in $\log(1/\varepsilon)$.
The computational bottleneck of the projected subgradient descent is
often the projection step [\[eq:PGD2\]](#eq:PGD2){reference-type="eqref"
reference="eq:PGD2"} which is a convex optimization problem by itself.
In some cases this problem may admit an analytical solution (think of
$\mathcal{X}$ being an Euclidean ball), or an easy and fast
combinatorial algorithm to solve it (this is the case for $\mathcal{X}$
being an $\ell_1$-ball, see [@MP89]). We will see in Section
[3.3](#sec:FW){reference-type="ref" reference="sec:FW"} a
projection-free algorithm which operates under an extra assumption of
smoothness on the function to be optimized.
Finally we observe that the step-size recommended by Theorem
[\[th:pgd\]](#th:pgd){reference-type="ref" reference="th:pgd"} depends
on the number of iterations to be performed. In practice this may be an
undesirable feature. However using a time-varying step size of the form
$\eta_s = \frac{R}{L \sqrt{s}}$ one can prove the same rate up to a
$\log t$ factor. In any case these step sizes are very small, which is
the reason for the slow convergence. In the next section we will see
that by assuming *smoothness* in the function $f$ one can afford to be
much more aggressive. Indeed in this case, as one approaches the optimum
the size of the gradients themselves will go to $0$, resulting in a sort
of "auto-tuning\" of the step sizes which does not happen for an
arbitrary convex function.
## Gradient descent for smooth functions {#sec:gdsmooth}
We say that a continuously differentiable function $f$ is $\beta$-smooth
if the gradient $\nabla f$ is $\beta$-Lipschitz, that is
$$\|\nabla f(x) - \nabla f(y) \| \leq \beta \|x-y\| .$$ Note that if $f$
is twice differentiable then this is equivalent to the eigenvalues of
the Hessians being smaller than $\beta$. In this section we explore
potential improvements in the rate of convergence under such a
smoothness assumption. In order to avoid technicalities we consider
first the unconstrained situation, where $f$ is a convex and
$\beta$-smooth function on $\mathbb{R}^n$. The next theorem shows that
*gradient descent*, which iterates $x_{t+1} = x_t - \eta \nabla f(x_t)$,
attains a much faster rate in this situation than in the non-smooth case
of the previous section.
::: theorem
[]{#th:gdsmooth label="th:gdsmooth"} Let $f$ be convex and
$\beta$-smooth on $\mathbb{R}^n$. Then gradient descent with
$\eta = \frac{1}{\beta}$ satisfies
$$f(x_t) - f(x^*) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t-1} .$$
:::
Before embarking on the proof we state a few properties of smooth convex
functions.
::: lemma
[]{#lem:sand label="lem:sand"} Let $f$ be a $\beta$-smooth function on
$\mathbb{R}^n$. Then for any $x, y \in \mathbb{R}^n$, one has
$$|f(x) - f(y) - \nabla f(y)^{\top} (x - y)| \leq \frac{\beta}{2} \|x - y\|^2 .$$
:::
::: proof
*Proof.* We represent $f(x) - f(y)$ as an integral, apply Cauchy-Schwarz
and then $\beta$-smoothness: $$\begin{aligned}
& |f(x) - f(y) - \nabla f(y)^{\top} (x - y)| \\
& = \left|\int_0^1 \nabla f(y + t(x-y))^{\top} (x-y) dt - \nabla f(y)^{\top} (x - y) \right| \\
& \leq \int_0^1 \|\nabla f(y + t(x-y)) - \nabla f(y)\| \cdot \|x - y\| dt \\
& \leq \int_0^1 \beta t \|x-y\|^2 dt \\
& = \frac{\beta}{2} \|x-y\|^2 .
\end{aligned}$$ ◻
:::
In particular this lemma shows that if $f$ is convex and $\beta$-smooth,
then for any $x, y \in \mathbb{R}^n$, one has $$\label{eq:defaltsmooth}
0 \leq f(x) - f(y) - \nabla f(y)^{\top} (x - y) \leq \frac{\beta}{2} \|x - y\|^2 .$$
This gives in particular the following important inequality to evaluate
the improvement in one step of gradient descent:
$$\label{eq:onestepofgd}
f\left(x - \frac{1}{\beta} \nabla f(x)\right) - f(x) \leq - \frac{1}{2 \beta} \|\nabla f(x)\|^2 .$$
The next lemma, which improves the basic inequality for subgradients
under the smoothness assumption, shows that in fact $f$ is convex and
$\beta$-smooth if and only if
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} holds true. In the literature
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} is often used as a definition of smooth
convex functions.
::: lemma
[]{#lem:2 label="lem:2"} Let $f$ be such that
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} holds true. Then for any
$x, y \in \mathbb{R}^n$, one has
$$f(x) - f(y) \leq \nabla f(x)^{\top} (x - y) - \frac{1}{2 \beta} \|\nabla f(x) - \nabla f(y)\|^2 .$$
:::
::: proof
*Proof.* Let $z = y - \frac{1}{\beta} (\nabla f(y) - \nabla f(x))$. Then
one has $$\begin{aligned}
& f(x) - f(y) \\
& = f(x) - f(z) + f(z) - f(y) \\
& \leq \nabla f(x)^{\top} (x-z) + \nabla f(y)^{\top} (z-y) + \frac{\beta}{2} \|z - y\|^2 \\
& = \nabla f(x)^{\top}(x-y) + (\nabla f(x) - \nabla f(y))^{\top} (y-z) + \frac{1}{2 \beta} \|\nabla f(x) - \nabla f(y)\|^2 \\
& = \nabla f(x)^{\top} (x - y) - \frac{1}{2 \beta} \|\nabla f(x) - \nabla f(y)\|^2 .
\end{aligned}$$ ◻
:::
We can now prove Theorem
[\[th:gdsmooth\]](#th:gdsmooth){reference-type="ref"
reference="th:gdsmooth"}
::: proof
*Proof.* Using
[\[eq:onestepofgd\]](#eq:onestepofgd){reference-type="eqref"
reference="eq:onestepofgd"} and the definition of the method one has
$$f(x_{s+1}) - f(x_s) \leq - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2.$$ In
particular, denoting $\delta_s = f(x_s) - f(x^*)$, this shows:
$$\delta_{s+1} \leq \delta_s - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2.$$
One also has by convexity
$$\delta_s \leq \nabla f(x_s)^{\top} (x_s - x^*) \leq \|x_s - x^*\| \cdot \|\nabla f(x_s)\| .$$
We will prove that $\|x_s - x^*\|$ is decreasing with $s$, which with
the two above displays will imply
$$\delta_{s+1} \leq \delta_s - \frac{1}{2 \beta \|x_1 - x^*\|^2} \delta_s^2.$$
Let us see how to use this last inequality to conclude the proof. Let
$\omega = \frac{1}{2 \beta \|x_1 - x^*\|^2}$, then[^6]
$$\omega \delta_s^2 + \delta_{s+1} \leq \delta_s \Leftrightarrow \omega \frac{\delta_s}{\delta_{s+1}} + \frac{1}{\delta_{s}} \leq \frac{1}{\delta_{s+1}} \Rightarrow \frac{1}{\delta_{s+1}} - \frac{1}{\delta_{s}} \geq \omega \Rightarrow \frac{1}{\delta_t} \geq \omega (t-1) .$$
Thus it only remains to show that $\|x_s - x^*\|$ is decreasing with
$s$. Using Lemma [\[lem:2\]](#lem:2){reference-type="ref"
reference="lem:2"} one immediately gets $$\label{eq:coercive1}
(\nabla f(x) - \nabla f(y))^{\top} (x - y) \geq \frac{1}{\beta} \|\nabla f(x) - \nabla f(y)\|^2 .$$
We use this as follows (together with $\nabla f(x^*) = 0$)
$$\begin{aligned}
\|x_{s+1} - x^*\|^2& = & \|x_{s} - \frac{1}{\beta} \nabla f(x_s) - x^*\|^2 \\
& = & \|x_{s} - x^*\|^2 - \frac{2}{\beta} \nabla f(x_s)^{\top} (x_s - x^*) + \frac{1}{\beta^2} \|\nabla f(x_s)\|^2 \\
& \leq & \|x_{s} - x^*\|^2 - \frac{1}{\beta^2} \|\nabla f(x_s)\|^2 \\
& \leq & \|x_{s} - x^*\|^2 ,
\end{aligned}$$ which concludes the proof. ◻
:::
### The constrained case {#the-constrained-case .unnumbered}
We now come back to the constrained problem $$\begin{aligned}
& \mathrm{min.} \; f(x) \\
& \text{s.t.} \; x \in \mathcal{X}.
\end{aligned}$$ Similarly to what we did in Section
[3.1](#sec:psgd){reference-type="ref" reference="sec:psgd"} we consider
the projected gradient descent algorithm, which iterates
$x_{t+1} = \Pi_{\mathcal{X}}(x_t - \eta \nabla f(x_t))$.
The key point in the analysis of gradient descent for unconstrained
smooth optimization is that a step of gradient descent started at $x$
will decrease the function value by at least
$\frac{1}{2\beta} \|\nabla f(x)\|^2$, see
[\[eq:onestepofgd\]](#eq:onestepofgd){reference-type="eqref"
reference="eq:onestepofgd"}. In the constrained case we cannot expect
that this would still hold true as a step may be cut short by the
projection. The next lemma defines the "right\" quantity to measure
progress in the constrained case.
::: lemma
[]{#lem:smoothconst label="lem:smoothconst"} Let $x, y \in \mathcal{X}$,
$x^+ = \Pi_{\mathcal{X}}\left(x - \frac{1}{\beta} \nabla f(x)\right)$,
and $g_{\mathcal{X}}(x) = \beta(x - x^+)$. Then the following holds
true:
$$f(x^+) - f(y) \leq g_{\mathcal{X}}(x)^{\top}(x-y) - \frac{1}{2 \beta} \|g_{\mathcal{X}}(x)\|^2 .$$
:::
::: proof
*Proof.* We first observe that $$\label{eq:chap3eq1}
\nabla f(x)^{\top} (x^+ - y) \leq g_{\mathcal{X}}(x)^{\top}(x^+ - y) .$$
Indeed the above inequality is equivalent to
$$\left(x^+- \left(x - \frac{1}{\beta} \nabla f(x) \right)\right)^{\top} (x^+ - y) \leq 0,$$
which follows from Lemma
[\[lem:todonow\]](#lem:todonow){reference-type="ref"
reference="lem:todonow"}. Now we use
[\[eq:chap3eq1\]](#eq:chap3eq1){reference-type="eqref"
reference="eq:chap3eq1"} as follows to prove the lemma (we also use
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} which still holds true in the constrained
case) $$\begin{aligned}
& f(x^+) - f(y) \\
& = f(x^+) - f(x) + f(x) - f(y) \\
& \leq \nabla f(x)^{\top} (x^+-x) + \frac{\beta}{2} \|x^+-x\|^2 + \nabla f(x)^{\top} (x-y) \\
& = \nabla f(x)^{\top} (x^+ - y) + \frac{1}{2 \beta} \|g_{\mathcal{X}}(x)\|^2 \\
& \leq g_{\mathcal{X}}(x)^{\top}(x^+ - y) + \frac{1}{2 \beta} \|g_{\mathcal{X}}(x)\|^2 \\
& = g_{\mathcal{X}}(x)^{\top}(x - y) - \frac{1}{2 \beta} \|g_{\mathcal{X}}(x)\|^2 .
\end{aligned}$$ ◻
:::
We can now prove the following result.
::: theorem
[]{#th:gdsmoothconstrained label="th:gdsmoothconstrained"} Let $f$ be
convex and $\beta$-smooth on $\mathcal{X}$. Then projected gradient
descent with $\eta = \frac{1}{\beta}$ satisfies
$$f(x_t) - f(x^*) \leq \frac{3 \beta \|x_1 - x^*\|^2 + f(x_1) - f(x^*)}{t} .$$
:::
::: proof
*Proof.* Lemma
[\[lem:smoothconst\]](#lem:smoothconst){reference-type="ref"
reference="lem:smoothconst"} immediately gives
$$f(x_{s+1}) - f(x_s) \leq - \frac{1}{2 \beta} \|g_{\mathcal{X}}(x_s)\|^2 ,$$
and
$$f(x_{s+1}) - f(x^*) \leq \|g_{\mathcal{X}}(x_s)\| \cdot \|x_s - x^*\| .$$
We will prove that $\|x_s - x^*\|$ is decreasing with $s$, which with
the two above displays will imply
$$\delta_{s+1} \leq \delta_s - \frac{1}{2 \beta \|x_1 - x^*\|^2} \delta_{s+1}^2.$$
An easy induction shows that
$$\delta_s \leq \frac{3 \beta \|x_1 - x^*\|^2 + f(x_1) - f(x^*)}{s}.$$
Thus it only remains to show that $\|x_s - x^*\|$ is decreasing with
$s$. Using Lemma
[\[lem:smoothconst\]](#lem:smoothconst){reference-type="ref"
reference="lem:smoothconst"} one can see that
$g_{\mathcal{X}}(x_s)^{\top} (x_s - x^*) \geq \frac{1}{2 \beta} \|g_{\mathcal{X}}(x_s)\|^2$
which implies $$\begin{aligned}
\|x_{s+1} - x^*\|^2& = & \|x_{s} - \frac{1}{\beta} g_{\mathcal{X}}(x_s) - x^*\|^2 \\
& = & \|x_{s} - x^*\|^2 - \frac{2}{\beta} g_{\mathcal{X}}(x_s)^{\top} (x_s - x^*) + \frac{1}{\beta^2} \|g_{\mathcal{X}}(x_s)\|^2 \\
& \leq & \|x_{s} - x^*\|^2 .
\end{aligned}$$ ◻
:::
## Conditional gradient descent, aka Frank-Wolfe {#sec:FW}
We describe now an alternative algorithm to minimize a smooth convex
function $f$ over a compact convex set $\mathcal{X}$. The *conditional
gradient descent*, introduced in [@FW56], performs the following update
for $t \geq 1$, where $(\gamma_s)_{s \geq 1}$ is a fixed sequence,
$$\begin{aligned}
&y_{t} \in \mathrm{argmin}_{y \in \mathcal{X}} \nabla f(x_t)^{\top} y \label{eq:FW1} \\
& x_{t+1} = (1 - \gamma_t) x_t + \gamma_t y_t . \label{eq:FW2}
\end{aligned}$$ In words conditional gradient descent makes a step in
the steepest descent direction *given the constraint set $\mathcal{X}$*,
see Figure [\[fig:FW\]](#fig:FW){reference-type="ref"
reference="fig:FW"} for an illustration. From a computational
perspective, a key property of this scheme is that it replaces the
projection step of projected gradient descent by a linear optimization
over $\mathcal{X}$, which in some cases can be a much simpler problem.
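As a concrete illustration, here is a minimal sketch (ours) of the scheme over the probability simplex, where the linear minimization step simply picks the vertex $e_i$ with $i$ the index of the smallest gradient coordinate:

```python
import numpy as np

def frank_wolfe_simplex(grad, n, T):
    """Conditional gradient descent over the simplex {x >= 0, sum_i x_i = 1}:
    the linear minimization is solved by a vertex e_i of the simplex."""
    x = np.zeros(n)
    x[0] = 1.0                           # x_1 is a vertex of the simplex
    for t in range(1, T + 1):
        g = grad(x)
        i = int(np.argmin(g))            # y_t = e_i minimizes g^T y over the simplex
        gamma = 2.0 / (t + 1)
        x *= (1.0 - gamma)
        x[i] += gamma                    # x_{t+1} = (1 - gamma) x_t + gamma e_i
    return x

# example: minimize f(x) = 0.5 ||x - c||^2 over the simplex, for a c inside it
c = np.array([0.2, 0.5, 0.3])
print(np.round(frank_wolfe_simplex(lambda x: x - c, 3, 500), 3))   # roughly recovers c
```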
We now turn to the analysis of this method. A major advantage of
conditional gradient descent over projected gradient descent is that the
former can adapt to smoothness in an arbitrary norm. Precisely let $f$
be $\beta$-smooth in some norm $\|\cdot\|$, that is
$\|\nabla f(x) - \nabla f(y) \|_* \leq \beta \|x-y\|$ where the dual
norm $\|\cdot\|_*$ is defined as
$\|g\|_* = \sup_{x \in \mathbb{R}^n : \|x\| \leq 1} g^{\top} x$. The
following result is extracted from [@Jag13] (see also [@DH78]).
::: theorem
Let $f$ be a convex and $\beta$-smooth function w.r.t. some norm
$\|\cdot\|$, $R = \sup_{x, y \in \mathcal{X}} \|x - y\|$, and
$\gamma_s = \frac{2}{s+1}$ for $s \geq 1$. Then for any $t \geq 2$, one
has $$f(x_t) - f(x^*) \leq \frac{2 \beta R^2}{t+1} .$$
:::
::: proof
*Proof.* The following inequalities hold true, using respectively
$\beta$-smoothness (it can easily be seen that
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} holds true for smoothness in an arbitrary
norm), the definition of $x_{s+1}$, the definition of $y_s$, and the
convexity of $f$: $$\begin{aligned}
f(x_{s+1}) - f(x_s) & \leq & \nabla f(x_s)^{\top} (x_{s+1} - x_s) + \frac{\beta}{2} \|x_{s+1} - x_s\|^2 \\
& \leq & \gamma_s \nabla f(x_s)^{\top} (y_{s} - x_s) + \frac{\beta}{2} \gamma_s^2 R^2 \\
& \leq & \gamma_s \nabla f(x_s)^{\top} (x^* - x_s) + \frac{\beta}{2} \gamma_s^2 R^2 \\
& \leq & \gamma_s (f(x^*) - f(x_s)) + \frac{\beta}{2} \gamma_s^2 R^2 .
\end{aligned}$$ Rewriting this inequality in terms of
$\delta_s = f(x_s) - f(x^*)$ one obtains
$$\delta_{s+1} \leq (1 - \gamma_s) \delta_s + \frac{\beta}{2} \gamma_s^2 R^2 .$$
A simple induction using that $\gamma_s = \frac{2}{s+1}$ finishes the
proof (note that the initialization is done at step $2$ with the above
inequality yielding $\delta_2 \leq \frac{\beta}{2} R^2$). ◻
:::
In addition to being projection-free and "norm-free\", the conditional
gradient descent satisfies a perhaps even more important property: it
produces *sparse iterates*. More precisely consider the situation where
$\mathcal{X}\subset \mathbb{R}^n$ is a polytope, that is the convex hull
of a finite set of points (these points are called the vertices of
$\mathcal{X}$). Then Carathéodory's theorem states that any point
$x \in \mathcal{X}$ can be written as a convex combination of at most
$n+1$ vertices of $\mathcal{X}$. On the other hand, by definition of the
conditional gradient descent, one knows that the $t^{th}$ iterate $x_t$
can be written as a convex combination of $t$ vertices (assuming that
$x_1$ is a vertex). Thanks to the dimension-free rate of convergence one
is usually interested in the regime where $t \ll n$, and thus we see
that the iterates of conditional gradient descent are very sparse in
their vertex representation.
We note an interesting corollary of the sparsity property together with
the rate of convergence we proved: smooth functions on the simplex
$\{x \in \mathbb{R}_+^n : \sum_{i=1}^n x_i = 1\}$ always admit sparse
approximate minimizers. More precisely there must exist a point $x$ with
only $t$ non-zero coordinates and such that $f(x) - f(x^*) = O(1/t)$.
Clearly this is the best one can hope for in general, as it can be seen
with the function $f(x) = \|x\|^2_2$ since by Cauchy-Schwarz one has
$\|x\|_1 \leq \sqrt{\|x\|_0} \|x\|_2$ which implies on the simplex
$\|x\|_2^2 \geq 1 / \|x\|_0$.
Next we describe an application where the three properties of
conditional gradient descent (projection-free, norm-free, and sparse
iterates) are critical to develop a computationally efficient procedure.
### An application of conditional gradient descent: Least-squares regression with structured sparsity {#an-application-of-conditional-gradient-descent-least-squares-regression-with-structured-sparsity .unnumbered}
This example is inspired by [@Lug10] (see also [@Jon92]). Consider the
problem of approximating a signal $Y \in \mathbb{R}^n$ by a "small\"
combination of dictionary elements $d_1, \hdots, d_N \in \mathbb{R}^n$.
One way to do this is to consider a LASSO type problem in dimension $N$
of the following form (with $\lambda \in \mathbb{R}$ fixed)
$$\min_{x \in \mathbb{R}^N} \big\| Y - \sum_{i=1}^N x(i) d_i \big\|_2^2 + \lambda \|x\|_1 .$$
Let $D \in \mathbb{R}^{n \times N}$ be the dictionary matrix with
$i^{th}$ column given by $d_i$. Instead of considering the penalized
version of the problem one could look at the following constrained
problem (with $s \in \mathbb{R}$ fixed) on which we will now focus, see
e.g. [@FT07], $$\begin{aligned}
\min_{x \in \mathbb{R}^N} \| Y - D x \|_2^2
& \qquad \Leftrightarrow \qquad & \min_{x \in \mathbb{R}^N} \| Y / s - D x \|_2^2 \label{eq:structuredsparsity} \\
\text{subject to} \; \|x\|_1 \leq s
& & \text{subject to} \; \|x\|_1 \leq 1 . \notag
\end{aligned}$$ We make some assumptions on the dictionary. We are
interested in situations where the size of the dictionary $N$ can be
very large, potentially exponential in the ambient dimension $n$.
Nonetheless we want to restrict our attention to algorithms that run in
reasonable time with respect to the ambient dimension $n$, that is we
want polynomial time algorithms in $n$. Of course in general this is
impossible, and we need to assume that the dictionary has some structure
that can be exploited. Here we make the assumption that one can do
*linear optimization* over the dictionary in polynomial time in $n$.
More precisely we assume that one can solve in time $p(n)$ (where $p$ is
polynomial) the following problem for any $y \in \mathbb{R}^n$:
$$\min_{1 \leq i \leq N} y^{\top} d_i .$$ This assumption is met for
many *combinatorial* dictionaries. For instance the dictionary elements
could be the incidence vectors of spanning trees in some fixed graph, in
which case the linear optimization problem can be solved with a greedy
algorithm.
Finally, for normalization purposes, we assume that the $\ell_2$-norms of
the dictionary elements are controlled by some $m>0$, that is
$\|d_i\|_2 \leq m, \forall i \in [N]$.
Our problem of interest
[\[eq:structuredsparsity\]](#eq:structuredsparsity){reference-type="eqref"
reference="eq:structuredsparsity"} corresponds to minimizing the
function $f(x) = \frac{1}{2} \| Y - D x \|^2_2$ on the $\ell_1$-ball of
$\mathbb{R}^N$ in polynomial time in $n$. At first sight this task may
seem completely impossible: indeed, one is not even allowed to write down
a vector $x \in \mathbb{R}^N$ in full (since this would take time
linear in $N$). The key property that will save us is that this function
admits *sparse minimizers* as we discussed in the previous section, and
this will be exploited by the conditional gradient descent method.
First let us study the computational complexity of the $t^{th}$ step of
conditional gradient descent. Observe that
$$\nabla f(x) = D^{\top} (D x - Y).$$ Now assume that
$z_t = D x_t - Y \in \mathbb{R}^n$ is already computed, then to compute
[\[eq:FW1\]](#eq:FW1){reference-type="eqref" reference="eq:FW1"} one
needs to find the coordinate $i_t \in [N]$ that maximizes
$|[\nabla f(x_t)](i)|$ which can be done by maximizing $d_i^{\top} z_t$
and $- d_i^{\top} z_t$. Thus
[\[eq:FW1\]](#eq:FW1){reference-type="eqref" reference="eq:FW1"} takes
time $O(p(n))$. Computing $x_{t+1}$ from $x_t$ and $i_{t}$ takes time
$O(t)$ since $\|x_t\|_0 \leq t$, and computing $z_{t+1}$ from $z_t$ and
$i_t$ takes time $O(n)$. Thus the overall time complexity of running $t$
steps is (we assume $p(n) = \Omega(n)$)
$$O(t p(n) + t^2). \label{eq:structuredsparsity2}$$
To derive a rate of convergence it remains to study the smoothness of
$f$. This can be done as follows: $$\begin{aligned}
\| \nabla f(x) - \nabla f(y) \|_{\infty} & = & \|D^{\top} D (x-y) \|_{\infty} \\
& = & \max_{1 \leq i \leq N} \bigg| d_i^{\top} \left(\sum_{j=1}^N d_j (x(j) - y(j))\right) \bigg| \\
& \leq & m^2 \|x-y\|_1 ,
\end{aligned}$$ which means that $f$ is $m^2$-smooth with respect to the
$\ell_1$-norm. Thus we get the following rate of convergence:
$$f(x_t) - f(x^*) \leq \frac{8 m^2}{t+1} . \label{eq:structuredsparsity3}$$
Putting together
[\[eq:structuredsparsity2\]](#eq:structuredsparsity2){reference-type="eqref"
reference="eq:structuredsparsity2"} and
[\[eq:structuredsparsity3\]](#eq:structuredsparsity3){reference-type="eqref"
reference="eq:structuredsparsity3"} we proved that one can get an
$\varepsilon$-optimal solution to
[\[eq:structuredsparsity\]](#eq:structuredsparsity){reference-type="eqref"
reference="eq:structuredsparsity"} with a computational effort of
$O(m^2 p(n)/\varepsilon+ m^4/\varepsilon^2)$ using the conditional
gradient descent.
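The following sketch (Python with NumPy) implements the steps just described: a brute-force linear minimization oracle stands in for the polynomial-time combinatorial oracle assumed above, the dictionary and the signal are random and purely illustrative, the iterate is stored as a sparse dictionary of coefficients, and only $z_t = D x_t - Y$ is maintained in $\mathbb{R}^n$.

```python
import numpy as np

def lmo(z, D):
    """Linear minimization oracle: returns (i, d_i) minimizing z^T d_i (brute force here)."""
    i = int(np.argmin(D.T @ z))
    return i, D[:, i]

def cgd_dictionary(Y, D, T):
    """Conditional gradient descent for min_x 0.5 ||Y - D x||_2^2 over the l1-ball ||x||_1 <= 1."""
    x = {}                          # sparse iterate, stored as {coordinate: coefficient}
    z = -Y.copy()                   # z_t = D x_t - Y, starting from x_1 = 0
    for t in range(1, T + 1):
        i1, d1 = lmo(z, D)          # most negative d_i^T z
        i2, d2 = lmo(-z, D)         # most positive d_i^T z
        if -(d1 @ z) >= d2 @ z:     # the gradient coordinate i1 is largest in absolute value
            i, d, coef = i1, d1, 1.0
        else:
            i, d, coef = i2, d2, -1.0
        gamma = 2.0 / (t + 1)
        x = {j: (1 - gamma) * v for j, v in x.items()}
        x[i] = x.get(i, 0.0) + gamma * coef
        z = (1 - gamma) * z + gamma * (coef * d - Y)    # z_{t+1} = D x_{t+1} - Y in O(n) time
    return x, z

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200)); D /= np.linalg.norm(D, axis=0)
Y = D[:, :5] @ np.ones(5) / 5.0                         # a signal built from 5 dictionary elements
x, z = cgd_dictionary(Y, D, T=100)
print(len(x), "non-zero coefficients, residual", 0.5 * (z @ z))
```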
## Strong convexity
We will now discuss another property of convex functions that can
significantly speed-up the convergence of first order methods: strong
convexity. We say that $f: \mathcal{X}\rightarrow \mathbb{R}$ is
$\alpha$-*strongly convex* if it satisfies the following improved
subgradient inequality: $$\label{eq:defstrongconv}
f(x) - f(y) \leq \nabla f(x)^{\top} (x - y) - \frac{\alpha}{2} \|x - y \|^2 .$$
Of course this definition does not require differentiability of the
function $f$, and one can replace $\nabla f(x)$ in the inequality above
by $g \in \partial f(x)$. It is immediate to verify that a function $f$
is $\alpha$-strongly convex if and only if
$x \mapsto f(x) - \frac{\alpha}{2} \|x\|^2$ is convex (in particular if
$f$ is twice differentiable then the eigenvalues of the Hessians of $f$
have to be larger than $\alpha$). The strong convexity parameter
$\alpha$ is a measure of the *curvature* of $f$. For instance a linear
function has no curvature and hence $\alpha = 0$. On the other hand one
can clearly see why a large value of $\alpha$ would lead to a faster
rate: in this case a point far from the optimum will have a large
gradient, and thus gradient descent will make very big steps when far
from the optimum. Of course if the function is non-smooth one still has
to be careful and tune the step-sizes to be relatively small, but
nonetheless we will be able to improve the oracle complexity from
$O(1/\varepsilon^2)$ to $O(1/(\alpha \varepsilon))$. On the other hand
with the additional assumption of $\beta$-smoothness we will prove that
gradient descent with a constant step-size achieves a *linear rate of
convergence*, precisely the oracle complexity will be
$O(\frac{\beta}{\alpha} \log(1/\varepsilon))$. This achieves the
objective we had set after Theorem
[\[th:pgd\]](#th:pgd){reference-type="ref" reference="th:pgd"}:
strongly-convex and smooth functions can be optimized in very large
dimension and up to very high accuracy.
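For a twice differentiable function the parameters $\alpha$ and $\beta$ bracket the eigenvalues of the Hessian, so both can simply be read off its extreme eigenvalues. Here is a quick numerical illustration on a quadratic (Python/NumPy sketch; the random positive definite Hessian is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
Q = M.T @ M + np.eye(50)                    # symmetric positive definite Hessian of f(x) = 0.5 x^T Q x
eigs = np.linalg.eigvalsh(Q)
alpha, beta = eigs[0], eigs[-1]             # strong convexity and smoothness parameters
kappa = beta / alpha
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}, condition number kappa = {kappa:.2f}")
```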
Before going into the proofs let us discuss another interpretation of
strong-convexity and its relation to smoothness. Equation
[\[eq:defstrongconv\]](#eq:defstrongconv){reference-type="eqref"
reference="eq:defstrongconv"} can be read as follows: at any point $x$
one can find a (convex) quadratic lower bound
$q_x^-(y) = f(x) + \nabla f(x)^{\top} (y - x) + \frac{\alpha}{2} \|x - y \|^2$
to the function $f$, i.e.
$q_x^-(y) \leq f(y), \forall y \in \mathcal{X}$ (and $q_x^-(x) = f(x)$).
On the other hand for $\beta$-smoothness
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} implies that at any point $y$ one can find
a (convex) quadratic upper bound
$q_y^+(x) = f(y) + \nabla f(y)^{\top} (x - y) + \frac{\beta}{2} \|x - y \|^2$
to the function $f$, i.e.
$q_y^+(x) \geq f(x), \forall x \in \mathcal{X}$ (and $q_y^+(y) = f(y)$).
Thus in some sense strong convexity is a *dual* assumption to
smoothness, and in fact this can be made precise within the framework of
Fenchel duality. Also remark that clearly one always has
$\beta \geq \alpha$.
### Strongly convex and Lipschitz functions
We consider here the projected subgradient descent algorithm with
time-varying step size $(\eta_t)_{t \geq 1}$, that is $$\begin{aligned}
& y_{t+1} = x_t - \eta_t g_t , \ \text{where} \ g_t \in \partial f(x_t) \\
& x_{t+1} = \Pi_{\mathcal{X}}(y_{t+1}) .
\end{aligned}$$ The following result is extracted from [@LJSB12].
::: theorem
[]{#th:LJSB12 label="th:LJSB12"} Let $f$ be $\alpha$-strongly convex and
$L$-Lipschitz on $\mathcal{X}$. Then projected subgradient descent with
$\eta_s = \frac{2}{\alpha (s+1)}$ satisfies
$$f \left(\sum_{s=1}^t \frac{2 s}{t(t+1)} x_s \right) - f(x^*) \leq \frac{2 L^2}{\alpha (t+1)} .$$
:::
::: proof
*Proof.* Coming back to our original analysis of projected subgradient
descent in Section [3.1](#sec:psgd){reference-type="ref"
reference="sec:psgd"} and using the strong convexity assumption one
immediately obtains
$$f(x_s) - f(x^*) \leq \frac{\eta_s}{2} L^2 + \left( \frac{1}{2 \eta_s} - \frac{\alpha}{2} \right) \|x_s - x^*\|^2 - \frac{1}{2 \eta_s} \|x_{s+1} - x^*\|^2 .$$
Multiplying this inequality by $s$ yields
$$s( f(x_s) - f(x^*) ) \leq \frac{L^2}{\alpha} + \frac{\alpha}{4} \bigg( s(s-1) \|x_s - x^*\|^2 - s (s+1) \|x_{s+1} - x^*\|^2 \bigg) .$$
Now sum the resulting inequality over $s=1$ to $s=t$, and apply Jensen's
inequality to obtain the claimed statement. ◻
:::
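A minimal implementation sketch of this scheme (Python with NumPy; the robust-regression objective, its parameters, and the helper names are illustrative assumptions, not part of the text), with the step size $\eta_s = \frac{2}{\alpha(s+1)}$ and the weighted average from the theorem above:

```python
import numpy as np

def psgd_strongly_convex(subgrad, project, x1, alpha, T):
    """Projected subgradient descent with eta_s = 2/(alpha (s+1)); returns the
    weighted average sum_s 2s/(T(T+1)) x_s."""
    x = x1.copy()
    avg = np.zeros_like(x)
    for s in range(1, T + 1):
        avg += 2.0 * s / (T * (T + 1)) * x
        g = subgrad(x)
        x = project(x - 2.0 / (alpha * (s + 1)) * g)
    return avg

# illustrative example: f(x) = mean_i |a_i^T x - b_i| + (alpha/2) ||x||^2 on a Euclidean ball
rng = np.random.default_rng(0)
A, b, alpha, R = rng.standard_normal((200, 20)), rng.standard_normal(200), 0.1, 10.0
subgrad = lambda x: A.T @ np.sign(A @ x - b) / len(b) + alpha * x
project = lambda y: y * min(1.0, R / np.linalg.norm(y))
x_bar = psgd_strongly_convex(subgrad, project, np.zeros(20), alpha, T=2000)
```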
### Strongly convex and smooth functions
As we will see now, having both strong convexity and smoothness allows
for a drastic improvement in the convergence rate. We denote
$\kappa= \frac{\beta}{\alpha}$ for the *condition number* of $f$. The
key observation is that Lemma
[\[lem:smoothconst\]](#lem:smoothconst){reference-type="ref"
reference="lem:smoothconst"} can be improved to (with the notation of
the lemma): $$\label{eq:improvedstrongsmooth}
f(x^+) - f(y) \leq g_{\mathcal{X}}(x)^{\top}(x-y) - \frac{1}{2 \beta} \|g_{\mathcal{X}}(x)\|^2 - \frac{\alpha}{2} \|x-y\|^2 .$$
::: theorem
[]{#th:gdssc label="th:gdssc"} Let $f$ be $\alpha$-strongly convex and
$\beta$-smooth on $\mathcal{X}$. Then projected gradient descent with
$\eta = \frac{1}{\beta}$ satisfies for $t \geq 0$,
$$\|x_{t+1} - x^*\|^2 \leq \exp\left( - \frac{t}{\kappa} \right) \|x_1 - x^*\|^2 .$$
:::
::: proof
*Proof.* Using
[\[eq:improvedstrongsmooth\]](#eq:improvedstrongsmooth){reference-type="eqref"
reference="eq:improvedstrongsmooth"} with $y=x^*$ one directly obtains
$$\begin{aligned}
\|x_{t+1} - x^*\|^2& = & \|x_{t} - \frac{1}{\beta} g_{\mathcal{X}}(x_t) - x^*\|^2 \\
& = & \|x_{t} - x^*\|^2 - \frac{2}{\beta} g_{\mathcal{X}}(x_t)^{\top} (x_t - x^*) + \frac{1}{\beta^2} \|g_{\mathcal{X}}(x_t)\|^2 \\
& \leq & \left(1 - \frac{\alpha}{\beta} \right) \|x_{t} - x^*\|^2 \\
& \leq & \left(1 - \frac{\alpha}{\beta} \right)^t \|x_{1} - x^*\|^2 \\
& \leq & \exp\left( - \frac{t}{\kappa} \right) \|x_1 - x^*\|^2 ,
\end{aligned}$$ which concludes the proof. ◻
:::
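Here is a minimal sketch of projected gradient descent with the constant step size $1/\beta$ (Python with NumPy; the quadratic objective and the Euclidean-ball constraint are illustrative):

```python
import numpy as np

def projected_gd(grad, project, x1, beta, T):
    """Projected gradient descent with the constant step size 1/beta."""
    x = x1.copy()
    for _ in range(T):
        x = project(x - grad(x) / beta)
    return x

# quadratic example: f(x) = 0.5 (x - c)^T Q (x - c), constrained to a Euclidean ball of radius 5
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30)); Q = M.T @ M + np.eye(30)
c = rng.standard_normal(30)
beta = np.linalg.eigvalsh(Q)[-1]
project = lambda y: y * min(1.0, 5.0 / np.linalg.norm(y))
x = projected_gd(lambda x: Q @ (x - c), project, np.zeros(30), beta, T=500)
# the theorem predicts ||x_{T+1} - x*||^2 <= exp(-T/kappa) ||x_1 - x*||^2
```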
We now show that in the unconstrained case one can improve the rate by a
constant factor, precisely one can replace $\kappa$ by $(\kappa+1) / 4$
in the oracle complexity bound by using a larger step size. This is not
a spectacular gain but the reasoning is based on an improvement of
[\[eq:coercive1\]](#eq:coercive1){reference-type="eqref"
reference="eq:coercive1"} which can be of interest by itself. Note that
[\[eq:coercive1\]](#eq:coercive1){reference-type="eqref"
reference="eq:coercive1"} and the lemma to follow are sometimes referred
to as *coercivity* of the gradient.
::: lemma
[]{#lem:coercive2 label="lem:coercive2"} Let $f$ be $\beta$-smooth and
$\alpha$-strongly convex on $\mathbb{R}^n$. Then for all
$x, y \in \mathbb{R}^n$, one has
$$(\nabla f(x) - \nabla f(y))^{\top} (x - y) \geq \frac{\alpha \beta}{\beta + \alpha} \|x-y\|^2 + \frac{1}{\beta + \alpha} \|\nabla f(x) - \nabla f(y)\|^2 .$$
:::
::: proof
*Proof.* Let $\varphi(x) = f(x) - \frac{\alpha}{2} \|x\|^2$. By
definition of $\alpha$-strong convexity one has that $\varphi$ is
convex. Furthermore one can show that $\varphi$ is
$(\beta-\alpha)$-smooth by proving
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} (and using that it implies smoothness).
Thus using [\[eq:coercive1\]](#eq:coercive1){reference-type="eqref"
reference="eq:coercive1"} one gets
$$(\nabla \varphi(x) - \nabla \varphi(y))^{\top} (x - y) \geq \frac{1}{\beta - \alpha} \|\nabla \varphi(x) - \nabla \varphi(y)\|^2 ,$$
which gives the claimed result with straightforward computations. (Note
that if $\alpha = \beta$ the smoothness of $\varphi$ directly implies
that $\nabla f(x) - \nabla f(y) = \alpha (x-y)$ which proves the lemma
in this case.) ◻
:::
::: theorem
Let $f$ be $\beta$-smooth and $\alpha$-strongly convex on
$\mathbb{R}^n$. Then gradient descent with
$\eta = \frac{2}{\alpha + \beta}$ satisfies
$$f(x_{t+1}) - f(x^*) \leq \frac{\beta}{2} \exp\left( - \frac{4 t}{\kappa+1} \right) \|x_1 - x^*\|^2 .$$
:::
::: proof
*Proof.* First note that by $\beta$-smoothness (since
$\nabla f(x^*) = 0$) one has
$$f(x_t) - f(x^*) \leq \frac{\beta}{2} \|x_t - x^*\|^2 .$$ Now using
Lemma [\[lem:coercive2\]](#lem:coercive2){reference-type="ref"
reference="lem:coercive2"} one obtains $$\begin{aligned}
\|x_{t+1} - x^*\|^2& = & \|x_{t} - \eta \nabla f(x_{t}) - x^*\|^2 \\
& = & \|x_{t} - x^*\|^2 - 2 \eta \nabla f(x_{t})^{\top} (x_{t} - x^*) + \eta^2 \|\nabla f(x_{t})\|^2 \\
& \leq & \left(1 - 2 \frac{\eta \alpha \beta}{\beta + \alpha}\right)\|x_{t} - x^*\|^2 + \left(\eta^2 - 2 \frac{\eta}{\beta + \alpha}\right) \|\nabla f(x_{t})\|^2 \\
& = & \left(\frac{\kappa - 1}{\kappa+1}\right)^2 \|x_{t} - x^*\|^2 \\
& \leq & \exp\left( - \frac{4 t}{\kappa+1} \right) \|x_1 - x^*\|^2 ,
\end{aligned}$$ which concludes the proof. ◻
:::
## Lower bounds {#sec:chap3LB}
We prove here various oracle complexity lower bounds. These results
first appeared in [@NY83] but we follow here the simplified presentation
of [@Nes04]. In general a black-box procedure is a mapping from
"history\" to the next query point, that is it maps
$(x_1, g_1, \hdots, x_t, g_t)$ (with $g_s \in \partial f (x_s)$) to
$x_{t+1}$. In order to simplify the notation and the argument,
throughout the section we make the following assumption on the black-box
procedure: $x_1=0$ and for any $t \geq 0$, $x_{t+1}$ is in the linear
span of $g_1, \hdots, g_t$, that is $$\label{eq:ass1}
x_{t+1} \in \mathrm{Span}(g_1, \hdots, g_t) .$$ Let $e_1, \hdots, e_n$
be the canonical basis of $\mathbb{R}^n$, and
$\mathrm{B}_2(R) = \{x \in \mathbb{R}^n : \|x\| \leq R\}$. We start with
a theorem for the two non-smooth cases (convex and strongly convex).
::: theorem
[]{#th:lb1 label="th:lb1"} Let $t \leq n$, $L, R >0$. There exists a
convex and $L$-Lipschitz function $f$ such that for any black-box
procedure satisfying [\[eq:ass1\]](#eq:ass1){reference-type="eqref"
reference="eq:ass1"},
$$\min_{1 \leq s \leq t} f(x_s) - \min_{x \in \mathrm{B}_2(R)} f(x) \geq \frac{R L}{2 (1 + \sqrt{t})} .$$
There also exists an $\alpha$-strongly convex and $L$-Lipschitz function
$f$ such that for any black-box procedure satisfying
[\[eq:ass1\]](#eq:ass1){reference-type="eqref" reference="eq:ass1"},
$$\min_{1 \leq s \leq t} f(x_s) - \min_{x \in \mathrm{B}_2\left(\frac{L}{2 \alpha}\right)} f(x) \geq \frac{L^2}{8 \alpha t} .$$
:::
Note that the above result is restricted to a number of iterations
smaller than the dimension, that is $t \leq n$. This restriction is of
course necessary to obtain lower bounds polynomial in $1/t$: as we saw
in Chapter [2](#finitedim){reference-type="ref" reference="finitedim"}
one can always obtain an exponential rate of convergence when the number
of calls to the oracle is larger than the dimension.
::: proof
*Proof.* We consider the following $\alpha$-strongly convex function:
$$f(x) = \gamma \max_{1 \leq i \leq t} x(i) + \frac{\alpha}{2} \|x\|^2 .$$
It is easy to see that
$$\partial f(x) = \alpha x + \gamma \mathrm{conv}\left(e_i , i : x(i) = \max_{1 \leq j \leq t} x(j) \right).$$
In particular if $\|x\| \leq R$ then for any $g \in \partial f(x)$ one
has $\|g\| \leq \alpha R + \gamma$. In other words $f$ is
$(\alpha R + \gamma)$-Lipschitz on $\mathrm{B}_2(R)$.
Next we describe the first order oracle for this function: when asked
for a subgradient at $x$, it returns $\alpha x + \gamma e_{i}$ where $i$
is the *first* coordinate that satisfies
$x(i) = \max_{1 \leq j \leq t} x(j)$. In particular when asked for a
subgradient at $x_1=0$ it returns $e_1$. Thus $x_2$ must lie on the line
generated by $e_1$. It is easy to see by induction that in fact $x_s$
must lie in the linear span of $e_1, \hdots, e_{s-1}$. In particular for
$s \leq t$ we necessarily have $x_s(t) = 0$ and thus $f(x_s) \geq 0$.
It remains to compute the minimal value of $f$. Let $y$ be such that
$y(i) = - \frac{\gamma}{\alpha t}$ for $1 \leq i \leq t$ and $y(i) = 0$
for $t+1 \leq i \leq n$. It is clear that $0 \in \partial f(y)$ and thus
the minimal value of $f$ is
$$f(y) = - \frac{\gamma^2}{\alpha t} + \frac{\alpha}{2} \frac{\gamma^2}{\alpha^2 t} = - \frac{\gamma^2}{2 \alpha t} .$$
Wrapping up, we proved that for any $s \leq t$ one must have
$$f(x_s) - f(x^*) \geq \frac{\gamma^2}{2 \alpha t} .$$ Taking
$\gamma = L/2$ and $R= \frac{L}{2 \alpha}$ we proved the lower bound for
$\alpha$-strongly convex functions (note in particular that
$\|y\|^2 = \frac{\gamma^2}{\alpha^2 t} = \frac{L^2}{4 \alpha^2 t} \leq R^2$
with these parameters). On the other hand, taking
$\alpha = \frac{L}{R} \frac{1}{1 + \sqrt{t}}$ and
$\gamma = L \frac{\sqrt{t}}{1 + \sqrt{t}}$ concludes the proof for
convex functions (note in particular that
$\|y\|^2 = \frac{\gamma^2}{\alpha^2 t} = R^2$ with these parameters). ◻
:::
We proceed now to the smooth case. As we will see in the following
proofs we restrict our attention to quadratic functions, and it might be
useful to recall that in this case one can attain the exact optimum in
$n$ calls to the oracle (see Section [2.4](#sec:CG){reference-type="ref"
reference="sec:CG"}). We also recall that for a twice differentiable
function $f$, $\beta$-smoothness is equivalent to the largest eigenvalue
of the Hessians of $f$ being smaller than $\beta$ at any point, which we
write $$\nabla^2 f(x) \preceq \beta \mathrm{I}_n , \forall x .$$
Furthermore $\alpha$-strong convexity is equivalent to
$$\nabla^2 f(x) \succeq \alpha \mathrm{I}_n , \forall x .$$
::: theorem
[]{#th:lb2 label="th:lb2"} Let $t \leq (n-1)/2$, $\beta >0$. There
exists a $\beta$-smooth convex function $f$ such that for any black-box
procedure satisfying [\[eq:ass1\]](#eq:ass1){reference-type="eqref"
reference="eq:ass1"},
$$\min_{1 \leq s \leq t} f(x_s) - f(x^*) \geq \frac{3 \beta}{32} \frac{\|x_1 - x^*\|^2}{(t+1)^2} .$$
:::
::: proof
*Proof.* In this proof for $h: \mathbb{R}^n \rightarrow \mathbb{R}$ we
denote $h^* = \inf_{x \in \mathbb{R}^n} h(x)$. For $k \leq n$ let
$A_k \in \mathbb{R}^{n \times n}$ be the symmetric and tridiagonal
matrix defined by $$(A_k)_{i,j} = \left\{\begin{array}{ll}
2, & i = j, i \leq k \\
-1, & j \in \{i-1, i+1\}, i \leq k, j \neq k+1\\
0, & \text{otherwise}.
\end{array}\right.$$ It is easy to verify that
$0 \preceq A_k \preceq 4 \mathrm{I}_n$ since
$$x^{\top} A_k x = 2 \sum_{i=1}^k x(i)^2 - 2 \sum_{i=1}^{k-1} x(i) x(i+1) = x(1)^2 + x(k)^2 + \sum_{i=1}^{k-1} (x(i) - x(i+1))^2 .$$
We consider now the following $\beta$-smooth convex function:
$$f(x) = \frac{\beta}{8} x^{\top} A_{2 t + 1} x - \frac{\beta}{4} x^{\top} e_1 .$$
Similarly to what happened in the proof of Theorem
[\[th:lb1\]](#th:lb1){reference-type="ref" reference="th:lb1"}, one can
see here too that $x_s$ must lie in the linear span of
$e_1, \hdots, e_{s-1}$ (because of our assumption on the black-box
procedure). In particular for $s \leq t$ we necessarily have
$x_s(i) = 0$ for $i=s, \hdots, n$, which implies
$x_s^{\top} A_{2 t+1} x_s = x_s^{\top} A_{s} x_s$. In other words, if we
denote
$$f_k(x) = \frac{\beta}{8} x^{\top} A_{k} x - \frac{\beta}{4} x^{\top} e_1 ,$$
then we just proved that
$$f(x_s) - f^* = f_s(x_s) - f_{2t+1}^* \geq f_{s}^* - f_{2 t + 1}^* \geq f_{t}^* - f_{2 t + 1}^* .$$
Thus it simply remains to compute the minimizer $x^*_k$ of $f_k$, its
norm, and the corresponding function value $f_k^*$.
The point $x^*_k$ is the unique solution in the span of
$e_1, \hdots, e_k$ of $A_k x = e_1$. It is easy to verify that it is
defined by $x^*_k(i) = 1 - \frac{i}{k+1}$ for $i=1, \hdots, k$. Thus we
immediately have:
$$f^*_k = \frac{\beta}{8} (x^*_k)^{\top} A_{k} x^*_k - \frac{\beta}{4} (x^*_k)^{\top} e_1 = - \frac{\beta}{8} (x^*_k)^{\top} e_1 = - \frac{\beta}{8} \left(1 - \frac{1}{k+1}\right) .$$
Furthermore note that
$$\|x^*_k\|^2 = \sum_{i=1}^k \left(1 - \frac{i}{k+1}\right)^2 = \sum_{i=1}^k \left( \frac{i}{k+1}\right)^2 \leq \frac{k+1}{3} .$$
Thus one obtains:
$$f_{t}^* - f_{2 t+1}^* = \frac{\beta}{8} \left(\frac{1}{t+1} - \frac{1}{2 t + 2} \right) \geq \frac{3 \beta}{32} \frac{\|x^*_{2 t + 1}\|^2}{(t+1)^2},$$
which concludes the proof. ◻
:::
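The construction above is easy to check numerically; the following sketch (Python with NumPy, 0-based indexing) builds $A_k$, verifies $0 \preceq A_k \preceq 4 \mathrm{I}_n$, and confirms that the stated $x^*_k$ solves $A_k x = e_1$ with the claimed value $f_k^*$:

```python
import numpy as np

def A(k, n):
    """The symmetric tridiagonal matrix A_k from the lower bound construction."""
    M = np.zeros((n, n))
    for i in range(k):                       # 0-based coordinates 0..k-1 correspond to 1..k
        M[i, i] = 2.0
        if i + 1 < k:
            M[i, i + 1] = M[i + 1, i] = -1.0
    return M

n, k, beta = 20, 7, 1.0
Ak = A(k, n)
e1 = np.zeros(n); e1[0] = 1.0
x_star = np.array([1 - (i + 1) / (k + 1) if i < k else 0.0 for i in range(n)])
eigs = np.linalg.eigvalsh(Ak)
print(eigs.min() >= -1e-12 and eigs.max() <= 4 + 1e-12)     # 0 <= A_k <= 4 I_n
print(np.allclose(Ak @ x_star, e1))                          # x*_k solves A_k x = e_1
f_star = beta / 8 * x_star @ Ak @ x_star - beta / 4 * x_star @ e1
print(np.isclose(f_star, -beta / 8 * (1 - 1 / (k + 1))))     # matches the closed form for f_k^*
```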
To simplify the proof of the next theorem we will consider the limiting
situation $n \to +\infty$. More precisely we assume now that we are
working in
$\ell_2 = \{ x = (x(n))_{n \in \mathbb{N}} : \sum_{i=1}^{+\infty} x(i)^2 < + \infty\}$
rather than in $\mathbb{R}^n$. Note that all the theorems we proved in
this chapter are in fact valid in an arbitrary Hilbert space
$\mathcal{H}$. We chose to work in $\mathbb{R}^n$ only for clarity of
the exposition.
::: theorem
[]{#th:lb3 label="th:lb3"} Let $\kappa > 1$. There exists a
$\beta$-smooth and $\alpha$-strongly convex function
$f: \ell_2 \rightarrow \mathbb{R}$ with $\kappa = \beta / \alpha$ such
that for any $t \geq 1$ and any black-box procedure satisfying
[\[eq:ass1\]](#eq:ass1){reference-type="eqref" reference="eq:ass1"} one
has
$$f(x_t) - f(x^*) \geq \frac{\alpha}{2} \left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa}+1}\right)^{2 (t-1)} \|x_1 - x^*\|^2 .$$
:::
Note that for large values of the condition number $\kappa$ one has
$$\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa}+1}\right)^{2 (t-1)} \approx \exp\left(- \frac{4 (t-1)}{\sqrt{\kappa}} \right) .$$
::: proof
*Proof.* The overall argument is similar to the proof of Theorem
[\[th:lb2\]](#th:lb2){reference-type="ref" reference="th:lb2"}. Let
$A : \ell_2 \rightarrow \ell_2$ be the linear operator that corresponds
to the infinite tridiagonal matrix with $2$ on the diagonal and $-1$ on
the upper and lower diagonals. We consider now the following function:
$$f(x) = \frac{\alpha (\kappa-1)}{8} \left(\langle Ax, x\rangle - 2 \langle e_1, x \rangle \right) + \frac{\alpha}{2} \|x\|^2 .$$
We already proved that $0 \preceq A \preceq 4 \mathrm{I}$ which easily
implies that $f$ is $\alpha$-strongly convex and $\beta$-smooth. Now as
always the key observation is that for this function, thanks to our
assumption on the black-box procedure, one necessarily has
$x_t(i) = 0, \forall i \geq t$. This implies in particular:
$$\|x_t - x^*\|^2 \geq \sum_{i=t}^{+\infty} x^*(i)^2 .$$ Furthermore
since $f$ is $\alpha$-strongly convex, one has
$$f(x_t) - f(x^*) \geq \frac{\alpha}{2} \|x_t - x^*\|^2 .$$ Thus it only
remains to compute $x^*$. This can be done by differentiating $f$ and
setting the gradient to $0$, which gives the following infinite set of
equations $$\begin{aligned}
& 1 - 2 \frac{\kappa+1}{\kappa-1} x^*(1) + x^*(2) = 0 , \\
& x^*(k-1) - 2 \frac{\kappa+1}{\kappa-1} x^*(k) + x^*(k+1) = 0, \forall k \geq 2 .
\end{aligned}$$ It is easy to verify that $x^*$ defined by
$x^*(i) = \left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^i$
satisfies this infinite set of equations, and the conclusion of the
theorem then follows by straightforward computations. ◻
:::
## Geometric descent {#sec:GeoD}
So far our results leave a gap in the case of smooth optimization:
gradient descent achieves an oracle complexity of $O(1/\varepsilon)$
(respectively $O(\kappa \log(1/\varepsilon))$ in the strongly convex
case) while we proved a lower bound of $\Omega(1/\sqrt{\varepsilon})$
(respectively $\Omega(\sqrt{\kappa} \log(1/\varepsilon))$). In this
section we close these gaps with the geometric descent method which was
recently introduced in [@BLS15]. Historically the first method with
optimal oracle complexity was proposed in [@NY83]. This method, inspired
by the conjugate gradient (see Section
[2.4](#sec:CG){reference-type="ref" reference="sec:CG"}), assumes an
oracle to compute *plane searches*. In [@Nem82] this assumption was
relaxed to a line search oracle (the geometric descent method also
requires a line search oracle). Finally in [@Nes83] an optimal method
requiring only a first order oracle was introduced. The latter
algorithm, called Nesterov's accelerated gradient descent, has been the
most influential optimal method for smooth optimization up to this day.
We describe and analyze this method in Section
[3.7](#sec:AGD){reference-type="ref" reference="sec:AGD"}. As we shall
see the intuition behind Nesterov's accelerated gradient descent (both
for the derivation of the algorithm and its analysis) is not quite
transparent, which motivates the present section as geometric descent
has a simple geometric interpretation loosely inspired from the
ellipsoid method (see Section [2.2](#sec:ellipsoid){reference-type="ref"
reference="sec:ellipsoid"}).
We focus here on the unconstrained optimization of a smooth and strongly
convex function, and we prove that geometric descent achieves the oracle
complexity of $O(\sqrt{\kappa} \log(1/\varepsilon))$, thus reducing the
complexity of the basic gradient descent by a factor $\sqrt{\kappa}$. We
note that this improvement is quite relevant for machine learning
applications. Consider for example the logistic regression problem
described in Section [1.1](#sec:mlapps){reference-type="ref"
reference="sec:mlapps"}: this is a smooth and strongly convex problem,
with a smoothness of order of a numerical constant, but with strong
convexity equal to the regularization parameter whose inverse can be as
large as the sample size. Thus in this case $\kappa$ can be of order of
the sample size, and a faster rate by a factor of $\sqrt{\kappa}$ is
quite significant. We also observe that this improved rate for smooth
and strongly convex objectives also implies an almost optimal rate of
$O(\log(1/\varepsilon) / \sqrt{\varepsilon})$ for the smooth case, as
one can simply run geometric descent on the function
$x \mapsto f(x) + \varepsilon\|x\|^2$.
In Section [3.6.1](#sec:warmup){reference-type="ref"
reference="sec:warmup"} we describe the basic idea of geometric descent,
and we show how to obtain effortlessly a geometric method with an oracle
complexity of $O(\kappa \log(1/\varepsilon))$ (i.e., similar to gradient
descent). Then we explain why one should expect to be able to accelerate
this method in Section [3.6.2](#sec:accafterwarmup){reference-type="ref"
reference="sec:accafterwarmup"}. The geometric descent method is
described precisely and analyzed in Section
[3.6.3](#sec:GeoDmethod){reference-type="ref"
reference="sec:GeoDmethod"}.
### Warm-up: a geometric alternative to gradient descent {#sec:warmup}
We start with some notation. Let
$\mathrm{B}(x,r^2) := \{y \in \mathbb{R}^n : \|y-x\|^2 \leq r^2 \}$
(note that the second argument is the radius squared), and
$$x^+ = x - \frac{1}{\beta} \nabla f(x), \ \text{and} \ x^{++} = x - \frac{1}{\alpha} \nabla f(x) .$$
Rewriting the definition of strong convexity
[\[eq:defstrongconv\]](#eq:defstrongconv){reference-type="eqref"
reference="eq:defstrongconv"} as $$\begin{aligned}
& f(y) \geq f(x) + \nabla f(x)^{\top} (y-x) + \frac{\alpha}{2} \|y-x\|^2 \\
& \Leftrightarrow \ \frac{\alpha}{2} \|y - x + \frac{1}{\alpha} \nabla f(x) \|^2 \leq \frac{\|\nabla f(x)\|^2}{2 \alpha} - (f(x) - f(y)),
\end{aligned}$$ one obtains an enclosing ball for the minimizer of $f$
with the $0^{th}$ and $1^{st}$ order information at $x$:
$$x^* \in \mathrm{B}\left(x^{++}, \frac{\|\nabla f(x)\|^2}{\alpha^2} - \frac{2}{\alpha} (f(x) - f(x^*)) \right) .$$
Furthermore recall that by smoothness (see
[\[eq:onestepofgd\]](#eq:onestepofgd){reference-type="eqref"
reference="eq:onestepofgd"}) one has
$f(x^+) \leq f(x) - \frac{1}{2 \beta} \|\nabla f(x)\|^2$ which allows one to
*shrink* the above ball by a factor of $1-\frac{1}{\kappa}$ and obtain
the following: $$\label{eq:ball2}
x^* \in \mathrm{B}\left(x^{++}, \frac{\|\nabla f(x)\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right) - \frac{2}{\alpha} (f(x^+) - f(x^*)) \right)$$
This suggests a natural strategy: assuming that one has an enclosing
ball $A:=\mathrm{B}(x,R^2)$ for $x^*$ (obtained from previous steps of
the strategy), one can then enclose $x^*$ in a ball $B$ containing the
intersection of $\mathrm{B}(x,R^2)$ and the ball
$\mathrm{B}\left(x^{++}, \frac{\|\nabla f(x)\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right)\right)$
obtained by [\[eq:ball2\]](#eq:ball2){reference-type="eqref"
reference="eq:ball2"}. Provided that the radius of $B$ is a fraction of
the radius of $A$, one can then iterate the procedure by replacing $A$
by $B$, leading to a linear convergence rate. Evaluating the rate at
which the radius shrinks is an elementary calculation: for any
$g \in \mathbb{R}^n$, $\varepsilon\in (0,1)$, there exists
$x \in \mathbb{R}^n$ such that
$$\mathrm{B}(0,1) \cap \mathrm{B}(g, \|g\|^2 (1- \varepsilon)) \subset \mathrm{B}(x, 1-\varepsilon) . \quad \quad \text{(Figure \ref{fig:one_ball})}$$
Thus we see that in the strategy described above, the radius squared of
the enclosing ball for $x^*$ shrinks by a factor $1 - \frac{1}{\kappa}$
at each iteration, thus matching the rate of convergence of gradient
descent (see Theorem [\[th:gdssc\]](#th:gdssc){reference-type="ref"
reference="th:gdssc"}).
### Acceleration {#sec:accafterwarmup}
In the argument from the previous section we missed the following
opportunity: observe that the ball $A=\mathrm{B}(x,R^2)$ was obtained by
intersections of previous balls of the form given by
[\[eq:ball2\]](#eq:ball2){reference-type="eqref" reference="eq:ball2"},
and thus the new value $f(x)$ could be used to reduce the radius of
those previous balls too (an important caveat is that the value $f(x)$
should be smaller than the values used to build those previous balls).
Potentially this could show that the optimum is in fact contained in the
ball
$\mathrm{B}\left(x, R^2 - \frac{1}{\kappa} \|\nabla f(x)\|^2\right)$. By
taking the intersection with the ball
$\mathrm{B}\left(x^{++}, \frac{\|\nabla f(x)\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right)\right)$
this would allow one to obtain a new ball whose squared radius is shrunk by a factor
$1- \frac{1}{\sqrt{\kappa}}$ (instead of $1 - \frac{1}{\kappa}$): indeed
for any $g \in \mathbb{R}^n$, $\varepsilon\in (0,1)$, there exists
$x \in \mathbb{R}^n$ such that
$$\mathrm{B}(0,1 - \varepsilon\|g\|^2) \cap \mathrm{B}(g, \|g\|^2 (1- \varepsilon)) \subset \mathrm{B}(x, 1-\sqrt{\varepsilon}) . \quad \quad \text{(Figure \ref{fig:two_ball})}$$
Thus it only remains to deal with the caveat noted above, which we do
via a line search. In turn, this line search might shift the new ball
[\[eq:ball2\]](#eq:ball2){reference-type="eqref" reference="eq:ball2"},
and to deal with this we shall need the following strengthening of the
above set inclusion (we refer to [@BLS15] for a simple proof of this
result):
::: lemma
[]{#lem:geom label="lem:geom"} Let $a \in \mathbb{R}^n$ and
$\varepsilon\in (0,1), g \in \mathbb{R}_+$. Assume that $\|a\| \geq g$.
Then there exists $c \in \mathbb{R}^n$ such that for any
$\delta \geq 0$,
$$\mathrm{B}(0,1 - \varepsilon g^2 - \delta) \cap \mathrm{B}(a, g^2(1-\varepsilon) - \delta) \subset \mathrm{B}\left(c, 1 - \sqrt{\varepsilon} - \delta \right) .$$
:::
### The geometric descent method {#sec:GeoDmethod}
Let $x_0 \in \mathbb{R}^n$, $c_0 = x_0^{++}$, and
$R_0^2 = \left(1 - \frac{1}{\kappa}\right)\frac{\|\nabla f(x_0)\|^2}{\alpha^2}$.
For any $t \geq 0$ let
$$x_{t+1} = \mathop{\mathrm{argmin}}_{x \in \left\{(1-\lambda) c_t + \lambda x_t^+, \ \lambda \in \mathbb{R}\right\}} f(x) ,$$
and $c_{t+1}$ (respectively $R^2_{t+1}$) be the center (respectively the
squared radius) of the ball given by (the proof of) Lemma
[\[lem:geom\]](#lem:geom){reference-type="ref" reference="lem:geom"}
which contains
$$\mathrm{B}\left(c_t, R_t^2 - \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2 \kappa}\right) \cap \mathrm{B}\left(x_{t+1}^{++}, \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right) \right).$$
Formulas for $c_{t+1}$ and $R^2_{t+1}$ are given at the end of this
section.
::: theorem
[]{#thm:main label="thm:main"} For any $t \geq 0$, one has
$x^* \in \mathrm{B}(c_t, R_t^2)$,
$R_{t+1}^2 \leq \left(1 - \frac{1}{\sqrt{\kappa}}\right) R_t^2$, and
thus
$$\|x^* - c_t\|^2 \leq \left(1 - \frac{1}{\sqrt{\kappa}}\right)^t R_0^2 .$$
:::
::: proof
*Proof.* We will prove a stronger claim by induction that for each
$t\geq 0$, one has
$$x^* \in \mathrm{B}\left(c_t, R_t^2 - \frac{2}{\alpha} \left(f(x_t^+) - f(x^*)\right)\right) .$$
The case $t=0$ follows immediately by
[\[eq:ball2\]](#eq:ball2){reference-type="eqref" reference="eq:ball2"}.
Let us assume that the above display is true for some $t \geq 0$. Then
using
$f(x_{t+1}^+) \leq f(x_{t+1}) - \frac{1}{2\beta} \|\nabla f(x_{t+1})\|^2 \leq f(x_t^+) - \frac{1}{2\beta} \|\nabla f(x_{t+1})\|^2 ,$
one gets
$$x^* \in \mathrm{B}\left(c_t, R_t^2 - \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2 \kappa} - \frac{2}{\alpha} \left(f(x_{t+1}^+) - f(x^*)\right) \right) .$$
Furthermore by [\[eq:ball2\]](#eq:ball2){reference-type="eqref"
reference="eq:ball2"} one also has
$$x^* \in \mathrm{B}\left(x_{t+1}^{++}, \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right) - \frac{2}{\alpha} \left(f(x_{t+1}^+) - f(x^*)\right) \right).$$
Thus it only remains to observe that the squared radius of the ball
given by Lemma [\[lem:geom\]](#lem:geom){reference-type="ref"
reference="lem:geom"} which encloses the intersection of the two above
balls is smaller than
$\left(1 - \frac{1}{\sqrt{\kappa}}\right) R_t^2 - \frac{2}{\alpha} (f(x_{t+1}^+) - f(x^*))$.
We apply Lemma [\[lem:geom\]](#lem:geom){reference-type="ref"
reference="lem:geom"} after moving $c_t$ to the origin and scaling
distances by $R_t$. We set $\varepsilon=\frac{1}{\kappa}$,
$g=\frac{\|\nabla f(x_{t+1})\|}{\alpha}$,
$\delta=\frac{2}{\alpha}\left(f(x_{t+1}^+)-f(x^*)\right)$ and
$a={x_{t+1}^{++}-c_t}$. The line search step of the algorithm implies
that $\nabla f(x_{t+1})^{\top} (x_{t+1} - c_t) = 0$ and therefore,
$\|a\|=\|x_{t+1}^{++} - c_t\| \geq \|\nabla f(x_{t+1})\|/\alpha=g$ and
Lemma [\[lem:geom\]](#lem:geom){reference-type="ref"
reference="lem:geom"} applies to give the result. ◻
:::
One can use the following formulas for $c_{t+1}$ and $R^2_{t+1}$ (they
are derived from the proof of Lemma
[\[lem:geom\]](#lem:geom){reference-type="ref" reference="lem:geom"}).
If $\|\nabla f(x_{t+1})\|^2 / \alpha^2 < R_t^2 / 2$ then one can take
$c_{t+1} = x_{t+1}^{++}$ and
$R_{t+1}^2 = \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2} \left(1 - \frac{1}{\kappa}\right)$.
On the other hand if $\|\nabla f(x_{t+1})\|^2 / \alpha^2 \geq R_t^2 / 2$
then one can take $$\begin{aligned}
c_{t+1} & = & c_t + \frac{R_t^2 + \|x_{t+1} - c_t\|^2}{2 \|x_{t+1}^{++} - c_t\|^2} (x_{t+1}^{++} - c_t) , \\
R_{t+1}^2 & = & R_t^2 - \frac{\|\nabla f(x_{t+1})\|^2}{\alpha^2 \kappa} - \left( \frac{R_t^2 + \|x_{t+1} - c_t\|^2}{2 \|x_{t+1}^{++} - c_t\|} \right)^2.
\end{aligned}$$
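Putting the pieces together, here is a sketch of the full method (Python with NumPy/SciPy; the one-dimensional line search is delegated to `scipy.optimize.minimize_scalar`, and the quadratic test problem is illustrative). The center and squared-radius updates follow the formulas above:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def geometric_descent(f, grad, x0, alpha, beta, T):
    """Sketch of geometric descent with an exact line search on the line through c_t and x_t^+."""
    kappa = beta / alpha
    g0 = grad(x0)
    c = x0 - g0 / alpha                                     # c_0 = x_0^{++}
    R2 = (1 - 1 / kappa) * (g0 @ g0) / alpha**2             # R_0^2
    x = x0
    for _ in range(T):
        xp = x - grad(x) / beta                             # x_t^+
        lam = minimize_scalar(lambda l: f((1 - l) * c + l * xp)).x
        x = (1 - lam) * c + lam * xp                        # x_{t+1} from the line search
        g = grad(x)
        xpp = x - g / alpha                                 # x_{t+1}^{++}
        g2 = g @ g
        if g2 / alpha**2 < R2 / 2:
            c, R2 = xpp, (g2 / alpha**2) * (1 - 1 / kappa)
        else:
            num = R2 + (x - c) @ (x - c)
            den2 = (xpp - c) @ (xpp - c)
            c, R2 = c + num / (2 * den2) * (xpp - c), R2 - g2 / (alpha**2 * kappa) - num**2 / (4 * den2)
    return c

# usage on a quadratic (illustrative)
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 40)); Q = M.T @ M + np.eye(40); b = rng.standard_normal(40)
eigs = np.linalg.eigvalsh(Q)
c = geometric_descent(lambda x: 0.5 * x @ Q @ x - b @ x, lambda x: Q @ x - b,
                      np.zeros(40), eigs[0], eigs[-1], T=200)
print(np.linalg.norm(c - np.linalg.solve(Q, b)))            # c_T approaches x*
```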
## Nesterov's accelerated gradient descent {#sec:AGD}
We describe here the original Nesterov's method which attains the
optimal oracle complexity for smooth convex optimization. We give the
details of the method both for the strongly convex and non-strongly
convex case. We refer to [@SBC14] for a recent interpretation of the
method in terms of differential equations, and to [@AO14] for its
relation to mirror descent (see Chapter
[4](#mirror){reference-type="ref" reference="mirror"}).
### The smooth and strongly convex case
Nesterov's accelerated gradient descent, illustrated in Figure
[\[fig:nesterovacc\]](#fig:nesterovacc){reference-type="ref"
reference="fig:nesterovacc"}, can be described as follows: Start at an
arbitrary initial point $x_1 = y_1$ and then iterate the following
equations for $t \geq 1$, $$\begin{aligned}
y_{t+1} & = & x_t - \frac{1}{\beta} \nabla f(x_t) , \\
x_{t+1} & = & \left(1 + \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \right) y_{t+1} - \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} y_t .
\end{aligned}$$
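In code the scheme is only a few lines; the following is a minimal sketch (Python with NumPy; the helper name is illustrative):

```python
import numpy as np

def agd_strongly_convex(grad, x1, alpha, beta, T):
    """Nesterov's accelerated gradient descent for an alpha-strongly convex and beta-smooth
    function, with the constant momentum (sqrt(kappa)-1)/(sqrt(kappa)+1)."""
    kappa = beta / alpha
    m = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x1.copy(), x1.copy()
    for _ in range(T):
        y_new = x - grad(x) / beta          # gradient step
        x = (1 + m) * y_new - m * y         # momentum step
        y = y_new
    return y
```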
::: theorem
Let $f$ be $\alpha$-strongly convex and $\beta$-smooth, then Nesterov's
accelerated gradient descent satisfies
$$f(y_t) - f(x^*) \leq \frac{\alpha + \beta}{2} \|x_1 - x^*\|^2 \exp\left(- \frac{t-1}{\sqrt{\kappa}} \right).$$
:::
::: proof
*Proof.* We define $\alpha$-strongly convex quadratic functions
$\Phi_s, s \geq 1$ by induction as follows: $$\begin{aligned}
& \Phi_1(x) = f(x_1) + \frac{\alpha}{2} \|x-x_1\|^2 , \notag \\
& \Phi_{s+1}(x) = \left(1 - \frac{1}{\sqrt{\kappa}}\right) \Phi_s(x) \notag \\
& \qquad + \frac{1}{\sqrt{\kappa}} \left(f(x_s) + \nabla f(x_s)^{\top} (x-x_s) + \frac{\alpha}{2} \|x-x_s\|^2 \right). \label{eq:AGD0}
\end{aligned}$$ Intuitively $\Phi_s$ becomes a finer and finer
approximation (from below) to $f$ in the following sense:
$$\label{eq:AGD1}
\Phi_{s+1}(x) \leq f(x) + \left(1 - \frac{1}{\sqrt{\kappa}}\right)^s (\Phi_1(x) - f(x)).$$
The above inequality can be proved immediately by induction, using the
fact that by $\alpha$-strong convexity one has
$$f(x_s) + \nabla f(x_s)^{\top} (x-x_s) + \frac{\alpha}{2} \|x-x_s\|^2 \leq f(x) .$$
Equation [\[eq:AGD1\]](#eq:AGD1){reference-type="eqref"
reference="eq:AGD1"} by itself does not say much, for it to be useful
one needs to understand how "far\" below $f$ is $\Phi_s$. The following
inequality answers this question: $$\label{eq:AGD2}
f(y_s) \leq \min_{x \in \mathbb{R}^n} \Phi_s(x) .$$ The rest of the
proof is devoted to showing that
[\[eq:AGD2\]](#eq:AGD2){reference-type="eqref" reference="eq:AGD2"}
holds true, but first let us see how to combine
[\[eq:AGD1\]](#eq:AGD1){reference-type="eqref" reference="eq:AGD1"} and
[\[eq:AGD2\]](#eq:AGD2){reference-type="eqref" reference="eq:AGD2"} to
obtain the rate given by the theorem (we use that by $\beta$-smoothness
one has $f(x) - f(x^*) \leq \frac{\beta}{2} \|x-x^*\|^2$):
$$\begin{aligned}
f(y_t) - f(x^*) & \leq & \Phi_t(x^*) - f(x^*) \\
& \leq & \left(1 - \frac{1}{\sqrt{\kappa}}\right)^{t-1} (\Phi_1(x^*) - f(x^*)) \\
& \leq & \frac{\alpha + \beta}{2} \|x_1-x^*\|^2 \left(1 - \frac{1}{\sqrt{\kappa}}\right)^{t-1} .
\end{aligned}$$ We now prove
[\[eq:AGD2\]](#eq:AGD2){reference-type="eqref" reference="eq:AGD2"} by
induction (note that it is true at $s=1$ since $x_1=y_1$). Let
$\Phi_s^* = \min_{x \in \mathbb{R}^n} \Phi_s(x)$. Using the definition
of $y_{s+1}$ (and $\beta$-smoothness), convexity, and the induction
hypothesis, one gets $$\begin{aligned}
f(y_{s+1}) & \leq & f(x_s) - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 \\
& = & \left(1 - \frac{1}{\sqrt{\kappa}}\right) f(y_s) + \left(1 - \frac{1}{\sqrt{\kappa}}\right)(f(x_s) - f(y_s)) \\
& & + \frac{1}{\sqrt{\kappa}} f(x_s) - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 \\
& \leq & \left(1 - \frac{1}{\sqrt{\kappa}}\right) \Phi_s^* + \left(1 - \frac{1}{\sqrt{\kappa}}\right) \nabla f(x_s)^{\top} (x_s - y_s) \\
& & + \frac{1}{\sqrt{\kappa}} f(x_s) - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 .
\end{aligned}$$ Thus we now have to show that $$\begin{aligned}
\Phi_{s+1}^* & \geq & \left(1 - \frac{1}{\sqrt{\kappa}}\right) \Phi_s^* + \left(1 - \frac{1}{\sqrt{\kappa}}\right) \nabla f(x_s)^{\top} (x_s - y_s) \notag \\
& & + \frac{1}{\sqrt{\kappa}} f(x_s) - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 . \label{eq:AGD3}
\end{aligned}$$ To prove this inequality we have to understand better
the functions $\Phi_s$. First note that
$\nabla^2 \Phi_s(x) = \alpha \mathrm{I}_n$ (immediate by induction) and
thus $\Phi_s$ has to be of the following form:
$$\Phi_s(x) = \Phi_s^* + \frac{\alpha}{2} \|x - v_s\|^2 ,$$ for some
$v_s \in \mathbb{R}^n$. Now observe that by differentiating
[\[eq:AGD0\]](#eq:AGD0){reference-type="eqref" reference="eq:AGD0"} and
using the above form of $\Phi_s$ one obtains
$$\nabla \Phi_{s+1}(x) = \alpha \left(1 - \frac{1}{\sqrt{\kappa}}\right) (x-v_s) + \frac{1}{\sqrt{\kappa}} \nabla f(x_s) + \frac{\alpha}{\sqrt{\kappa}} (x-x_s) .$$
In particular $\Phi_{s+1}$ is by definition minimized at $v_{s+1}$ which
can now be defined by induction using the above identity, precisely:
$$\label{eq:AGD4}
v_{s+1} = \left(1 - \frac{1}{\sqrt{\kappa}}\right) v_s + \frac{1}{\sqrt{\kappa}} x_s - \frac{1}{\alpha \sqrt{\kappa}} \nabla f(x_s) .$$
Using the form of $\Phi_s$ and $\Phi_{s+1}$, as well as the original
definition [\[eq:AGD0\]](#eq:AGD0){reference-type="eqref"
reference="eq:AGD0"} one gets the following identity by evaluating
$\Phi_{s+1}$ at $x_s$: $$\begin{aligned}
& \Phi_{s+1}^* + \frac{\alpha}{2} \|x_s - v_{s+1}\|^2 \notag \\
& = \left(1 - \frac{1}{\sqrt{\kappa}}\right) \Phi_s^* + \frac{\alpha}{2} \left(1 - \frac{1}{\sqrt{\kappa}}\right) \|x_s - v_s\|^2 + \frac{1}{\sqrt{\kappa}} f(x_s) . \label{eq:AGD5}
\end{aligned}$$ Note that thanks to
[\[eq:AGD4\]](#eq:AGD4){reference-type="eqref" reference="eq:AGD4"} one
has $$\begin{aligned}
\|x_s - v_{s+1}\|^2 & = & \left(1 - \frac{1}{\sqrt{\kappa}}\right)^2 \|x_s - v_s\|^2 + \frac{1}{\alpha^2 \kappa} \|\nabla f(x_s)\|^2 \\
& & - \frac{2}{\alpha \sqrt{\kappa}} \left(1 - \frac{1}{\sqrt{\kappa}}\right) \nabla f(x_s)^{\top}(v_s-x_s) ,
\end{aligned}$$ which combined with
[\[eq:AGD5\]](#eq:AGD5){reference-type="eqref" reference="eq:AGD5"}
yields $$\begin{aligned}
\Phi_{s+1}^* & = & \left(1 - \frac{1}{\sqrt{\kappa}}\right) \Phi_s^* + \frac{1}{\sqrt{\kappa}} f(x_s) + \frac{\alpha}{2 \sqrt{\kappa}} \left(1 - \frac{1}{\sqrt{\kappa}}\right) \|x_s - v_s\|^2 \\
& & \qquad - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 + \frac{1}{\sqrt{\kappa}} \left(1 - \frac{1}{\sqrt{\kappa}}\right) \nabla f(x_s)^{\top}(v_s-x_s) .
\end{aligned}$$ Finally we show by induction that
$v_s - x_s = \sqrt{\kappa}(x_s - y_s)$, which concludes the proof of
[\[eq:AGD3\]](#eq:AGD3){reference-type="eqref" reference="eq:AGD3"} and
thus also concludes the proof of the theorem: $$\begin{aligned}
v_{s+1} - x_{s+1} & = & \left(1 - \frac{1}{\sqrt{\kappa}}\right) v_s + \frac{1}{\sqrt{\kappa}} x_s - \frac{1}{\alpha \sqrt{\kappa}} \nabla f(x_s) - x_{s+1} \\
& = & \sqrt{\kappa} x_s - (\sqrt{\kappa}-1) y_s - \frac{\sqrt{\kappa}}{\beta} \nabla f(x_s) - x_{s+1} \\
& = & \sqrt{\kappa} y_{s+1} - (\sqrt{\kappa}-1) y_s - x_{s+1} \\
& = & \sqrt{\kappa} (x_{s+1} - y_{s+1}) ,
\end{aligned}$$ where the first equality comes from
[\[eq:AGD4\]](#eq:AGD4){reference-type="eqref" reference="eq:AGD4"}, the
second from the induction hypothesis, the third from the definition of
$y_{s+1}$ and the last one from the definition of $x_{s+1}$. ◻
:::
### The smooth case
In this section we show how to adapt Nesterov's accelerated gradient
descent for the case $\alpha=0$, using a time-varying combination of the
elements in the primary sequence $(y_t)$. First we define the following
sequences:
$$\lambda_0 = 0, \ \lambda_{t} = \frac{1 + \sqrt{1+ 4 \lambda_{t-1}^2}}{2}, \ \text{and} \ \gamma_t = \frac{1 - \lambda_t}{\lambda_{t+1}}.$$
(Note that $\gamma_t \leq 0$.) Now the algorithm is simply defined by
the following equations, with $x_1 = y_1$ an arbitrary initial point,
$$\begin{aligned}
y_{t+1} & = & x_t - \frac{1}{\beta} \nabla f(x_t) , \\
x_{t+1} & = & (1 - \gamma_t) y_{t+1} + \gamma_t y_t .
\end{aligned}$$
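A minimal sketch of this variant (Python with NumPy; the helper name is illustrative) reads:

```python
import numpy as np

def agd_smooth(grad, x1, beta, T):
    """Nesterov's accelerated gradient descent for a convex, beta-smooth function,
    using the lambda_t / gamma_t sequences defined above."""
    x, y = x1.copy(), x1.copy()
    lam = 0.0                                            # lambda_0
    for _ in range(T):
        lam_t = (1 + np.sqrt(1 + 4 * lam**2)) / 2        # lambda_t
        lam_next = (1 + np.sqrt(1 + 4 * lam_t**2)) / 2   # lambda_{t+1}
        gamma = (1 - lam_t) / lam_next                   # gamma_t (<= 0)
        y_new = x - grad(x) / beta
        x = (1 - gamma) * y_new + gamma * y
        y = y_new
        lam = lam_t
    return y
```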
::: theorem
Let $f$ be a convex and $\beta$-smooth function, then Nesterov's
accelerated gradient descent satisfies
$$f(y_t) - f(x^*) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t^2} .$$
:::
We follow here the proof of [@BT09]. We also refer to [@Tse08] for a
proof with simpler step-sizes.
::: proof
*Proof.* Using the unconstrained version of Lemma
[\[lem:smoothconst\]](#lem:smoothconst){reference-type="ref"
reference="lem:smoothconst"} one obtains $$\begin{aligned}
& f(y_{s+1}) - f(y_s) \notag \\
& \leq \nabla f(x_s)^{\top} (x_s-y_s) - \frac{1}{2 \beta} \| \nabla f(x_s) \|^2 \notag \\
& = \beta (x_s - y_{s+1})^{\top} (x_s-y_s) - \frac{\beta}{2} \| x_s - y_{s+1} \|^2 . \label{eq:1}
\end{aligned}$$ Similarly we also get $$\label{eq:2}
f(y_{s+1}) - f(x^*) \leq \beta (x_s - y_{s+1})^{\top} (x_s-x^*) - \frac{\beta}{2} \| x_s - y_{s+1} \|^2 .$$
Now multiplying [\[eq:1\]](#eq:1){reference-type="eqref"
reference="eq:1"} by $(\lambda_{s}-1)$ and adding the result to
[\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"}, one obtains
with $\delta_s = f(y_s) - f(x^*)$, $$\begin{aligned}
& \lambda_{s} \delta_{s+1} - (\lambda_{s} - 1) \delta_s \\
& \leq \beta (x_s - y_{s+1})^{\top} (\lambda_{s} x_{s} - (\lambda_{s} - 1) y_s-x^*) - \frac{\beta}{2} \lambda_{s} \| x_s - y_{s+1} \|^2.
\end{aligned}$$ Multiplying this inequality by $\lambda_{s}$ and using
that by definition $\lambda_{s-1}^2 = \lambda_{s}^2 - \lambda_{s}$, as
well as the elementary identity
$2 a^{\top} b - \|a\|^2 = \|b\|^2 - \|b-a\|^2$, one obtains
$$\begin{aligned}
& \lambda_{s}^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \notag \\
& \leq \frac{\beta}{2} \bigg( 2 \lambda_{s} (x_s - y_{s+1})^{\top} (\lambda_{s} x_{s} - (\lambda_{s} - 1) y_s-x^*) - \|\lambda_{s}( y_{s+1} - x_s )\|^2\bigg) \notag \\
& = \frac{\beta}{2} \bigg(\| \lambda_{s} x_{s} - (\lambda_{s} - 1) y_{s}-x^* \|^2 - \| \lambda_{s} y_{s+1} - (\lambda_{s} - 1) y_{s}-x^* \|^2 \bigg). \label{eq:3}
\end{aligned}$$ Next remark that, by definition, one has
$$\begin{aligned}
& x_{s+1} = y_{s+1} + \gamma_s (y_s - y_{s+1}) \notag \\
& \Leftrightarrow \lambda_{s+1} x_{s+1} = \lambda_{s+1} y_{s+1} + (1-\lambda_{s})(y_s - y_{s+1}) \notag \\
& \Leftrightarrow \lambda_{s+1} x_{s+1} - (\lambda_{s+1} - 1) y_{s+1}= \lambda_{s} y_{s+1} - (\lambda_{s}-1) y_{s} . \label{eq:5}
\end{aligned}$$ Putting together
[\[eq:3\]](#eq:3){reference-type="eqref" reference="eq:3"} and
[\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"} one gets with
$u_s = \lambda_{s} x_{s} - (\lambda_{s} - 1) y_{s} - x^*$,
$$\lambda_{s}^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \leq \frac{\beta}{2} \bigg(\|u_s\|^2 - \|u_{s+1}\|^2 \bigg) .$$
Summing these inequalities from $s=1$ to $s=t-1$ one obtains:
$$\delta_t \leq \frac{\beta}{2 \lambda_{t-1}^2} \|u_1\|^2.$$ By
induction it is easy to see that $\lambda_{t-1} \geq \frac{t}{2}$ which
concludes the proof. ◻
:::
# Almost dimension-free convex optimization in non-Euclidean spaces {#mirror}
In the previous chapter we showed that dimension-free oracle complexity
is possible when the objective function $f$ and the constraint set
$\mathcal{X}$ are well-behaved in the Euclidean norm; e.g. if for all
points $x \in \mathcal{X}$ and all subgradients $g \in \partial f(x)$,
one has that $\|x\|_2$ and $\|g\|_2$ are independent of the ambient
dimension $n$. If this assumption is not met then the gradient descent
techniques of Chapter [3](#dimfree){reference-type="ref"
reference="dimfree"} may lose their dimension-free convergence rates.
For instance consider a differentiable convex function $f$ defined on
the Euclidean ball $\mathrm{B}_{2,n}$ and such that
$\|\nabla f(x)\|_{\infty} \leq 1, \forall x \in \mathrm{B}_{2,n}$. This
implies that $\|\nabla f(x)\|_{2} \leq \sqrt{n}$, and thus projected
gradient descent will converge to the minimum of $f$ on
$\mathrm{B}_{2,n}$ at a rate $\sqrt{n / t}$. In this chapter we describe
the method of [@NY83], known as mirror descent, which allows to find the
minimum of such functions $f$ over the $\ell_1$-ball (instead of the
Euclidean ball) at the much faster rate $\sqrt{\log(n) / t}$. This is
only one example of the potential of mirror descent. This chapter is
devoted to the description of mirror descent and some of its
alternatives. The presentation is inspired from [@BT03], \[Chapter 11,
[@CL06]\], [@Rak09; @Haz11; @Bub11].
In order to describe the intuition behind the method let us abstract the
situation for a moment and forget that we are doing optimization in
finite dimension. We already observed that projected gradient descent
works in an arbitrary Hilbert space $\mathcal{H}$. Suppose now that we
are interested in the more general situation of optimization in some
Banach space $\mathcal{B}$. In other words the norm that we use to
measure the various quantities of interest does not derive from an inner
product (think of $\mathcal{B} = \ell_1$ for example). In that case the
gradient descent strategy does not even make sense: indeed the gradients
(more formally the Fréchet derivative) $\nabla f(x)$ are elements of the
dual space $\mathcal{B}^*$ and thus one cannot perform the computation
$x - \eta \nabla f(x)$ (it simply does not make sense). We did not have
this problem for optimization in a Hilbert space $\mathcal{H}$ since by
Riesz representation theorem $\mathcal{H}^*$ is isometric to
$\mathcal{H}$. The great insight of Nemirovski and Yudin is that one can
still do a gradient descent by first mapping the point
$x \in \mathcal{B}$ into the dual space $\mathcal{B}^*$, then performing
the gradient update in the dual space, and finally mapping back the
resulting point to the primal space $\mathcal{B}$. Of course the new
point in the primal space might lie outside of the constraint set
$\mathcal{X} \subset \mathcal{B}$ and thus we need a way to project back
the point on the constraint set $\mathcal{X}$. Both the primal/dual
mapping and the projection are based on the concept of a *mirror map*
which is the key element of the scheme. Mirror maps are defined in
Section [4.1](#sec:mm){reference-type="ref" reference="sec:mm"}, and the
above scheme is formally described in Section
[4.2](#sec:MD){reference-type="ref" reference="sec:MD"}.
In the rest of this chapter we fix an arbitrary norm $\|\cdot\|$ on
$\mathbb{R}^n$, and a compact convex set
$\mathcal{X}\subset \mathbb{R}^n$. The dual norm $\|\cdot\|_*$ is
defined as
$\|g\|_* = \sup_{x \in \mathbb{R}^n : \|x\| \leq 1} g^{\top} x$. We say
that a convex function $f : \mathcal{X}\rightarrow \mathbb{R}$ is (i)
$L$-Lipschitz w.r.t. $\|\cdot\|$ if
$\forall x \in \mathcal{X}, g \in \partial f(x), \|g\|_* \leq L$, (ii)
$\beta$-smooth w.r.t. $\|\cdot\|$ if
$\|\nabla f(x) - \nabla f(y) \|_* \leq \beta \|x-y\|, \forall x, y \in \mathcal{X}$,
and (iii) $\alpha$-strongly convex w.r.t. $\|\cdot\|$ if
$$f(x) - f(y) \leq g^{\top} (x - y) - \frac{\alpha}{2} \|x - y \|^2 , \forall x, y \in \mathcal{X}, g \in \partial f(x).$$
We also define the Bregman divergence associated to $f$ as
$$D_{f}(x,y) = f(x) - f(y) - \nabla f(y)^{\top} (x - y) .$$ The
following identity will be useful several times: $$\label{eq:useful1}
(\nabla f(x) - \nabla f(y))^{\top}(x-z) = D_{f}(x,y) + D_{f}(z,x) - D_{f}(z,y) .$$
## Mirror maps {#sec:mm}
Let $\mathcal{D}\subset \mathbb{R}^n$ be a convex open set such that
$\mathcal{X}$ is included in its closure, that is
$\mathcal{X} \subset \overline{\mathcal{D}}$, and
$\mathcal{X} \cap \mathcal{D} \neq \emptyset$. We say that
$\Phi : \mathcal{D}\rightarrow \mathbb{R}$ is a mirror map if it
satisfies the following properties[^7]:
1. $\Phi$ is strictly convex and differentiable.
2. The gradient of $\Phi$ takes all possible values, that is
$\nabla \Phi(\mathcal{D}) = \mathbb{R}^n$.
3. The gradient of $\Phi$ diverges on the boundary of $\mathcal{D}$,
that is
$$\lim_{x \rightarrow \partial \mathcal{D}} \|\nabla \Phi(x)\| = + \infty .$$
In mirror descent the gradient of the mirror map $\Phi$ is used to map
points from the "primal\" to the "dual\" (note that all points lie in
$\mathbb{R}^n$ so the notions of primal and dual spaces only have an
intuitive meaning). Precisely a point
$x \in \mathcal{X} \cap \mathcal{D}$ is mapped to $\nabla \Phi(x)$, from
which one takes a gradient step to get to
$\nabla \Phi(x) - \eta \nabla f(x)$. Property (ii) then allows us to
write the resulting point as
$\nabla \Phi(y) = \nabla \Phi(x) - \eta \nabla f(x)$ for some
$y \in \mathcal{D}$. The primal point $y$ may lie outside of the set of
constraints $\mathcal{X}$, in which case one has to project back onto
$\mathcal{X}$. In mirror descent this projection is done via the Bregman
divergence associated to $\Phi$. Precisely one defines
$$\Pi_{\mathcal{X}}^{\Phi} (y) = \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y) .$$
Properties (i) and (iii) ensure the existence and uniqueness of this
projection (in particular since $x \mapsto D_{\Phi}(x,y)$ is locally
increasing on the boundary of $\mathcal{D}$). The following lemma shows
that the Bregman divergence essentially behaves as the Euclidean norm
squared in terms of projections (recall Lemma
[\[lem:todonow\]](#lem:todonow){reference-type="ref"
reference="lem:todonow"}).
::: lemma
[]{#lem:todonow2 label="lem:todonow2"} Let
$x \in \mathcal{X}\cap \mathcal{D}$ and $y \in \mathcal{D}$, then
$$(\nabla \Phi(\Pi_{\mathcal{X}}^{\Phi}(y)) - \nabla \Phi(y))^{\top} (\Pi^{\Phi}_{\mathcal{X}}(y) - x) \leq 0 ,$$
which also implies
$$D_{\Phi}(x, \Pi^{\Phi}_{\mathcal{X}}(y)) + D_{\Phi}(\Pi^{\Phi}_{\mathcal{X}}(y), y) \leq D_{\Phi}(x,y) .$$
:::
::: proof
*Proof.* The proof is an immediate corollary of Proposition
[\[prop:firstorder\]](#prop:firstorder){reference-type="ref"
reference="prop:firstorder"} together with the fact that
$\nabla_x D_{\Phi}(x,y) = \nabla \Phi(x) - \nabla \Phi(y)$. ◻
:::
## Mirror descent {#sec:MD}
We can now describe the mirror descent strategy based on a mirror map
$\Phi$. Let
$x_1 \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x)$.
Then for $t \geq 1$, let $y_{t+1} \in \mathcal{D}$ such that
$$\label{eq:MD1}
\nabla \Phi(y_{t+1}) = \nabla \Phi(x_{t}) - \eta g_t, \ \text{where} \ g_t \in \partial f(x_t) ,$$
and $$\label{eq:MD2}
x_{t+1} \in \Pi_{\mathcal{X}}^{\Phi} (y_{t+1}) .$$ See Figure
[\[fig:MD\]](#fig:MD){reference-type="ref" reference="fig:MD"} for an
illustration of this procedure.
::: theorem
[]{#th:MD label="th:MD"} Let $\Phi$ be a mirror map $\rho$-strongly
convex on $\mathcal{X} \cap \mathcal{D}$ w.r.t. $\|\cdot\|$. Let
$R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1)$,
and $f$ be convex and $L$-Lipschitz w.r.t. $\|\cdot\|$. Then mirror
descent with $\eta = \frac{R}{L} \sqrt{\frac{2 \rho}{t}}$ satisfies
$$f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - f(x^*) \leq RL \sqrt{\frac{2}{\rho t}} .$$
:::
::: proof
*Proof.* Let $x \in \mathcal{X} \cap \mathcal{D}$. The claimed bound
will be obtained by taking a limit $x \rightarrow x^*$. Now by convexity
of $f$, the definition of mirror descent, equation
[\[eq:useful1\]](#eq:useful1){reference-type="eqref"
reference="eq:useful1"}, and Lemma
[\[lem:todonow2\]](#lem:todonow2){reference-type="ref"
reference="lem:todonow2"}, one has $$\begin{aligned}
& f(x_s) - f(x) \\
& \leq g_s^{\top} (x_s - x) \\
& = \frac{1}{\eta} (\nabla \Phi(x_s) - \nabla \Phi(y_{s+1}))^{\top} (x_s - x) \\
& = \frac{1}{\eta} \bigg( D_{\Phi}(x, x_s) + D_{\Phi}(x_s, y_{s+1}) - D_{\Phi}(x, y_{s+1}) \bigg) \\
& \leq \frac{1}{\eta} \bigg( D_{\Phi}(x, x_s) + D_{\Phi}(x_s, y_{s+1}) - D_{\Phi}(x, x_{s+1}) - D_{\Phi}(x_{s+1}, y_{s+1}) \bigg) .
\end{aligned}$$ The term $D_{\Phi}(x, x_s) - D_{\Phi}(x, x_{s+1})$ will
lead to a telescopic sum when summing over $s=1$ to $s=t$, and it
remains to bound the other term as follows using $\rho$-strong convexity
of the mirror map and
$a z - b z^2 \leq \frac{a^2}{4 b}, \forall z \in \mathbb{R}$:
$$\begin{aligned}
& D_{\Phi}(x_s, y_{s+1}) - D_{\Phi}(x_{s+1}, y_{s+1}) \\
& = \Phi(x_s) - \Phi(x_{s+1}) - \nabla \Phi(y_{s+1})^{\top} (x_{s} - x_{s+1}) \\
& \leq (\nabla \Phi(x_s) - \nabla \Phi(y_{s+1}))^{\top} (x_{s} - x_{s+1}) - \frac{\rho}{2} \|x_s - x_{s+1}\|^2 \\
& = \eta g_s^{\top} (x_{s} - x_{s+1}) - \frac{\rho}{2} \|x_s - x_{s+1}\|^2 \\
& \leq \eta L \|x_{s} - x_{s+1}\| - \frac{\rho}{2} \|x_s - x_{s+1}\|^2 \\
& \leq \frac{(\eta L)^2}{2 \rho}.
\end{aligned}$$ We proved
$$\sum_{s=1}^t \bigg(f(x_s) - f(x)\bigg) \leq \frac{D_{\Phi}(x,x_1)}{\eta} + \eta \frac{L^2 t}{2 \rho},$$
which concludes the proof up to trivial computation. ◻
:::
We observe that one can rewrite mirror descent as follows:
$$\begin{aligned}
x_{t+1} & = & \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ D_{\Phi}(x,y_{t+1}) \notag \\
& = & \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ \Phi(x) - \nabla \Phi(y_{t+1})^{\top} x \label{eq:MD3} \\
& = & \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ \Phi(x) - (\nabla \Phi(x_{t}) - \eta g_t)^{\top} x \notag \\
& = & \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ \eta g_t^{\top} x + D_{\Phi}(x,x_t) . \label{eq:MDproxview}
\end{aligned}$$ This last expression is often taken as the definition of
mirror descent (see [@BT03]). It gives a proximal point of view on
mirror descent: the method is trying to minimize the local linearization
of the function while not moving too far away from the previous point,
with distances measured via the Bregman divergence of the mirror map.
## Standard setups for mirror descent {#sec:mdsetups}
**"Ball setup\".** The simplest version of mirror descent is obtained by
taking $\Phi(x) = \frac{1}{2} \|x\|^2_2$ on
$\mathcal{D} = \mathbb{R}^n$. The function $\Phi$ is a mirror map
strongly convex w.r.t. $\|\cdot\|_2$, and furthermore the associated
Bregman divergence is given by
$D_{\Phi}(x,y) = \frac{1}{2} \|x - y\|^2_2$. Thus in that case mirror
descent is exactly equivalent to projected subgradient descent, and the
rate of convergence obtained in Theorem
[\[th:MD\]](#th:MD){reference-type="ref" reference="th:MD"} recovers our
earlier result on projected subgradient descent.
**"Simplex setup\".** A more interesting choice of a mirror map is given
by the negative entropy $$\Phi(x) = \sum_{i=1}^n x(i) \log x(i),$$ on
$\mathcal{D} = \mathbb{R}_{++}^n$. In that case the gradient update
$\nabla \Phi(y_{t+1}) = \nabla \Phi(x_t) - \eta \nabla f(x_t)$ can be
written equivalently as
$$y_{t+1}(i) = x_{t}(i) \exp\big(- \eta [\nabla f(x_t) ](i) \big) , \ i=1, \hdots, n.$$
The Bregman divergence of this mirror map is given by
$D_{\Phi}(x,y) = \sum_{i=1}^n x(i) \log \frac{x(i)}{y(i)}$ (also known
as the Kullback-Leibler divergence). It is easy to verify that the
projection with respect to this Bregman divergence on the simplex
$\Delta_n = \{x \in \mathbb{R}_+^n : \sum_{i=1}^n x(i) = 1\}$ amounts to
a simple renormalization $y \mapsto y / \|y\|_1$. Furthermore it is also
easy to verify that $\Phi$ is $1$-strongly convex w.r.t. $\|\cdot\|_1$
on $\Delta_n$ (this result is known as Pinsker's inequality). Note also
that for $\mathcal{X} = \Delta_n$ one has $x_1 = (1/n, \hdots, 1/n)$ and
$R^2 = \log n$.
The above observations imply that when minimizing on the simplex
$\Delta_n$ a function $f$ with subgradients bounded in
$\ell_{\infty}$-norm, mirror descent with the negative entropy achieves
a rate of convergence of order $\sqrt{\frac{\log n}{t}}$. On the other
hand the regular subgradient descent achieves only a rate of order
$\sqrt{\frac{n}{t}}$ in this case!
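For illustration, the simplex setup can be implemented in a few lines. The sketch below assumes a user-supplied subgradient oracle with $\ell_{\infty}$-bounded output and simply alternates the multiplicative update with the $\ell_1$ renormalization.

```python
import numpy as np

def entropic_mirror_descent(subgrad, n, eta, T):
    """Sketch of mirror descent with the negative entropy on the simplex:
    multiplicative update y(i) = x(i) * exp(-eta * g(i)), then renormalize."""
    x = np.full(n, 1.0 / n)      # x_1 = (1/n, ..., 1/n)
    avg = np.zeros(n)
    for _ in range(T):
        avg += x / T
        g = subgrad(x)           # assumed bounded in the sup-norm
        y = x * np.exp(-eta * g)
        x = y / y.sum()          # Bregman (KL) projection onto the simplex
    return avg                   # averaged iterate, as in the theorem
```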
**"Spectrahedron setup\".** We consider here functions defined on
matrices, and we are interested in minimizing a function $f$ on the
*spectrahedron* $\mathcal{S}_n$ defined as:
$$\mathcal{S}_n = \left\{X \in \mathbb{S}_+^n : \mathrm{Tr}(X) = 1 \right\} .$$
In this setting we consider the mirror map on
$\mathcal{D} = \mathbb{S}_{++}^n$ given by the negative von Neumann
entropy: $$\Phi(X) = \sum_{i=1}^n \lambda_i(X) \log \lambda_i(X) ,$$
where $\lambda_1(X), \hdots, \lambda_n(X)$ are the eigenvalues of $X$.
It can be shown that the gradient update
$\nabla \Phi(Y_{t+1}) = \nabla \Phi(X_t) - \eta \nabla f(X_t)$ can be
written equivalently as
$$Y_{t+1} = \exp\big(\log X_t - \eta \nabla f(X_t) \big) ,$$ where the
matrix exponential and matrix logarithm are defined as usual.
Furthermore the projection on $\mathcal{S}_n$ is a simple trace
renormalization.
With highly non-trivial computation one can show that $\Phi$ is
$\frac{1}{2}$-strongly convex with respect to the Schatten $1$-norm
defined as $$\|X\|_1 = \sum_{i=1}^n |\lambda_i(X)|.$$ It is easy to see
that for $\mathcal{X} = \mathcal{S}_n$ one has
$X_1 = \frac{1}{n} \mathrm{I}_n$ and $R^2 = \log n$. In other words the
rate of convergence for optimization on the spectrahedron is the same as
on the simplex!
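A single step of the spectrahedron setup can be sketched as follows, using an eigendecomposition to apply the matrix logarithm and exponential; this is only meant as an illustration for moderate dimensions, and the gradient is assumed to be given as a symmetric matrix.

```python
import numpy as np

def spectrahedron_step(X, G, eta):
    """One mirror-descent step with the negative von Neumann entropy:
    Y = exp(log(X) - eta * G), followed by a trace renormalization.
    X is assumed positive definite and G symmetric (the gradient at X)."""
    w, V = np.linalg.eigh(X)
    log_X = (V * np.log(w)) @ V.T           # matrix logarithm of X
    mw, MV = np.linalg.eigh(log_X - eta * G)
    Y = (MV * np.exp(mw)) @ MV.T            # matrix exponential
    return Y / np.trace(Y)                  # projection onto the spectrahedron
```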
## Lazy mirror descent, aka Nesterov's dual averaging
In this section we consider a slightly more efficient version of mirror
descent for which we can prove that Theorem
[\[th:MD\]](#th:MD){reference-type="ref" reference="th:MD"} still holds
true. This alternative algorithm can be advantageous in some situations
(such as distributed settings), but the basic mirror descent scheme
remains important for extensions considered later in this text (saddle
points, stochastic oracles, \...).
In lazy mirror descent, also commonly known as Nesterov's dual averaging
or simply dual averaging, one replaces
[\[eq:MD1\]](#eq:MD1){reference-type="eqref" reference="eq:MD1"} by
$$\nabla \Phi(y_{t+1}) = \nabla \Phi(y_{t}) - \eta g_t ,$$ and also
$y_1$ is such that $\nabla \Phi(y_1) = 0$. In other words instead of
going back and forth between the primal and the dual, dual averaging
simply averages the gradients in the dual, and if asked for a point in
the primal it simply maps the current dual point to the primal using the
same methodology as mirror descent. In particular using
[\[eq:MD3\]](#eq:MD3){reference-type="eqref" reference="eq:MD3"} one
immediately sees that dual averaging is defined by: $$\label{eq:DA0}
x_t = \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ \eta \sum_{s=1}^{t-1} g_s^{\top} x + \Phi(x) .$$
::: theorem
Let $\Phi$ be a mirror map $\rho$-strongly convex on
$\mathcal{X} \cap \mathcal{D}$ w.r.t. $\|\cdot\|$. Let
$R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1)$,
and $f$ be convex and $L$-Lipschitz w.r.t. $\|\cdot\|$. Then dual
averaging with $\eta = \frac{R}{L} \sqrt{\frac{\rho}{2 t}}$ satisfies
$$f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - f(x^*) \leq 2 RL \sqrt{\frac{2}{\rho t}} .$$
:::
::: proof
*Proof.* We define
$\psi_t(x) = \eta \sum_{s=1}^{t} g_s^{\top} x + \Phi(x)$, so that
$x_t \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \psi_{t-1}(x)$.
Since $\Phi$ is $\rho$-strongly convex one clearly has that $\psi_t$ is
$\rho$-strongly convex, and thus $$\begin{aligned}
\psi_t(x_{t+1}) - \psi_t(x_t) & \leq & \nabla \psi_t(x_{t+1})^{\top}(x_{t+1} - x_{t}) - \frac{\rho}{2} \|x_{t+1} - x_t\|^2 \\
& \leq & - \frac{\rho}{2} \|x_{t+1} - x_t\|^2 ,
\end{aligned}$$ where the second inequality comes from the first order
optimality condition for $x_{t+1}$ (see Proposition
[\[prop:firstorder\]](#prop:firstorder){reference-type="ref"
reference="prop:firstorder"}). Next observe that $$\begin{aligned}
\psi_t(x_{t+1}) - \psi_t(x_t) & = & \psi_{t-1}(x_{t+1}) - \psi_{t-1}(x_t) + \eta g_t^{\top} (x_{t+1} - x_t) \\
& \geq & \eta g_t^{\top} (x_{t+1} - x_t) .
\end{aligned}$$ Putting together the two above displays and using
Cauchy-Schwarz (with the assumption $\|g_t\|_* \leq L$) one obtains
$$\frac{\rho}{2} \|x_{t+1} - x_t\|^2 \leq \eta g_t^{\top} (x_t - x_{t+1}) \leq \eta L \|x_t - x_{t+1} \|.$$
In particular this shows that
$\|x_{t+1} - x_t\| \leq \frac{2 \eta L}{\rho}$ and thus with the above
display $$\label{eq:DA1}
g_t^{\top} (x_t - x_{t+1}) \leq \frac{2 \eta L^2}{\rho} .$$ Now we claim
that for any $x \in \mathcal{X}\cap \mathcal{D}$, $$\label{eq:DA2}
\sum_{s=1}^t g_s^{\top} (x_s - x) \leq \sum_{s=1}^t g_s^{\top} (x_s - x_{s+1}) + \frac{\Phi(x) - \Phi(x_1)}{\eta} ,$$
which would clearly conclude the proof thanks to
[\[eq:DA1\]](#eq:DA1){reference-type="eqref" reference="eq:DA1"} and
straightforward computations. Equation
[\[eq:DA2\]](#eq:DA2){reference-type="eqref" reference="eq:DA2"} is
equivalent to
$$\sum_{s=1}^t g_s^{\top} x_{s+1} + \frac{\Phi(x_1)}{\eta} \leq \sum_{s=1}^t g_s^{\top} x + \frac{\Phi(x)}{\eta} ,$$
and we now prove the latter equation by induction. At $t=0$ it is true
since
$x_1 \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X}\cap \mathcal{D}} \Phi(x)$.
The following inequalities prove the inductive step, where we use the
induction hypothesis at $x=x_{t+1}$ for the first inequality, and the
definition of $x_{t+1}$ for the second inequality:
$$\sum_{s=1}^{t} g_s^{\top} x_{s+1} + \frac{\Phi(x_1)}{\eta} \leq g_{t}^{\top}x_{t+1} + \sum_{s=1}^{t-1} g_s^{\top} x_{t+1} + \frac{\Phi(x_{t+1})}{\eta} \leq \sum_{s=1}^{t} g_s^{\top} x + \frac{\Phi(x)}{\eta} .$$ ◻
:::
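As an illustration of [\[eq:DA0\]](#eq:DA0){reference-type="eqref" reference="eq:DA0"}, in the simplex setup described above the argmin has a closed form: the minimizer of $c^{\top} x + \sum_i x(i) \log x(i)$ over $\Delta_n$ is proportional to $e^{-c}$, so dual averaging reduces to a softmax of the accumulated gradients. A minimal sketch, with an assumed subgradient oracle, follows.

```python
import numpy as np

def dual_averaging_simplex(subgrad, n, eta, T):
    """Sketch of dual averaging (lazy mirror descent) on the simplex:
    accumulate gradients in the dual, map to the primal via a softmax."""
    G = np.zeros(n)              # running sum of subgradients
    avg = np.zeros(n)
    for _ in range(T):
        z = -eta * G
        z = np.exp(z - z.max())  # numerically stable softmax
        x = z / z.sum()
        avg += x / T
        G += subgrad(x)
    return avg
```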
## Mirror prox
It can be shown that mirror descent accelerates for smooth functions to
the rate $1/t$. We will prove this result in Chapter
[6](#rand){reference-type="ref" reference="rand"} (see Theorem
[\[th:SMDsmooth\]](#th:SMDsmooth){reference-type="ref"
reference="th:SMDsmooth"}). We describe here a variant of mirror descent
which also attains the rate $1/t$ for smooth functions. This method is
called mirror prox and it was introduced in [@Nem04]. The true power of
mirror prox will reveal itself later in the text when we deal with
smooth representations of non-smooth functions as well as stochastic
oracles[^8].
Mirror prox is described by the following equations: $$\begin{aligned}
& \nabla \Phi(y_{t+1}') = \nabla \Phi(x_{t}) - \eta \nabla f(x_t), \\
& y_{t+1} \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y_{t+1}') , \\
& \nabla \Phi(x_{t+1}') = \nabla \Phi(x_{t}) - \eta \nabla f(y_{t+1}), \\
& x_{t+1} \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,x_{t+1}') .
\end{aligned}$$ In words the algorithm first makes a step of mirror
descent to go from $x_t$ to $y_{t+1}$, and then it makes a similar step
to obtain $x_{t+1}$, starting again from $x_t$ but this time using the
gradient of $f$ evaluated at $y_{t+1}$ (instead of $x_t$), see Figure
[\[fig:mp\]](#fig:mp){reference-type="ref" reference="fig:mp"} for an
illustration. The following result justifies the procedure.
::: theorem
Let $\Phi$ be a mirror map $\rho$-strongly convex on
$\mathcal{X} \cap \mathcal{D}$ w.r.t. $\|\cdot\|$. Let
$R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1)$,
and $f$ be convex and $\beta$-smooth w.r.t. $\|\cdot\|$. Then mirror
prox with $\eta = \frac{\rho}{\beta}$ satisfies
$$f\bigg(\frac{1}{t} \sum_{s=1}^t y_{s+1} \bigg) - f(x^*) \leq \frac{\beta R^2}{\rho t} .$$
:::
::: proof
*Proof.* Let $x \in \mathcal{X} \cap \mathcal{D}$. We write
$$\begin{aligned}
f(y_{t+1}) - f(x) & \leq & \nabla f(y_{t+1})^{\top} (y_{t+1} - x) \\
& = & \nabla f(y_{t+1})^{\top} (x_{t+1} - x) + \nabla f(x_t)^{\top} (y_{t+1} - x_{t+1}) \\
& & + (\nabla f(y_{t+1}) - \nabla f(x_t))^{\top} (y_{t+1} - x_{t+1}) .
\end{aligned}$$ We will now bound separately these three terms. For the
first one, using the definition of the method, Lemma
[\[lem:todonow2\]](#lem:todonow2){reference-type="ref"
reference="lem:todonow2"}, and equation
[\[eq:useful1\]](#eq:useful1){reference-type="eqref"
reference="eq:useful1"}, one gets $$\begin{aligned}
& \eta \nabla f(y_{t+1})^{\top} (x_{t+1} - x) \\
& = ( \nabla \Phi(x_t) - \nabla \Phi(x_{t+1}'))^{\top} (x_{t+1} - x) \\
& \leq ( \nabla \Phi(x_t) - \nabla \Phi(x_{t+1}))^{\top} (x_{t+1} - x) \\
& = D_{\Phi}(x,x_t) - D_{\Phi}(x, x_{t+1}) - D_{\Phi}(x_{t+1}, x_t) .
\end{aligned}$$ For the second term, using the same properties as above
and the strong-convexity of the mirror map one obtains $$\begin{aligned}
& \eta \nabla f(x_t)^{\top} (y_{t+1} - x_{t+1}) \notag\\
& = ( \nabla \Phi(x_t) - \nabla \Phi(y_{t+1}'))^{\top} (y_{t+1} - x_{t+1}) \notag\\
& \leq ( \nabla \Phi(x_t) - \nabla \Phi(y_{t+1}))^{\top} (y_{t+1} - x_{t+1}) \notag\\
& = D_{\Phi}(x_{t+1},x_t) - D_{\Phi}(x_{t+1}, y_{t+1}) - D_{\Phi}(y_{t+1}, x_t) \label{eq:pourplustard1}\\
& \leq D_{\Phi}(x_{t+1},x_t) - \frac{\rho}{2} \|x_{t+1} - y_{t+1} \|^2 - \frac{\rho}{2} \|y_{t+1} - x_t\|^2 . \notag
\end{aligned}$$ Finally for the last term, using Cauchy-Schwarz,
$\beta$-smoothness, and $2 ab \leq a^2 + b^2$ one gets $$\begin{aligned}
& (\nabla f(y_{t+1}) - \nabla f(x_t))^{\top} (y_{t+1} - x_{t+1}) \\
& \leq \|\nabla f(y_{t+1}) - \nabla f(x_t)\|_* \cdot \|y_{t+1} - x_{t+1} \| \\
& \leq \beta \|y_{t+1} - x_t\| \cdot \|y_{t+1} - x_{t+1} \| \\
& \leq \frac{\beta}{2} \|y_{t+1} - x_t\|^2 + \frac{\beta}{2} \|y_{t+1} - x_{t+1} \|^2 .
\end{aligned}$$ Thus summing up these three terms and using that
$\eta = \frac{\rho}{\beta}$ one gets
$$f(y_{t+1}) - f(x) \leq \frac{D_{\Phi}(x,x_t) - D_{\Phi}(x,x_{t+1})}{\eta} .$$
The proof is concluded with straightforward computations. ◻
:::
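For concreteness, here is an illustrative sketch of mirror prox in the simplex setup, with an assumed gradient oracle: each iteration performs the extrapolation step to $y_{t+1}$ and then the correction step to $x_{t+1}$ using the gradient evaluated at $y_{t+1}$.

```python
import numpy as np

def mirror_prox_simplex(grad, n, eta, T):
    """Sketch of mirror prox with the negative entropy on the simplex."""
    x = np.full(n, 1.0 / n)
    avg_y = np.zeros(n)
    for _ in range(T):
        y = x * np.exp(-eta * grad(x)); y /= y.sum()   # extrapolation step
        avg_y += y / T
        x = x * np.exp(-eta * grad(y)); x /= x.sum()   # correction step at y
    return avg_y   # the guarantee applies to the average of the y's
```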
## The vector field point of view on MD, DA, and MP {#sec:vectorfield}
In this section we consider a mirror map $\Phi$ that satisfies the
assumptions from Theorem [\[th:MD\]](#th:MD){reference-type="ref"
reference="th:MD"}.
By inspecting the proof of Theorem
[\[th:MD\]](#th:MD){reference-type="ref" reference="th:MD"} one can see
that for arbitrary vectors $g_1, \hdots, g_t \in \mathbb{R}^n$ the
mirror descent strategy described by
[\[eq:MD1\]](#eq:MD1){reference-type="eqref" reference="eq:MD1"} or
[\[eq:MD2\]](#eq:MD2){reference-type="eqref" reference="eq:MD2"} (or
alternatively by
[\[eq:MDproxview\]](#eq:MDproxview){reference-type="eqref"
reference="eq:MDproxview"}) satisfies for any
$x \in \mathcal{X}\cap \mathcal{D}$, $$\label{eq:vfMD}
\sum_{s=1}^t g_s^{\top} (x_s - x) \leq \frac{R^2}{\eta} + \frac{\eta}{2 \rho} \sum_{s=1}^t \|g_s\|_*^2 .$$
The observation that the sequence of vectors $(g_s)$ does not have to
come from the subgradients of a *fixed* function $f$ is the starting
point for the theory of online learning, see [@Bub11] for more details.
In this monograph we will use this observation to generalize mirror
descent to saddle point calculations as well as stochastic settings. We
note that we could also use dual averaging (defined by
[\[eq:DA0\]](#eq:DA0){reference-type="eqref" reference="eq:DA0"}) which
satisfies
$$\sum_{s=1}^t g_s^{\top} (x_s - x) \leq \frac{R^2}{\eta} + \frac{2 \eta}{\rho} \sum_{s=1}^t \|g_s\|_*^2 .$$
In order to generalize mirror prox we simply replace the gradient
$\nabla f$ by an arbitrary vector field
$g: \mathcal{X}\rightarrow \mathbb{R}^n$ which yields the following
equations: $$\begin{aligned}
& \nabla \Phi(y_{t+1}') = \nabla \Phi(x_{t}) - \eta g(x_t), \\
& y_{t+1} \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y_{t+1}') , \\
& \nabla \Phi(x_{t+1}') = \nabla \Phi(x_{t}) - \eta g(y_{t+1}), \\
& x_{t+1} \in \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,x_{t+1}') .
\end{aligned}$$ Under the assumption that the vector field is
$\beta$-Lipschitz w.r.t. $\|\cdot\|$, i.e.,
$\|g(x) - g(y)\|_* \leq \beta \|x-y\|$ one obtains with
$\eta = \frac{\rho}{\beta}$ $$\label{eq:vfMP}
\sum_{s=1}^t g(y_{s+1})^{\top}(y_{s+1} - x) \leq \frac{\beta R^2}{\rho}.$$
# Beyond the black-box model {#beyond}
In the black-box model non-smoothness dramatically deteriorates the rate
of convergence of first order methods from $1/t^2$ to $1/\sqrt{t}$.
However, as we already pointed out in Section
[1.5](#sec:structured){reference-type="ref" reference="sec:structured"},
we (almost) always know the function to be optimized *globally*. In
particular the "source\" of non-smoothness can often be identified. For
instance the LASSO objective (see Section
[1.1](#sec:mlapps){reference-type="ref" reference="sec:mlapps"}) is
non-smooth, but it is a sum of a smooth part (the least squares fit) and
a *simple* non-smooth part (the $\ell_1$-norm). Using this specific
structure we will propose in Section
[5.1](#sec:simplenonsmooth){reference-type="ref"
reference="sec:simplenonsmooth"} a first order method with a $1/t^2$
convergence rate, despite the non-smoothness. In Section
[5.2](#sec:sprepresentation){reference-type="ref"
reference="sec:sprepresentation"} we consider another type of
non-smoothness that can effectively be overcome, where the function is
the maximum of smooth functions. Finally we conclude this chapter with a
concise description of interior point methods, for which the structural
assumption is made on the constraint set rather than on the objective
function.
## Sum of a smooth and a simple non-smooth term {#sec:simplenonsmooth}
We consider here the following problem[^9]:
$$\min_{x \in \mathbb{R}^n} f(x) + g(x) ,$$ where $f$ is convex and
$\beta$-smooth, and $g$ is convex. We assume that $f$ can be accessed
through a first order oracle, and that $g$ is known and "simple\". What
we mean by simplicity will be clear from the description of the
algorithm. For instance a separable function, that is
$g(x) = \sum_{i=1}^n g_i(x(i))$, will be considered as simple; the prime
example is $g(x) = \|x\|_1$. This section is inspired by [@BT09]
(see also [@Nes07; @WNF09]).
## ISTA (Iterative Shrinkage-Thresholding Algorithm) {#ista-iterative-shrinkage-thresholding-algorithm .unnumbered}
Recall that gradient descent on the smooth function $f$ can be written
as (see [\[eq:MDproxview\]](#eq:MDproxview){reference-type="eqref"
reference="eq:MDproxview"})
$$x_{t+1} = \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} \eta \nabla f(x_t)^{\top} x + \frac{1}{2} \|x - x_t\|^2_2 .$$
Here one wants to minimize $f+g$, and $g$ is assumed to be known and
"simple\". Thus it seems quite natural to consider the following update
rule, where only $f$ is locally approximated with a first order oracle:
$$\begin{aligned}
x_{t+1} & = & \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} \eta (g(x) + \nabla f(x_t)^{\top} x) + \frac{1}{2} \|x - x_t\|^2_2 \notag \\
& = & \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} \ g(x) + \frac{1}{2\eta} \|x - (x_t - \eta \nabla f(x_t)) \|_2^2 . \label{eq:proxoperator}
\end{aligned}$$ The algorithm described by the above iteration is known
as ISTA (Iterative Shrinkage-Thresholding Algorithm). In terms of
convergence rate it is easy to show that ISTA has the same convergence
rate on $f+g$ as gradient descent on $f$. More precisely with
$\eta=\frac{1}{\beta}$ one has
$$f(x_t) + g(x_t) - (f(x^*) + g(x^*)) \leq \frac{\beta \|x_1 - x^*\|^2_2}{2 t} .$$
This improved convergence rate over a subgradient descent directly on
$f+g$ comes at a price: in general
[\[eq:proxoperator\]](#eq:proxoperator){reference-type="eqref"
reference="eq:proxoperator"} may be a difficult optimization problem by
itself, and this is why one needs to assume that $g$ is simple. For
instance if $g$ can be written as $g(x) = \sum_{i=1}^n g_i(x(i))$ then
one can compute $x_{t+1}$ by solving $n$ convex problems in dimension
$1$. In the case where $g(x) = \lambda \|x\|_1$ this one-dimensional
problem is given by:
$$\min_{x \in \mathbb{R}} \ \lambda |x| + \frac{1}{2 \eta}(x - x_0)^2, \ \text{where} \ x_0 \in \mathbb{R} .$$
Elementary computations show that this problem has an analytical
solution given by $\tau_{\lambda \eta}(x_0)$, where $\tau$ is the
shrinkage operator (hence the name ISTA), defined by
$$\tau_{\alpha}(x) = (|x|-\alpha)_+ \mathrm{sign}(x) .$$ Much more is
known about
[\[eq:proxoperator\]](#eq:proxoperator){reference-type="eqref"
reference="eq:proxoperator"} (which is called the *proximal operator* of
$g$), and in fact entire monographs have been written about this
equation, see e.g. [@PB13; @BJMO12].
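As a concrete illustration, the sketch below implements ISTA for $g(x) = \lambda \|x\|_1$ with $\eta = 1/\beta$, assuming a user-supplied gradient oracle for the smooth part $f$; the proximal step is exactly the shrinkage operator $\tau_{\lambda/\beta}$.

```python
import numpy as np

def soft_threshold(v, alpha):
    """Shrinkage operator: tau_alpha(v) = sign(v) * max(|v| - alpha, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

def ista(grad_f, beta, lam, x1, T):
    """Sketch of ISTA for min_x f(x) + lam * ||x||_1 with f beta-smooth:
    a gradient step on f followed by the proximal (shrinkage) step on g."""
    x = x1
    for _ in range(T):
        x = soft_threshold(x - grad_f(x) / beta, lam / beta)
    return x
```

For instance, for a least squares fit $f(x) = \frac12 \|Ax - b\|_2^2$ one would take `grad_f = lambda x: A.T @ (A @ x - b)` and `beta = np.linalg.norm(A, 2) ** 2` (the spectral norm squared).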
## FISTA (Fast ISTA) {#fista-fast-ista .unnumbered}
An obvious idea is to combine Nesterov's accelerated gradient descent
(which results in a $1/t^2$ rate to optimize $f$) with ISTA. This
results in FISTA (Fast ISTA) which is described as follows. Let
$$\lambda_0 = 0, \ \lambda_{t} = \frac{1 + \sqrt{1+ 4 \lambda_{t-1}^2}}{2}, \ \text{and} \ \gamma_t = \frac{1 - \lambda_t}{\lambda_{t+1}}.$$
Let $x_1 = y_1$ be an arbitrary initial point, and $$\begin{aligned}
y_{t+1} & = & \mathrm{argmin}_{x \in \mathbb{R}^n} \ g(x) + \frac{\beta}{2} \|x - (x_t - \frac1{\beta} \nabla f(x_t)) \|_2^2 , \\
x_{t+1} & = & (1 - \gamma_t) y_{t+1} + \gamma_t y_t .
\end{aligned}$$ Again it is easy to show that the rate of convergence of
FISTA on $f+g$ is similar to the one of Nesterov's accelerated gradient
descent on $f$, more precisely:
$$f(y_t) + g(y_t) - (f(x^*) + g(x^*)) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t^2} .$$
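The same ingredients give a short sketch of FISTA: compared to the ISTA sketch the only addition is the momentum sequence $(\lambda_t, \gamma_t)$ defined above (the shrinkage operator is inlined so the snippet is self-contained).

```python
import numpy as np

def fista(grad_f, beta, lam, x1, T):
    """Sketch of FISTA for min_x f(x) + lam * ||x||_1, with f beta-smooth."""
    prox = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a, 0.0)
    x, y, lam_t = x1, x1, 1.0                                     # lambda_1 = 1 (from lambda_0 = 0)
    for _ in range(T):
        y_next = prox(x - grad_f(x) / beta, lam / beta)           # y_{t+1}
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam_t ** 2)) / 2.0  # lambda_{t+1}
        gamma = (1.0 - lam_t) / lam_next                          # gamma_t
        x = (1.0 - gamma) * y_next + gamma * y                    # momentum step for x_{t+1}
        y, lam_t = y_next, lam_next
    return y
```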
## CMD and RDA {#cmd-and-rda .unnumbered}
ISTA and FISTA assume smoothness in the Euclidean metric. Quite
naturally one can also use these ideas in a non-Euclidean setting.
Starting from [\[eq:MDproxview\]](#eq:MDproxview){reference-type="eqref"
reference="eq:MDproxview"} one obtains the CMD (Composite Mirror
Descent) algorithm of [@DSSST10], while with
[\[eq:DA0\]](#eq:DA0){reference-type="eqref" reference="eq:DA0"} one
obtains the RDA (Regularized Dual Averaging) of [@Xia10]. We refer to
these papers for more details.
## Smooth saddle-point representation of a non-smooth function {#sec:sprepresentation}
Quite often the non-smoothness of a function $f$ comes from a $\max$
operation. More precisely non-smooth functions can often be represented
as $$\label{eq:sprepresentation}
f(x) = \max_{1 \leq i \leq m} f_i(x) ,$$ where the functions $f_i$ are
smooth. This was the case for instance with the function we used to
prove the black-box lower bound $1/\sqrt{t}$ for non-smooth optimization
in Theorem [\[th:lb1\]](#th:lb1){reference-type="ref"
reference="th:lb1"}. We will see now that by using this structural
representation one can in fact attain a rate of $1/t$. This was first
observed in [@Nes04b], which introduced Nesterov's smoothing technique.
Here we will present the alternative method of [@Nem04] which we find
more transparent (yet another version is the Chambolle-Pock algorithm,
see [@CP11]). Most of what is described in this section can be found in
[@JN11a; @JN11b].
In the next subsection we introduce the more general problem of saddle
point computation. We then proceed to apply a modified version of mirror
descent to this problem, which will be useful both in Chapter
[6](#rand){reference-type="ref" reference="rand"} and also as a warm-up
for the more powerful modified mirror prox that we introduce next.
### Saddle point computation {#sec:sp}
Let $\mathcal{X}\subset \mathbb{R}^n$, $\mathcal{Y}\subset \mathbb{R}^m$
be compact and convex sets. Let
$\varphi: \mathcal{X}\times \mathcal{Y}\rightarrow \mathbb{R}$ be a
continuous function, such that $\varphi(\cdot, y)$ is convex and
$\varphi(x, \cdot)$ is concave. We write $g_{\mathcal{X}}(x,y)$
(respectively $g_{\mathcal{Y}}(x,y)$) for an element of
$\partial_x \varphi(x,y)$ (respectively $\partial_y (-\varphi(x,y))$).
We are interested in computing
$$\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \varphi(x,y) .$$ By
Sion's minimax theorem there exists a pair
$(x^*, y^*) \in \mathcal{X}\times \mathcal{Y}$ such that
$$\varphi(x^*,y^*) = \min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \varphi(x,y) = \max_{y \in \mathcal{Y}} \min_{x \in \mathcal{X}} \varphi(x,y) .$$
We will explore algorithms that produce a candidate pair of solutions
$(\widetilde{x}, \widetilde{y}) \in \mathcal{X}\times \mathcal{Y}$. The
quality of $(\widetilde{x}, \widetilde{y})$ is evaluated through the
so-called duality gap[^10]
$$\max_{y \in \mathcal{Y}} \varphi(\widetilde{x},y) - \min_{x \in \mathcal{X}} \varphi(x,\widetilde{y}) .$$
The key observation is that the duality gap can be controlled similarly
to the suboptimality gap $f(x) - f(x^*)$ in a simple convex optimization
problem. Indeed for any $(x, y) \in \mathcal{X}\times \mathcal{Y}$,
$$\varphi(\widetilde{x},\widetilde{y}) - \varphi(x,\widetilde{y}) \leq g_{\mathcal{X}}(\widetilde{x},\widetilde{y})^{\top} (\widetilde{x}-x),$$
and
$$- \varphi(\widetilde{x},\widetilde{y}) - (- \varphi(\widetilde{x},y)) \leq g_{\mathcal{Y}}(\widetilde{x},\widetilde{y})^{\top} (\widetilde{y}-y) .$$
In particular, using the notation
$z = (x,y) \in \mathcal{Z}:= \mathcal{X}\times \mathcal{Y}$ and
$g(z) = (g_{\mathcal{X}}(x,y), g_{\mathcal{Y}}(x,y))$ we just proved
$$\label{eq:keysp}
\max_{y \in \mathcal{Y}} \varphi(\widetilde{x},y) - \min_{x \in \mathcal{X}} \varphi(x,\widetilde{y}) \leq g(\widetilde{z})^{\top} (\widetilde{z}- z) ,$$
for some $z \in \mathcal{Z}.$ In view of the vector field point of view
developed in Section [4.6](#sec:vectorfield){reference-type="ref"
reference="sec:vectorfield"} this suggests to do a mirror descent in the
$\mathcal{Z}$-space with the vector field
$g : \mathcal{Z}\rightarrow \mathbb{R}^n \times \mathbb{R}^m$.
We will assume in the next subsections that $\mathcal{X}$ is equipped
with a mirror map $\Phi_{\mathcal{X}}$ (defined on
$\mathcal{D}_{\mathcal{X}}$) which is $1$-strongly convex w.r.t. a norm
$\|\cdot\|_{\mathcal{X}}$ on
$\mathcal{X}\cap \mathcal{D}_{\mathcal{X}}$. We denote
$R^2_{\mathcal{X}} = \sup_{x \in \mathcal{X}} \Phi_{\mathcal{X}}(x) - \min_{x \in \mathcal{X}} \Phi_{\mathcal{X}}(x)$.
We define similar quantities for the space $\mathcal{Y}$.
### Saddle Point Mirror Descent (SP-MD) {#sec:spmd}
We consider here mirror descent on the space
$\mathcal{Z}= \mathcal{X}\times \mathcal{Y}$ with the mirror map
$\Phi(z) = a \Phi_{\mathcal{X}}(x) + b \Phi_{\mathcal{Y}}(y)$ (defined
on
$\mathcal{D}= \mathcal{D}_{\mathcal{X}} \times \mathcal{D}_{\mathcal{Y}}$),
where $a, b \in \mathbb{R}_+$ are to be defined later, and with the
vector field
$g : \mathcal{Z}\rightarrow \mathbb{R}^n \times \mathbb{R}^m$ defined in
the previous subsection. We call the resulting algorithm SP-MD (Saddle
Point Mirror Descent). It can be described succinctly as follows.
Let
$z_1 \in \mathop{\mathrm{argmin}}_{z \in \mathcal{Z}\cap \mathcal{D}} \Phi(z)$.
Then for $t \geq 1$, let
$$z_{t+1} \in \mathop{\mathrm{argmin}}_{z \in \mathcal{Z}\cap \mathcal{D}} \ \eta g_t^{\top} z + D_{\Phi}(z,z_t) ,$$
where $g_t = (g_{\mathcal{X},t}, g_{\mathcal{Y},t})$ with
$g_{\mathcal{X},t} \in \partial_x \varphi(x_t,y_t)$ and
$g_{\mathcal{Y},t} \in \partial_y (- \varphi(x_t,y_t))$.
::: theorem
[]{#th:spmd label="th:spmd"} Assume that $\varphi(\cdot, y)$ is
$L_{\mathcal{X}}$-Lipschitz w.r.t. $\|\cdot\|_{\mathcal{X}}$, that is
$\|g_{\mathcal{X}}(x,y)\|_{\mathcal{X}}^* \leq L_{\mathcal{X}}, \forall (x, y) \in \mathcal{X}\times \mathcal{Y}$.
Similarly assume that $\varphi(x, \cdot)$ is $L_{\mathcal{Y}}$-Lipschitz
w.r.t. $\|\cdot\|_{\mathcal{Y}}$. Then SP-MD with
$a= \frac{L_{\mathcal{X}}}{R_{\mathcal{X}}}$,
$b=\frac{L_{\mathcal{Y}}}{R_{\mathcal{Y}}}$, and
$\eta=\sqrt{\frac{2}{t}}$ satisfies
$$\max_{y \in \mathcal{Y}} \varphi\left( \frac1{t} \sum_{s=1}^t x_s,y \right) - \min_{x \in \mathcal{X}} \varphi\left(x, \frac1{t} \sum_{s=1}^t y_s \right) \leq (R_{\mathcal{X}} L_{\mathcal{X}} + R_{\mathcal{Y}} L_{\mathcal{Y}}) \sqrt{\frac{2}{t}}.$$
:::
::: proof
*Proof.* First we endow $\mathcal{Z}$ with the norm
$\|\cdot\|_{\mathcal{Z}}$ defined by
$$\|z\|_{\mathcal{Z}} = \sqrt{a \|x\|_{\mathcal{X}}^2 + b \|y\|_{\mathcal{Y}}^2} .$$
It is immediate that $\Phi$ is $1$-strongly convex with respect to
$\|\cdot\|_{\mathcal{Z}}$ on $\mathcal{Z} \cap \mathcal{D}$. Furthermore
one can easily check that
$$\|z\|_{\mathcal{Z}}^* = \sqrt{\frac1{a} \left(\|x\|_{\mathcal{X}}^*\right)^2 + \frac1{b} \left(\|y\|_{\mathcal{Y}}^*\right)^2} ,$$
and thus the vector field $(g_t)$ used in the SP-MD satisfies:
$$\|g_t\|_{\mathcal{Z}}^* \leq \sqrt{\frac{L_{\mathcal{X}}^2}{a} + \frac{L_{\mathcal{Y}}^2}{b}} .$$
Using [\[eq:vfMD\]](#eq:vfMD){reference-type="eqref"
reference="eq:vfMD"} together with
[\[eq:keysp\]](#eq:keysp){reference-type="eqref" reference="eq:keysp"}
and the values of $a, b$ and $\eta$ concludes the proof. ◻
:::
### Saddle Point Mirror Prox (SP-MP)
We now consider the most interesting situation in the context of this
chapter, where the function $\varphi$ is smooth. Precisely we say that
$\varphi$ is $(\beta_{11}, \beta_{12}, \beta_{22}, \beta_{21})$-smooth
if for any $x, x' \in \mathcal{X}, y, y' \in \mathcal{Y}$,
$$\begin{aligned}
& \|\nabla_x \varphi(x,y) - \nabla_x \varphi(x',y) \|_{\mathcal{X}}^* \leq \beta_{11} \|x-x'\|_{\mathcal{X}} , \\
& \|\nabla_x \varphi(x,y) - \nabla_x \varphi(x,y') \|_{\mathcal{X}}^* \leq \beta_{12} \|y-y'\|_{\mathcal{Y}} , \\
& \|\nabla_y \varphi(x,y) - \nabla_y \varphi(x,y') \|_{\mathcal{Y}}^* \leq \beta_{22} \|y-y'\|_{\mathcal{Y}} , \\
& \|\nabla_y \varphi(x,y) - \nabla_y \varphi(x',y) \|_{\mathcal{Y}}^* \leq \beta_{21} \|x-x'\|_{\mathcal{X}} .
\end{aligned}$$ This will imply the Lipschitzness of the vector field
$g : \mathcal{Z}\rightarrow \mathbb{R}^n \times \mathbb{R}^m$ under the
appropriate norm. Thus we use here mirror prox on the space
$\mathcal{Z}$ with the mirror map
$\Phi(z) = a \Phi_{\mathcal{X}}(x) + b \Phi_{\mathcal{Y}}(y)$ and the
vector field $g$. The resulting algorithm is called SP-MP (Saddle Point
Mirror Prox) and we can describe it succinctly as follows.
Let
$z_1 \in \mathop{\mathrm{argmin}}_{z \in \mathcal{Z}\cap \mathcal{D}} \Phi(z)$.
Then for $t \geq 1$, let $z_t=(x_t,y_t)$ and $w_t=(u_t, v_t)$ be defined
by $$\begin{aligned}
w_{t+1} & = & \mathop{\mathrm{argmin}}_{z \in \mathcal{Z}\cap \mathcal{D}} \ \eta (\nabla_x \varphi(x_t, y_t), - \nabla_y \varphi(x_t,y_t))^{\top} z + D_{\Phi}(z,z_t) \\
z_{t+1} & = & \mathop{\mathrm{argmin}}_{z \in \mathcal{Z}\cap \mathcal{D}} \ \eta (\nabla_x \varphi(u_{t+1}, v_{t+1}), - \nabla_y \varphi(u_{t+1},v_{t+1}))^{\top} z + D_{\Phi}(z,z_t) .
\end{aligned}$$
::: theorem
[]{#th:spmp label="th:spmp"} Assume that $\varphi$ is
$(\beta_{11}, \beta_{12}, \beta_{22}, \beta_{21})$-smooth. Then SP-MP
with $a= \frac{1}{R_{\mathcal{X}}^2}$, $b=\frac{1}{R_{\mathcal{Y}}^2}$,
and
$\eta= 1 / \left(2 \max \left(\beta_{11} R^2_{\mathcal{X}}, \beta_{22} R^2_{\mathcal{Y}}, \beta_{12} R_{\mathcal{X}} R_{\mathcal{Y}}, \beta_{21} R_{\mathcal{X}} R_{\mathcal{Y}}\right) \right)$
satisfies $$\begin{aligned}
& \max_{y \in \mathcal{Y}} \varphi\left( \frac1{t} \sum_{s=1}^t u_{s+1},y \right) - \min_{x \in \mathcal{X}} \varphi\left(x, \frac1{t} \sum_{s=1}^t v_{s+1} \right) \\
& \leq \max \left(\beta_{11} R^2_{\mathcal{X}}, \beta_{22} R^2_{\mathcal{Y}}, \beta_{12} R_{\mathcal{X}} R_{\mathcal{Y}}, \beta_{21} R_{\mathcal{X}} R_{\mathcal{Y}}\right) \frac{4}{t} .
\end{aligned}$$
:::
::: proof
*Proof.* In light of the proof of Theorem
[\[th:spmd\]](#th:spmd){reference-type="ref" reference="th:spmd"} and
[\[eq:vfMP\]](#eq:vfMP){reference-type="eqref" reference="eq:vfMP"} it
clearly suffices to show that the vector field
$g(z) = (\nabla_x \varphi(x,y), - \nabla_y \varphi(x,y))$ is
$\beta$-Lipschitz w.r.t.
$\|z\|_{\mathcal{Z}} = \sqrt{\frac{1}{R_{\mathcal{X}}^2} \|x\|_{\mathcal{X}}^2 + \frac{1}{R_{\mathcal{Y}}^2} \|y\|_{\mathcal{Y}}^2}$
with
$\beta = 2 \max \left(\beta_{11} R^2_{\mathcal{X}}, \beta_{22} R^2_{\mathcal{Y}}, \beta_{12} R_{\mathcal{X}} R_{\mathcal{Y}}, \beta_{21} R_{\mathcal{X}} R_{\mathcal{Y}}\right)$.
In other words one needs to show that
$$\|g(z) - g(z')\|_{\mathcal{Z}}^* \leq \beta \|z - z'\|_{\mathcal{Z}} ,$$
which can be done with straightforward calculations (by introducing
$g(x',y)$ and using the definition of smoothness for $\varphi$). ◻
:::
### Applications {#sec:spex}
We investigate briefly three applications for SP-MD and SP-MP.
#### Minimizing a maximum of smooth functions {#sec:spex1}
The problem
[\[eq:sprepresentation\]](#eq:sprepresentation){reference-type="eqref"
reference="eq:sprepresentation"} (when $f$ has to minimized over
$\mathcal{X}$) can be rewritten as
$$\min_{x \in \mathcal{X}} \max_{y \in \Delta_m} \vec{f}(x)^{\top} y ,$$
where $\vec{f}(x) = (f_1(x), \hdots, f_m(x)) \in \mathbb{R}^m$. We
assume that the functions $f_i$ are $L$-Lipschitz and $\beta$-smooth
w.r.t. some norm $\|\cdot\|_{\mathcal{X}}$. Let us study the smoothness
of $\varphi(x,y) = \vec{f}(x)^{\top} y$ when $\mathcal{X}$ is equipped
with $\|\cdot\|_{\mathcal{X}}$ and $\Delta_m$ is equipped with
$\|\cdot\|_1$. On the one hand $\nabla_y \varphi(x,y) = \vec{f}(x)$, in
particular one immediately has $\beta_{22}=0$, and furthermore
$$\|\vec{f}(x) - \vec{f}(x') \|_{\infty} \leq L \|x-x'\|_{\mathcal{X}} ,$$
that is $\beta_{21}=L$. On the other hand
$\nabla_x \varphi(x,y) = \sum_{i=1}^m y(i) \nabla f_i(x)$, and thus
$$\begin{aligned}
& \|\sum_{i=1}^m y(i) (\nabla f_i(x) - \nabla f_i(x')) \|_{\mathcal{X}}^* \leq \beta \|x-x'\|_{\mathcal{X}} , \\
& \|\sum_{i=1}^m (y(i)-y'(i)) \nabla f_i(x) \|_{\mathcal{X}}^* \leq L\|y-y'\|_1 ,
\end{aligned}$$ that is $\beta_{11} = \beta$ and $\beta_{12} = L$. Thus
using SP-MP with some mirror map on $\mathcal{X}$ and the negentropy on
$\Delta_m$ (see the "simplex setup\" in Section
[4.3](#sec:mdsetups){reference-type="ref" reference="sec:mdsetups"}),
one obtains an $\varepsilon$-optimal point of
$f(x) = \max_{1 \leq i \leq m} f_i(x)$ in
$O\left(\frac{\beta R_{\mathcal{X}}^2 + L R_{\mathcal{X}} \sqrt{\log(m)}}{\varepsilon} \right)$
iterations. Furthermore an iteration of SP-MP has a computational
complexity of order of a step of mirror descent in $\mathcal{X}$ on the
function $x \mapsto \sum_{i=1}^m y(i) f_i(x)$ (plus $O(m)$ for the
update in the $\mathcal{Y}$-space).
Thus by using the structure of $f$ we were able to obtain a much better
rate than black-box procedures (which would have required
$\Omega(1/\varepsilon^2)$ iterations as $f$ is potentially non-smooth).
#### Matrix games {#sec:spex2}
Let $A \in \mathbb{R}^{n \times m}$, we denote $\|A\|_{\mathrm{max}}$
for the maximal entry (in absolute value) of $A$, and
$A_i \in \mathbb{R}^n$ for the $i^{th}$ column of $A$. We consider the
problem of computing a Nash equilibrium for the zero-sum game
corresponding to the loss matrix $A$, that is we want to solve
$$\min_{x \in \Delta_n} \max_{y \in \Delta_m} x^{\top} A y .$$ Here we
equip both $\Delta_n$ and $\Delta_m$ with $\|\cdot\|_1$. Let
$\varphi(x,y) = x^{\top} A y$. Using that $\nabla_x \varphi(x,y) = Ay$
and $\nabla_y \varphi(x,y) = A^{\top} x$ one immediately obtains
$\beta_{11} = \beta_{22} = 0$. Furthermore since
$$\|A(y - y') \|_{\infty} = \|\sum_{i=1}^m (y(i) - y'(i)) A_i \|_{\infty} \leq \|A\|_{\mathrm{max}} \|y - y'\|_1 ,$$
one also has $\beta_{12} = \beta_{21} = \|A\|_{\mathrm{max}}$. Thus
SP-MP with the negentropy on both $\Delta_n$ and $\Delta_m$ attains an
$\varepsilon$-optimal pair of mixed strategies with
$O\left(\|A\|_{\mathrm{max}} \sqrt{\log(n) \log(m)} / \varepsilon\right)$
iterations. Furthermore the computational complexity of a step of SP-MP
is dominated by the matrix-vector multiplications which are $O(n m)$.
Thus overall the complexity of getting an $\varepsilon$-optimal Nash
equilibrium with SP-MP is
$O\left(\|A\|_{\mathrm{max}} n m \sqrt{\log(n) \log(m)} / \varepsilon\right)$.
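A sketch of this procedure is given below. The step sizes follow the choice of $a$, $b$ and $\eta$ in Theorem [\[th:spmp\]](#th:spmp){reference-type="ref" reference="th:spmp"} (with the effective steps $\eta/a$ and $\eta/b$ written out explicitly); it is only meant to illustrate the structure of the updates, not as a tuned solver.

```python
import numpy as np

def spmp_matrix_game(A, T):
    """Sketch of SP-MP for min_{x in Delta_n} max_{y in Delta_m} x^T A y,
    with the negative entropy on both simplices. Returns the averaged
    extrapolation points (u, v), which form the approximate equilibrium."""
    n, m = A.shape
    Rx2, Ry2 = np.log(n), np.log(m)
    eta = 1.0 / (2.0 * np.abs(A).max() * np.sqrt(Rx2 * Ry2))
    sx, sy = eta * Rx2, eta * Ry2        # effective steps eta/a and eta/b
    x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    avg_u, avg_v = np.zeros(n), np.zeros(m)
    for _ in range(T):
        u = x * np.exp(-sx * (A @ y)); u /= u.sum()    # extrapolation (x-player)
        v = y * np.exp(+sy * (A.T @ x)); v /= v.sum()  # extrapolation (y-player)
        avg_u += u / T; avg_v += v / T
        x = x * np.exp(-sx * (A @ v)); x /= x.sum()    # correction (x-player)
        y = y * np.exp(+sy * (A.T @ u)); y /= y.sum()  # correction (y-player)
    return avg_u, avg_v
```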
#### Linear classification {#sec:spex3}
Let $(\ell_i, A_i) \in \{-1,1\} \times \mathbb{R}^n$, $i \in [m]$, be a
data set that one wishes to separate with a linear classifier. That is
one is looking for $x \in \mathrm{B}_{2,n}$ such that for all
$i \in [m]$, $\mathrm{sign}(x^{\top} A_i) = \mathrm{sign}(\ell_i)$, or
equivalently $\ell_i x^{\top} A_i > 0$. Clearly without loss of
generality one can assume $\ell_i = 1$ for all $i \in [m]$ (simply
replace $A_i$ by $\ell_i A_i$). Let $A \in \mathbb{R}^{n \times m}$ be
the matrix where the $i^{th}$ column is $A_i$. The problem of finding
$x$ with maximal margin can be written as $$\label{eq:linearclassif}
\max_{x \in \mathrm{B}_{2,n}} \min_{1 \leq i \leq m} A_i^{\top} x = \max_{x \in \mathrm{B}_{2,n}} \min_{y \in \Delta_m} x^{\top} A y .$$
Assuming that $\|A_i\|_2 \leq B$, and using the calculations we did in
Section [5.2.4.1](#sec:spex1){reference-type="ref"
reference="sec:spex1"}, it is clear that $\varphi(x,y) = x^{\top} A y$
is $(0, B, 0, B)$-smooth with respect to $\|\cdot\|_2$ on
$\mathrm{B}_{2,n}$ and $\|\cdot\|_1$ on $\Delta_m$. This implies in
particular that SP-MP with the Euclidean norm squared on
$\mathrm{B}_{2,n}$ and the negentropy on $\Delta_m$ will solve
[\[eq:linearclassif\]](#eq:linearclassif){reference-type="eqref"
reference="eq:linearclassif"} in $O(B \sqrt{\log(m)} / \varepsilon)$
iterations. Again the cost of an iteration is dominated by the
matrix-vector multiplications, which results in an overall complexity of
$O(B n m \sqrt{\log(m)} / \varepsilon)$ to find an $\varepsilon$-optimal
solution to
[\[eq:linearclassif\]](#eq:linearclassif){reference-type="eqref"
reference="eq:linearclassif"}.
## Interior point methods {#sec:IPM}
We describe here interior point methods (IPM), a class of algorithms
fundamentally different from what we have seen so far. The first
algorithm of this type was described in [@Kar84], but the theory we
shall present was developed in [@NN94]. We follow closely the
presentation given in \[Chapter 4, [@Nes04]\]. Other useful references
(in particular for the primal-dual IPM, which are the ones used in
practice) include [@Ren01; @Nem04b; @NW06].
IPM are designed to solve convex optimization problems of the form
$$\begin{aligned}
& \mathrm{min.} \; c^{\top} x \\
& \text{s.t.} \; x \in \mathcal{X},
\end{aligned}$$ with $c \in \mathbb{R}^n$, and
$\mathcal{X}\subset \mathbb{R}^n$ convex and compact. Note that, at this
point, the linearity of the objective is without loss of generality as
minimizing a convex function $f$ over $\mathcal{X}$ is equivalent to
minimizing a linear objective over the epigraph of $f$ (which is also a
convex set). The structural assumption on $\mathcal{X}$ that one makes
in IPM is that there exists a *self-concordant barrier* for
$\mathcal{X}$ with an easily computable gradient and Hessian. The
meaning of the previous sentence will be made precise in the next
subsections. The importance of IPM stems from the fact that LPs and SDPs
(see Section [1.5](#sec:structured){reference-type="ref"
reference="sec:structured"}) satisfy this structural assumption.
### The barrier method {#sec:barriermethod}
We say that $F : \mathrm{int}(\mathcal{X}) \rightarrow \mathbb{R}$ is a
*barrier* for $\mathcal{X}$ if
$$F(x) \xrightarrow[x \to \partial \mathcal{X}]{} +\infty .$$ We will
only consider strictly convex barriers. We extend the domain of
definition of $F$ to $\mathbb{R}^n$ with $F(x) = +\infty$ for
$x \not\in \mathrm{int}(\mathcal{X})$. For $t \in \mathbb{R}_+$ let
$$x^*(t) \in \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} t c^{\top} x + F(x) .$$
In the following we denote $F_t(x) := t c^{\top} x + F(x)$. In IPM the
path $(x^*(t))_{t \in \mathbb{R}_+}$ is referred to as the *central
path*. It seems clear that the central path eventually leads to the
minimum $x^*$ of the objective function $c^{\top} x$ on $\mathcal{X}$,
precisely we will have $$x^*(t) \xrightarrow[t \to +\infty]{} x^* .$$
The idea of the *barrier method* is to move along the central path by
"boosting\" a fast locally convergent algorithm, which we denote for the
moment by $\mathcal{A}$, using the following scheme: Assume that one has
computed $x^*(t)$, then one uses $\mathcal{A}$ initialized at $x^*(t)$
to compute $x^*(t')$ for some $t'>t$. There is a clear tension for the
choice of $t'$, on the one hand $t'$ should be large in order to make as
much progress as possible on the central path, but on the other hand
$x^*(t)$ needs to be close enough to $x^*(t')$ so that it is in the
basin of fast convergence for $\mathcal{A}$ when run on $F_{t'}$.
IPM follows the above methodology with $\mathcal{A}$ being *Newton's
method*. Indeed as we will see in the next subsection, Newton's method
has a quadratic convergence rate, in the sense that if initialized close
enough to the optimum it attains an $\varepsilon$-optimal point in
$\log\log(1/\varepsilon)$ iterations! Thus we now have a clear plan to
make these ideas formal and analyze the iteration complexity of IPM:
1. First we need to describe precisely the region of fast convergence
for Newton's method. This will lead us to define self-concordant
functions, which are "natural\" functions for Newton's method.
2. Then we need to evaluate precisely how much larger $t'$ can be
compared to $t$, so that $x^*(t)$ is still in the region of fast
convergence of Newton's method when optimizing the function $F_{t'}$
with $t'>t$. This will lead us to define $\nu$-self concordant
barriers.
3. How do we get close to the central path in the first place? Is it
possible to compute
$x^*(0) = \mathop{\mathrm{argmin}}_{x \in \mathbb{R}^n} F(x)$ (the
so-called analytical center of $\mathcal{X}$)?
### Traditional analysis of Newton's method {#sec:tradanalysisNM}
We start by describing Newton's method together with its standard
analysis showing the quadratic convergence rate when initialized close
enough to the optimum. In this subsection we denote $\|\cdot\|$ for both
the Euclidean norm on $\mathbb{R}^n$ and the operator norm on matrices
(in particular $\|A x\| \leq \|A\| \cdot \|x\|$).
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a $C^2$ function. Using
a Taylor's expansion of $f$ around $x$ one obtains
$$f(x+h) = f(x) + h^{\top} \nabla f(x) + \frac12 h^{\top} \nabla^2 f(x) h + o(\|h\|^2) .$$
Thus, starting at $x$, in order to minimize $f$ it seems natural to move
in the direction $h$ that minimizes
$$h^{\top} \nabla f(x) + \frac12 h^{\top} \nabla^2 f(x) h .$$ If
$\nabla^2 f(x)$ is positive definite then the solution to this problem
is given by $h = - [\nabla^2 f(x)]^{-1} \nabla f(x)$. Newton's method
simply iterates this idea: starting at some point
$x_0 \in \mathbb{R}^n$, it iterates for $k \geq 0$ the following
equation: $$x_{k+1} = x_k - [\nabla^2 f(x_k)]^{-1} \nabla f(x_k) .$$
While this method can have an arbitrarily bad behavior in general, if
started close enough to a strict local minimum of $f$, it can have a
very fast convergence:
::: theorem
[]{#th:NM label="th:NM"} Assume that $f$ has a Lipschitz Hessian, that
is $\| \nabla^2 f(x) - \nabla^2 f(y) \| \leq M \|x - y\|$. Let $x^*$ be a
local minimum of $f$ with strictly positive definite Hessian, that is
$\nabla^2 f(x^*) \succeq \mu \mathrm{I}_n$, $\mu > 0$. Suppose that the
initial starting point $x_0$ of Newton's method is such that
$$\|x_0 - x^*\| \leq \frac{\mu}{2 M} .$$ Then Newton's method is
well-defined and converges to $x^*$ at a quadratic rate:
$$\|x_{k+1} - x^*\| \leq \frac{M}{\mu} \|x_k - x^*\|^2.$$
:::
::: proof
*Proof.* We use the following simple formula, for
$x, h \in \mathbb{R}^n$,
$$\int_0^1 \nabla^2 f(x + s h) \ h \ ds = \nabla f(x+h) - \nabla f(x) .$$
Now note that $\nabla f(x^*) = 0$, and thus with the above formula one
obtains
$$\nabla f(x_k) = \int_0^1 \nabla^2 f(x^* + s (x_k - x^*)) \ (x_k - x^*) \ ds ,$$
which allows us to write: $$\begin{aligned}
& x_{k+1} - x^* \\
& = x_k - x^* - [\nabla^2 f(x_k)]^{-1} \nabla f(x_k) \\
& = x_k - x^* - [\nabla^2 f(x_k)]^{-1} \int_0^1 \nabla^2 f(x^* + s (x_k - x^*)) \ (x_k - x^*) \ ds \\
& = [\nabla^2 f(x_k)]^{-1} \int_0^1 [\nabla^2 f (x_k) - \nabla^2 f(x^* + s (x_k - x^*)) ] \ (x_k - x^*) \ ds .
\end{aligned}$$ In particular one has $$\begin{aligned}
& \|x_{k+1} - x^*\| \\
& \leq \|[\nabla^2 f(x_k)]^{-1}\| \\
& \times \left( \int_0^1 \| \nabla^2 f (x_k) - \nabla^2 f(x^* + s (x_k - x^*)) \| \ ds \right) \|x_k - x^* \|.
\end{aligned}$$ Using the Lipschitz property of the Hessian one
immediately obtains that
$$\left( \int_0^1 \| \nabla^2 f (x_k) - \nabla^2 f(x^* + s (x_k - x^*)) \| \ ds \right) \leq \frac{M}{2} \|x_k - x^*\| .$$
Using again the Lipschitz property of the Hessian (note that
$\|A - B\| \leq s \Leftrightarrow s \mathrm{I}_n \succeq A - B \succeq - s \mathrm{I}_n$),
the hypothesis on $x^*$, and an induction hypothesis that
$\|x_k - x^*\| \leq \frac{\mu}{2M}$, one has
$$\nabla^2 f(x_k) \succeq \nabla^2 f(x^*) - M \|x_k - x^*\| \mathrm{I}_n \succeq (\mu - M \|x_k - x^*\|) \mathrm{I}_n \succeq \frac{\mu}{2} \mathrm{I}_n ,$$
which concludes the proof. ◻
:::
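For reference, the iteration analyzed above can be sketched in a few lines, given callables for the gradient and the Hessian; no damping or line search is included, so the sketch is only meaningful when started in the region of quadratic convergence.

```python
import numpy as np

def newton_method(grad, hess, x0, iters=20):
    """Sketch of the pure Newton iteration
    x_{k+1} = x_k - [hess f(x_k)]^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))
    return x
```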
### Self-concordant functions
Before giving the definition of self-concordant functions let us try to
get some insight into the "geometry\" of Newton's method. Let $A$ be an
$n \times n$ non-singular matrix. We look at a Newton step on the
functions $f: x \mapsto f(x)$ and $\varphi: y \mapsto f(A^{-1} y)$,
starting respectively from $x$ and $y= A x$, that is:
$$x^+ = x - [\nabla^2 f(x)]^{-1} \nabla f(x) , \; \text{and} \; y^+ = y - [\nabla^2 \varphi(y)]^{-1} \nabla \varphi(y) .$$
By using the following simple formulas
$$\nabla (x \mapsto f(A x) ) = A^{\top} \nabla f(A x) , \; \text{and} \; \nabla^2 (x \mapsto f(A x) ) = A^{\top} \nabla^2 f(A x) A ,$$
it is easy to show that $$y^+ = A x^+ .$$ In other words Newton's method
will follow the same trajectory in the "$x$-space\" and in the
"$y$-space\" (the image through $A$ of the $x$-space), that is Newton's
method is *affine invariant*. Observe that this property is not shared
by the methods described in Chapter [3](#dimfree){reference-type="ref"
reference="dimfree"} (except for the conditional gradient descent).
The affine invariance of Newton's method casts some concerns on the
assumptions of the analysis in Section
[5.3.2](#sec:tradanalysisNM){reference-type="ref"
reference="sec:tradanalysisNM"}. Indeed the assumptions are all in terms
of the canonical inner product in $\mathbb{R}^n$. However we just showed
that the method itself does not depend on the choice of the inner
product (again this is not true for first order methods). Thus one would
like to derive a result similar to Theorem
[\[th:NM\]](#th:NM){reference-type="ref" reference="th:NM"} without any
reference to a prespecified inner product. The idea of self-concordance
is to modify the Lipschitz assumption on the Hessian to achieve this
goal.
Assume from now on that $f$ is $C^3$, and let
$\nabla^3 f(x) : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$
be the third order differential operator. The Lipschitz assumption on
the Hessian in Theorem [\[th:NM\]](#th:NM){reference-type="ref"
reference="th:NM"} can be written as:
$$\nabla^3 f(x) [h,h,h] \leq M \|h\|_2^3 .$$ The issue is that this
inequality depends on the choice of an inner product. More importantly
it is easy to see that a convex function which goes to infinity on a
compact set simply cannot satisfy the above inequality. A natural idea
to try to fix these issues is to replace the Euclidean metric on the right
hand side by the metric given by the function $f$ itself at $x$, that
is: $$\|h\|_x = \sqrt{ h^{\top} \nabla^2 f(x) h }.$$ Observe that to be
clear one should rather use the notation $\|\cdot\|_{x, f}$, but since
$f$ will always be clear from the context we stick to $\|\cdot\|_x$.
::: definition
Let $\mathcal{X}$ be a convex set with non-empty interior, and $f$ a
$C^3$ convex function defined on $\mathrm{int}(\mathcal{X})$. Then $f$
is self-concordant (with constant $M$) if for all
$x \in \mathrm{int}(\mathcal{X}), h \in \mathbb{R}^n$,
$$\nabla^3 f(x) [h,h,h] \leq M \|h\|_x^3 .$$ We say that $f$ is standard
self-concordant if $f$ is self-concordant with constant $M=2$.
:::
An easy consequence of the definition is that a self-concordant function
is a barrier for the set $\mathcal{X}$, see \[Theorem 4.1.4, [@Nes04]\].
The main example to keep in mind of a standard self-concordant function
is $f(x) = - \log x$ for $x > 0$. The next definition will be key in
order to describe the region of quadratic convergence for Newton's
method on self-concordant functions.
::: definition
Let $f$ be a standard self-concordant function on $\mathcal{X}$. For
$x \in \mathrm{int}(\mathcal{X})$, we say that
$\lambda_f(x) = \|\nabla f(x)\|_x^*$ is the *Newton decrement* of $f$ at
$x$.
:::
An important inequality is that for $x$ such that $\lambda_f(x) < 1$,
and $x^* = \mathop{\mathrm{argmin}}f(x)$, one has $$\label{eq:trucipm3}
\|x - x^*\|_x \leq \frac{\lambda_f(x)}{1 - \lambda_f(x)} ,$$ see
\[Equation 4.1.18, [@Nes04]\]. We state the next theorem without a
proof, see also \[Theorem 4.1.14, [@Nes04]\].
::: theorem
[]{#th:NMsc label="th:NMsc"} Let $f$ be a standard self-concordant
function on $\mathcal{X}$, and $x \in \mathrm{int}(\mathcal{X})$ such
that $\lambda_f(x) \leq 1/4$, then
$$\lambda_f\Big(x - [\nabla^2 f(x)]^{-1} \nabla f(x)\Big) \leq 2 \lambda_f(x)^2 .$$
:::
In other words the above theorem states that, if initialized at a point
$x_0$ such that $\lambda_f(x_0) \leq 1/4$, then Newton's iterates
satisfy $\lambda_f(x_{k+1}) \leq 2 \lambda_f(x_k)^2$. Thus, Newton's
region of quadratic convergence for self-concordant functions can be
described as a "Newton decrement ball\" $\{x : \lambda_f(x) \leq 1/4\}$.
In particular by taking the barrier to be a self-concordant function we
have now resolved Step (1) of the plan described in Section
[5.3.1](#sec:barriermethod){reference-type="ref"
reference="sec:barriermethod"}.
### $\nu$-self-concordant barriers
We deal here with Step (2) of the plan described in Section
[5.3.1](#sec:barriermethod){reference-type="ref"
reference="sec:barriermethod"}. Given Theorem
[\[th:NMsc\]](#th:NMsc){reference-type="ref" reference="th:NMsc"} we
want $t'$ to be as large as possible and such that $$\label{eq:trucipm1}
\lambda_{F_{t'}}(x^*(t) ) \leq 1/4 .$$ Since the Hessian of $F_{t'}$ is
the Hessian of $F$, one has
$$\lambda_{F_{t'}}(x^*(t) ) = \|t' c + \nabla F(x^*(t)) \|_{x^*(t)}^* .$$
Observe that, by first order optimality, one has
$t c + \nabla F(x^*(t)) = 0,$ which yields $$\label{eq:trucipm11}
\lambda_{F_{t'}}(x^*(t) ) = (t'-t) \|c\|^*_{x^*(t)} .$$ Thus taking
$$\label{eq:trucipm2}
t' = t + \frac{1}{4 \|c\|^*_{x^*(t)}}$$ immediately yields
[\[eq:trucipm1\]](#eq:trucipm1){reference-type="eqref"
reference="eq:trucipm1"}. In particular with the value of $t'$ given in
[\[eq:trucipm2\]](#eq:trucipm2){reference-type="eqref"
reference="eq:trucipm2"} the Newton's method on $F_{t'}$ initialized at
$x^*(t)$ will converge quadratically fast to $x^*(t')$.
It remains to verify that by iterating
[\[eq:trucipm2\]](#eq:trucipm2){reference-type="eqref"
reference="eq:trucipm2"} one obtains a sequence diverging to infinity,
and to estimate the rate of growth. Thus one needs to control
$\|c\|^*_{x^*(t)} = \frac1{t} \|\nabla F(x^*(t))\|_{x^*(t)}^*$. Luckily
there is a natural class of functions for which one can control
$\|\nabla F(x)\|_x^*$ uniformly over $x$. This is the set of functions
such that $$\label{eq:nu}
\nabla^2 F(x) \succeq \frac1{\nu} \nabla F(x) [\nabla F(x) ]^{\top} .$$
Indeed in that case one has: $$\begin{aligned}
\|\nabla F(x)\|_x^* & = & \sup_{h : h^{\top} \nabla^2 F(x) h \leq 1} \nabla F(x)^{\top} h \\
& \leq & \sup_{h : h^{\top} \left( \frac1{\nu} \nabla F(x) [\nabla F(x) ]^{\top} \right) h \leq 1} \nabla F(x)^{\top} h \\
& = & \sqrt{\nu} .
\end{aligned}$$ Thus a safe choice to increase the penalization
parameter is $t' = \left(1 + \frac1{4\sqrt{\nu}}\right) t$. Note that
the condition [\[eq:nu\]](#eq:nu){reference-type="eqref"
reference="eq:nu"} can also be written as the fact that the function $F$
is $\frac1{\nu}$-exp-concave, that is
$x \mapsto \exp(- \frac1{\nu} F(x))$ is concave. We arrive at the
following definition.
::: definition
$F$ is a $\nu$-self-concordant barrier if it is a standard
self-concordant function, and it is $\frac1{\nu}$-exp-concave.
:::
Again the canonical example is the logarithmic function,
$x \mapsto - \log x$, which is a $1$-self-concordant barrier for the set
$\mathbb{R}_{+}$. We state the next theorem without a proof (see [@BE14]
for more on this result).
::: theorem
Let $\mathcal{X} \subset \mathbb{R}^n$ be a closed convex set with
non-empty interior. There exists $F$ which is a
$(c \ n)$-self-concordant barrier for $\mathcal{X}$ (where $c$ is some
universal constant).
:::
A key property of $\nu$-self-concordant barriers is the following
inequality: $$\label{eq:key}
c^{\top} x^*(t) - \min_{x \in \mathcal{X}} c^{\top} x \leq \frac{\nu}{t} ,$$
see \[Equation (4.2.17), [@Nes04]\]. More generally using
[\[eq:key\]](#eq:key){reference-type="eqref" reference="eq:key"}
together with [\[eq:trucipm3\]](#eq:trucipm3){reference-type="eqref"
reference="eq:trucipm3"} one obtains $$\begin{aligned}
c^{\top} y- \min_{x \in \mathcal{X}} c^{\top} x & \leq & \frac{\nu}{t} + c^{\top} (y - x^*(t)) \notag \\
& = & \frac{\nu}{t} + \frac{1}{t} (\nabla F_t(y) - \nabla F(y))^{\top} (y - x^*(t)) \notag \\
& \leq & \frac{\nu}{t} + \frac{1}{t} \|\nabla F_t(y) - \nabla F(y)\|_y^* \cdot \|y - x^*(t)\|_y \notag \\
& \leq & \frac{\nu}{t} + \frac{1}{t} (\lambda_{F_t}(y) + \sqrt{\nu})\frac{\lambda_{F_t} (y)}{1 - \lambda_{F_t}(y)} . \label{eq:trucipm4}
\end{aligned}$$ In the next section we describe a precise algorithm
based on the ideas we developed above. As we will see one cannot ensure
to be exactly on the central path, and thus it is useful to generalize
the identity [\[eq:trucipm11\]](#eq:trucipm11){reference-type="eqref"
reference="eq:trucipm11"} for a point $x$ close to the central path. We
do this as follows: $$\begin{aligned}
\lambda_{F_{t'}}(x) & = & \|t' c + \nabla F(x)\|_x^* \notag \\
& = & \|(t' / t) (t c + \nabla F(x)) + (1- t'/t) \nabla F(x)\|_x^* \notag \\
& \leq & \frac{t'}{t} \lambda_{F_t}(x) + \left(\frac{t'}{t} - 1\right) \sqrt{\nu} .\label{eq:trucipm12}
\end{aligned}$$
### Path-following scheme
We can now formally describe and analyze the most basic IPM called the
*path-following scheme*. Let $F$ be $\nu$-self-concordant barrier for
$\mathcal{X}$. Assume that one can find $x_0$ such that
$\lambda_{F_{t_0}}(x_0) \leq 1/4$ for some small value $t_0 >0$ (we
describe a method to find $x_0$ at the end of this subsection). Then for
$k \geq 0$, let $$\begin{aligned}
& & t_{k+1} = \left(1 + \frac1{13\sqrt{\nu}}\right) t_k ,\\
& & x_{k+1} = x_k - [\nabla^2 F(x_k)]^{-1} (t_{k+1} c + \nabla F(x_k) ) .
\end{aligned}$$ The next theorem shows that after
$O\left( \sqrt{\nu} \log \frac{\nu}{t_0 \varepsilon} \right)$ iterations
of the path-following scheme one obtains an $\varepsilon$-optimal point.
::: theorem
The path-following scheme described above satisfies
$$c^{\top} x_k - \min_{x \in \mathcal{X}} c^{\top} x \leq \frac{2 \nu}{t_0} \exp\left( - \frac{k}{1+13\sqrt{\nu}} \right) .$$
:::
::: proof
*Proof.* We show that the iterates $(x_k)_{k \geq 0}$ remain close to
the central path $(x^*(t_k))_{k \geq 0}$. Precisely one can easily prove
by induction that $$\lambda_{F_{t_k}}(x_k) \leq 1/4 .$$ Indeed using
Theorem [\[th:NMsc\]](#th:NMsc){reference-type="ref"
reference="th:NMsc"} and equation
[\[eq:trucipm12\]](#eq:trucipm12){reference-type="eqref"
reference="eq:trucipm12"} one immediately obtains $$\begin{aligned}
\lambda_{F_{t_{k+1}}}(x_{k+1}) & \leq & 2 \lambda_{F_{t_{k+1}}}(x_k)^2 \\
& \leq & 2 \left(\frac{t_{k+1}}{t_k} \lambda_{F_{t_k}}(x_k) + \left(\frac{t_{k+1}}{t_k} - 1\right) \sqrt{\nu}\right)^2 \\
& \leq & 1/4 ,
\end{aligned}$$ where we used in the last inequality that
$t_{k+1} / t_k = 1 + \frac1{13\sqrt{\nu}}$ and $\nu \geq 1$.
Thus using [\[eq:trucipm4\]](#eq:trucipm4){reference-type="eqref"
reference="eq:trucipm4"} one obtains
$$c^{\top} x_k - \min_{x \in \mathcal{X}} c^{\top} x \leq \frac{\nu + \sqrt{\nu} / 3 + 1/12}{t_k} \leq \frac{2 \nu}{t_k} .$$
Observe that $t_{k} = \left(1 + \frac1{13\sqrt{\nu}}\right)^{k} t_0$,
which finally yields
$$c^{\top} x_k - \min_{x \in \mathcal{X}} c^{\top} x \leq \frac{2 \nu}{t_0} \left(1 + \frac1{13\sqrt{\nu}}\right)^{- k}.$$ ◻
:::
At this point we still need to explain how one can get close to an
initial point $x^*(t_0)$ of the central path. This can be done with the
following rather clever trick. Assume that one has some point
$y_0 \in \mathcal{X}$. The observation is that $y_0$ is on the central
path at $t=1$ for the problem where $c$ is replaced by
$- \nabla F(y_0)$. Now instead of following this central path as
$t \to +\infty$, one follows it as $t \to 0$. Indeed for $t$ small
enough the central paths for $c$ and for $- \nabla F(y_0)$ will be very
close. Thus we iterate the following equations, starting with
$t_0' = 1$, $$\begin{aligned}
& & t_{k+1}' = \left(1 - \frac1{13\sqrt{\nu}}\right) t_k' ,\\
& & y_{k+1} = y_k - [\nabla^2 F(y_k)]^{-1} (- t_{k+1}' \nabla F(y_0) + \nabla F(y_k) ) .
\end{aligned}$$ A straightforward analysis shows that for
$k = O(\sqrt{\nu} \log \nu)$, which corresponds to $t_k'=1/\nu^{O(1)}$,
one obtains a point $y_k$ such that $\lambda_{F_{t_k'}}(y_k) \leq 1/4$.
In other words one can initialize the path-following scheme with
$t_0 = t_k'$ and $x_0 = y_k$.
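For illustration, the sketch below instantiates the path-following scheme for an LP of the form $\min c^{\top} x$ s.t. $Ax \leq b$, with the logarithmic barrier $F(x) = - \sum_i \log(b_i - a_i^{\top} x)$ (a $\nu$-self-concordant barrier with $\nu$ equal to the number of constraints). The starting point is assumed to be close to the central path at $t_0$, e.g. as produced by the auxiliary scheme just described.

```python
import numpy as np

def path_following_lp(c, A, b, x0, t0, iters):
    """Sketch of the path-following scheme for min c^T x s.t. A x <= b,
    using the logarithmic barrier and one Newton step per increase of t."""
    nu = A.shape[0]                          # number of constraints
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(iters):
        t *= 1.0 + 1.0 / (13.0 * np.sqrt(nu))
        s = b - A @ x                        # slacks, assumed to stay positive
        grad_F = A.T @ (1.0 / s)
        hess_F = A.T @ (A / s[:, None] ** 2)
        x = x - np.linalg.solve(hess_F, t * c + grad_F)
    return x
```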
### IPMs for LPs and SDPs
We have seen that, roughly, the complexity of interior point methods
with a $\nu$-self-concordant barrier is
$O\left(M \sqrt{\nu} \log \frac{\nu}{\varepsilon} \right)$, where $M$ is
the complexity of computing a Newton direction (which can be done by
computing and inverting the Hessian of the barrier). Thus the efficiency
of the method is directly related to the *form* of the self-concordant
barrier that one can construct for $\mathcal{X}$. It turns out that for
LPs and SDPs one has particularly nice self-concordant barriers. Indeed
one can show that $F(x) = - \sum_{i=1}^n \log x_i$ is an
$n$-self-concordant barrier on $\mathbb{R}_{+}^n$, and
$F(X) = - \log \mathrm{det}(X)$ is an $n$-self-concordant barrier on
$\mathbb{S}_{+}^n$. See also [@LS13] for a recent improvement of the
basic logarithmic barrier for LPs.
There is one important issue that we overlooked so far. In most
interesting cases LPs and SDPs come with *equality constraints*,
resulting in a set of constraints $\mathcal{X}$ with empty interior.
From a theoretical point of view there is an easy fix, which is to
reparametrize the problem as to enforce the variables to live in the
subspace spanned by $\mathcal{X}$. This modification also has
algorithmic consequences, as the evaluation of the Newton direction will
now be different. In fact, rather than doing a reparametrization, one
can simply search for Newton directions such that the updated point will
stay in $\mathcal{X}$. In other words one has now to solve a convex
quadratic optimization problem under linear equality constraints.
Luckily using Lagrange multipliers one can find a closed form solution
to this problem, and we refer to previous references for more details.
# Convex optimization and randomness {#rand}
In this chapter we explore the interplay between optimization and
randomness. A key insight, going back to [@RM51], is that first order
methods are quite robust: the gradients do not have to be computed
exactly to ensure progress towards the optimum. Indeed since these
methods usually do many small steps, as long as the gradients are
correct *on average*, the error introduced by the gradient
approximations will eventually vanish. As we will see below this
intuition is correct for non-smooth optimization (since the steps are
indeed small) but the picture is more subtle in the case of smooth
optimization (recall from Chapter [3](#dimfree){reference-type="ref"
reference="dimfree"} that in this case we take long steps).
We introduce now the main object of this chapter: a (first order)
*stochastic* oracle for a convex function
$f : \mathcal{X}\rightarrow \mathbb{R}$ takes as input a point
$x \in \mathcal{X}$ and outputs a random variable $\widetilde{g}(x)$
such that $\mathbb{E}\ \widetilde{g}(x) \in \partial f(x)$. In the case
where the query point $x$ is a random variable (possibly obtained from
previous queries to the oracle), one assumes that
$\mathbb{E}\ (\widetilde{g}(x) | x) \in \partial f(x)$.
The unbiasedness assumption by itself is not enough to obtain rates of
convergence, one also needs to make assumptions about the fluctuations
of $\widetilde{g}(x)$. Essentially in the non-smooth case we will assume
that there exists $B >0$ such that
$\mathbb{E}\|\widetilde{g}(x)\|_*^2 \leq B^2$ for all
$x \in \mathcal{X}$, while in the smooth case we assume that there
exists $\sigma > 0$ such that
$\mathbb{E}\|\widetilde{g}(x) - \nabla f(x)\|_*^2 \leq \sigma^2$ for all
$x \in \mathcal{X}$.
We also note that the situation with a *biased* oracle is quite
different, and we refer to [@Asp08; @SLRB11] for some works in this
direction.
The two canonical examples of a stochastic oracle in machine learning
are as follows.
Let $f(x) = \mathbb{E}_{\xi} \ell(x, \xi)$ where $\ell(x, \xi)$ should
be interpreted as the loss of predictor $x$ on the example $\xi$. We
assume that $\ell(\cdot, \xi)$ is a (differentiable[^11]) convex
function for any $\xi$. The goal is to find a predictor with minimal
expected loss, that is to minimize $f$. When queried at $x$ the
stochastic oracle can draw $\xi$ from the unknown distribution and
report $\nabla_x \ell(x, \xi)$. One obviously has
$\mathbb{E}_{\xi} \nabla_x \ell(x, \xi) \in \partial f(x)$.
The second example is the one described in Section
[1.1](#sec:mlapps){reference-type="ref" reference="sec:mlapps"}, where
one wants to minimize $f(x) = \frac{1}{m} \sum_{i=1}^m f_i(x)$. In this
situation a stochastic oracle can be obtained by selecting uniformly at
random $I \in [m]$ and reporting $\nabla f_I(x)$.
Observe that the stochastic oracles in the two above cases are quite
different. Consider the standard situation where one has access to a
data set of i.i.d. samples $\xi_1, \hdots, \xi_m$. Thus in the first
case, where one wants to minimize the *expected loss*, one is limited to
$m$ queries to the oracle, that is to a *single pass* over the data
(indeed one cannot ensure that the conditional expectations are correct
if one uses twice a data point). On the contrary for the *empirical
loss* where $f_i(x) = \ell(x, \xi_i)$ one can do as many passes as one
wishes.
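As an illustration of the second construction, here is a minimal sketch (in Python with NumPy, on hypothetical least-squares data) of a stochastic oracle that, when queried at $x$, draws $I$ uniformly in $[m]$ and returns $\nabla f_I(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for f(x) = (1/m) sum_i f_i(x) with f_i(x) = 0.5*(a_i @ x - b_i)^2
m, n = 1000, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def stochastic_gradient(x):
    """Unbiased estimate of grad f(x): pick I uniformly in [m], return grad f_I(x)."""
    i = rng.integers(m)
    return (A[i] @ x - b[i]) * A[i]
```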
## Non-smooth stochastic optimization {#sec:smd}
We initiate our study with stochastic mirror descent (S-MD) which is
defined as follows:
$x_1 \in \mathop{\mathrm{argmin}}_{\mathcal{X}\cap \mathcal{D}} \Phi(x)$,
and
$$x_{t+1} = \mathop{\mathrm{argmin}}_{x \in \mathcal{X} \cap \mathcal{D}} \ \eta \widetilde{g}(x_t)^{\top} x + D_{\Phi}(x,x_t) .$$
In this case equation [\[eq:vfMD\]](#eq:vfMD){reference-type="eqref"
reference="eq:vfMD"} rewrites
$$\sum_{s=1}^t \widetilde{g}(x_s)^{\top} (x_s - x) \leq \frac{R^2}{\eta} + \frac{\eta}{2 \rho} \sum_{s=1}^t \|\widetilde{g}(x_s)\|_*^2 .$$
This immediately yields a rate of convergence thanks to the following
simple observation based on the tower rule: $$\begin{aligned}
\mathbb{E}f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - f(x) & \leq & \frac{1}{t} \mathbb{E}\sum_{s=1}^t (f(x_s) - f(x)) \\
& \leq & \frac{1}{t} \mathbb{E}\sum_{s=1}^t \mathbb{E}(\widetilde{g}(x_s) | x_s)^{\top} (x_s - x) \\
& = & \frac{1}{t} \mathbb{E}\sum_{s=1}^t \widetilde{g}(x_s)^{\top} (x_s - x) .
\end{aligned}$$ We just proved the following theorem.
::: theorem
[]{#th:SMD label="th:SMD"} Let $\Phi$ be a mirror map $1$-strongly
convex on $\mathcal{X} \cap \mathcal{D}$ with respect to $\|\cdot\|$,
and let
$R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1)$.
Let $f$ be convex. Furthermore assume that the stochastic oracle is such
that $\mathbb{E}\|\widetilde{g}(x)\|_*^2 \leq B^2$. Then S-MD with
$\eta = \frac{R}{B} \sqrt{\frac{2}{t}}$ satisfies
$$\mathbb{E}f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - \min_{x \in \mathcal{X}} f(x) \leq R B \sqrt{\frac{2}{t}} .$$
:::
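For concreteness, here is a minimal sketch (in Python with NumPy) of S-MD with the Euclidean mirror map $\Phi(x) = \frac12 \|x\|_2^2$, i.e. projected SGD, using the fixed step size from the theorem and returning the averaged iterate; `oracle` and `project` are assumed to be supplied by the user (for instance the oracle from the previous sketch and a projection onto a Euclidean ball of radius $R$).

```python
import numpy as np

def projected_sgd_averaged(oracle, project, x1, R, B, t):
    """S-MD with Phi(x) = 0.5*||x||_2^2 (projected SGD), fixed step
    eta = (R/B)*sqrt(2/t), returning the averaged iterate (1/t) sum_s x_s."""
    eta = (R / B) * np.sqrt(2.0 / t)
    x = np.array(x1, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(t):
        avg += x / t
        g = oracle(x)                # random vector with E[g | x] in the subdifferential of f at x
        x = project(x - eta * g)     # the Bregman projection is the Euclidean projection here
    return avg

def make_ball_projection(R):
    """Projection onto the Euclidean ball of radius R (one possible choice of `project`)."""
    def project(x):
        nrm = np.linalg.norm(x)
        return x if nrm <= R else (R / nrm) * x
    return project
```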
Similarly, in the Euclidean and strongly convex case, one can directly
generalize Theorem [\[th:LJSB12\]](#th:LJSB12){reference-type="ref"
reference="th:LJSB12"}. Precisely we consider stochastic gradient
descent (SGD), that is S-MD with $\Phi(x) = \frac12 \|x\|_2^2$, with
time-varying step size $(\eta_t)_{t \geq 1}$, that is
$$x_{t+1} = \Pi_{\mathcal{X}}(x_t - \eta_t \widetilde{g}(x_t)) .$$
::: theorem
[]{#th:sgdstrong label="th:sgdstrong"} Let $f$ be $\alpha$-strongly
convex, and assume that the stochastic oracle is such that
$\mathbb{E}\|\widetilde{g}(x)\|_*^2 \leq B^2$. Then SGD with
$\eta_s = \frac{2}{\alpha (s+1)}$ satisfies
$$\mathbb{E} f \left(\sum_{s=1}^t \frac{2 s}{t(t+1)} x_s \right) - f(x^*) \leq \frac{2 B^2}{\alpha (t+1)} .$$
:::
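A corresponding sketch for the strongly convex case, with the time-varying step size $\eta_s = \frac{2}{\alpha(s+1)}$ and the weighted average appearing in the theorem (`oracle` and `project` are again user-supplied assumptions; `project` can be the identity in the unconstrained case).

```python
import numpy as np

def sgd_strongly_convex(oracle, project, x1, alpha, t):
    """SGD with eta_s = 2/(alpha*(s+1)), returning the weighted average
    sum_{s=1}^t 2s/(t(t+1)) * x_s from the theorem above."""
    x = np.array(x1, dtype=float)
    weighted_avg = np.zeros_like(x)
    for s in range(1, t + 1):
        weighted_avg += 2.0 * s / (t * (t + 1)) * x
        eta = 2.0 / (alpha * (s + 1))
        x = project(x - eta * oracle(x))
    return weighted_avg
```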
## Smooth stochastic optimization and mini-batch SGD
In the previous section we showed that, for non-smooth optimization,
there is basically no cost for having a stochastic oracle instead of an
exact oracle. Unfortunately one can show (see e.g. [@Tsy03]) that
smoothness does not bring any acceleration for a general stochastic
oracle[^12]. This is in sharp contrast with the exact oracle case where
we showed that gradient descent attains a $1/t$ rate (instead of
$1/\sqrt{t}$ for non-smooth), and this could even be improved to $1/t^2$
thanks to Nesterov's accelerated gradient descent.
The next result interpolates between the $1/\sqrt{t}$ for stochastic
smooth optimization, and the $1/t$ for deterministic smooth
optimization. We will use it to propose a useful modification of SGD in
the smooth case. The proof is extracted from [@DGBSX12].
::: theorem
[]{#th:SMDsmooth label="th:SMDsmooth"} Let $\Phi$ be a mirror map
$1$-strongly convex on $\mathcal{X} \cap \mathcal{D}$ w.r.t.
$\|\cdot\|$, and let
$R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1)$.
Let $f$ be convex and $\beta$-smooth w.r.t. $\|\cdot\|$. Furthermore
assume that the stochastic oracle is such that
$\mathbb{E}\|\nabla f(x) - \widetilde{g}(x)\|_*^2 \leq \sigma^2$. Then
S-MD with stepsize $\frac{1}{\beta + 1/\eta}$ and
$\eta = \frac{R}{\sigma} \sqrt{\frac{2}{t}}$ satisfies
$$\mathbb{E}f\bigg(\frac{1}{t} \sum_{s=1}^t x_{s+1} \bigg) - f(x^*) \leq R \sigma \sqrt{\frac{2}{t}} + \frac{\beta R^2}{t} .$$
:::
::: proof
*Proof.* Using $\beta$-smoothness, Cauchy-Schwarz (with
$2 ab \leq \lambda a^2+ b^2 / \lambda$ for any $\lambda >0$), and the 1-strong convexity
of $\Phi$, one obtains $$\begin{aligned}
& f(x_{s+1}) - f(x_s) \\
& \leq \nabla f(x_s)^{\top} (x_{s+1} - x_s) + \frac{\beta}{2} \|x_{s+1} - x_s\|^2 \\
& = \widetilde{g}_s^{\top} (x_{s+1} - x_s) + (\nabla f(x_s) - \widetilde{g}_s)^{\top} (x_{s+1} - x_s) + \frac{\beta}{2} \|x_{s+1} - x_s\|^2 \\
& \leq \widetilde{g}_s^{\top} (x_{s+1} - x_s) + \frac{\eta}{2} \|\nabla f(x_s) - \widetilde{g}_s\|_*^2 + \frac12 (\beta + 1/\eta) \|x_{s+1} - x_s\|^2 \\
& \leq \widetilde{g}_s^{\top} (x_{s+1} - x_s) + \frac{\eta}{2} \|\nabla f(x_s) - \widetilde{g}_s\|_*^2 + (\beta + 1/\eta) D_{\Phi}(x_{s+1}, x_s) .
\end{aligned}$$ Observe that, using the same argument as to derive
[\[eq:pourplustard1\]](#eq:pourplustard1){reference-type="eqref"
reference="eq:pourplustard1"}, one has
$$\frac{1}{\beta + 1/\eta} \widetilde{g}_s^{\top} (x_{s+1} - x^*) \leq D_{\Phi} (x^*, x_s) - D_{\Phi}(x^*, x_{s+1}) - D_{\Phi}(x_{s+1}, x_s) .$$
Thus $$\begin{aligned}
& f(x_{s+1}) \\
& \leq f(x_s) + \widetilde{g}_s^{\top}(x^* - x_s) + (\beta + 1/\eta) \left(D_{\Phi} (x^*, x_s) - D_{\Phi}(x^*, x_{s+1})\right) \\
& \qquad + \frac{\eta}{2} \|\nabla f(x_s) - \widetilde{g}_s\|_*^2 \\
& \leq f(x^*) + (\widetilde{g}_s-\nabla f(x_s))^{\top}(x^* - x_s) \\
& \qquad + (\beta + 1/\eta) \left(D_{\Phi} (x^*, x_s) - D_{\Phi}(x^*, x_{s+1})\right) + \frac{\eta}{2} \|\nabla f(x_s) - \widetilde{g}_s\|_*^2 .
\end{aligned}$$ In particular this yields
$$\mathbb{E}f(x_{s+1}) - f(x^*) \leq (\beta + 1/\eta) \mathbb{E}\left(D_{\Phi} (x^*, x_s) - D_{\Phi}(x^*, x_{s+1})\right) + \frac{\eta \sigma^2}{2} .$$
By summing this inequality from $s=1$ to $s=t$ one can easily conclude
with the standard argument. ◻
:::
We can now propose the following modification of SGD based on the idea
of *mini-batches*. Let $m \in \mathbb{N}$, then mini-batch SGD iterates
the following equation:
$$x_{t+1} = \Pi_{\mathcal{X}}\left(x_t - \frac{\eta}{m} \sum_{i=1}^m \widetilde{g}_i(x_t)\right),$$
where $\widetilde{g}_i(x_t), i=1,\hdots,m$ are independent random
variables (conditionally on $x_t$) obtained from repeated queries to the
stochastic oracle. Assuming that $f$ is $\beta$-smooth and that the
stochastic oracle is such that $\|\widetilde{g}(x)\|_2 \leq B$, one can
obtain a rate of convergence for mini-batch SGD with Theorem
[\[th:SMDsmooth\]](#th:SMDsmooth){reference-type="ref"
reference="th:SMDsmooth"}. Indeed one can apply this result with the
modified stochastic oracle that returns
$\frac{1}{m} \sum_{i=1}^m \widetilde{g}_i(x)$, which satisfies
$$\mathbb{E}\| \frac1{m} \sum_{i=1}^m \widetilde{g}_i(x) - \nabla f(x) \|_2^2 = \frac{1}{m}\mathbb{E}\| \widetilde{g}_1(x) - \nabla f(x) \|_2^2 \leq \frac{2 B^2}{m} .$$
Thus one obtains that with $t$ calls to the (original) stochastic
oracle, that is $t/m$ iterations of the mini-batch SGD, one has a
suboptimality gap bounded by
$$R \sqrt{\frac{2 B^2}{m}} \sqrt{\frac{2}{t/m}} + \frac{\beta R^2}{t/m} = 2 \frac{R B}{\sqrt{t}} + \frac{m \beta R^2}{t} .$$
Thus as long as $m \leq \frac{B}{R \beta} \sqrt{t}$ one obtains, with
mini-batch SGD and $t$ calls to the oracle, a point which is
$3\frac{R B}{\sqrt{t}}$-optimal.
Mini-batch SGD can be a better option than basic SGD in at least two
situations: (i) When the computation for an iteration of mini-batch SGD
can be distributed between multiple processors. Indeed a central unit
can send the message to the processors that estimates of the gradient at
point $x_s$ have to be computed, then each processor can work
independently and send back the estimate they obtained. (ii) Even in a
serial setting mini-batch SGD can sometimes be advantageous, in
particular if some calculations can be re-used to compute several
estimated gradients at the same point.
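A minimal sketch of mini-batch SGD (Python with NumPy, with `oracle` and `project` user-supplied as before); in the distributed setting of point (i) the $m$ oracle calls in the inner loop would be evaluated in parallel.

```python
import numpy as np

def minibatch_sgd(oracle, project, x1, eta, m, iters):
    """Mini-batch SGD: average m independent oracle calls at the current
    point before taking a projected gradient step."""
    x = np.array(x1, dtype=float)
    for _ in range(iters):
        g = np.mean([oracle(x) for _ in range(m)], axis=0)  # averaged stochastic gradient
        x = project(x - eta * g)
    return x
```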
## Sum of smooth and strongly convex functions
Let us examine in more details the main example from Section
[1.1](#sec:mlapps){reference-type="ref" reference="sec:mlapps"}. That is
one is interested in the unconstrained minimization of
$$f(x) = \frac1{m} \sum_{i=1}^m f_i(x) ,$$ where $f_1, \hdots, f_m$ are
$\beta$-smooth and convex functions, and $f$ is $\alpha$-strongly
convex. Typically in machine learning $\alpha$ can be as small as $1/m$,
while $\beta$ is of order of a constant. In other words the condition
number $\kappa= \beta / \alpha$ can be as large as $\Omega(m)$. Let us
now compare the basic gradient descent, that is
$$x_{t+1} = x_t - \frac{\eta}{m} \sum_{i=1}^m \nabla f_i(x_t) ,$$ to SGD
$$x_{t+1} = x_t - \eta \nabla f_{i_t}(x_t) ,$$ where $i_t$ is drawn
uniformly at random in $[m]$ (independently of everything else). Theorem
[\[th:gdssc\]](#th:gdssc){reference-type="ref" reference="th:gdssc"}
shows that gradient descent requires $O(m \kappa \log(1/\varepsilon))$
gradient computations (which can be improved to
$O(m \sqrt{\kappa} \log(1/\varepsilon))$ with Nesterov's accelerated
gradient descent), while Theorem
[\[th:sgdstrong\]](#th:sgdstrong){reference-type="ref"
reference="th:sgdstrong"} shows that SGD (with appropriate averaging)
requires $O(1/ (\alpha \varepsilon))$ gradient computations. Thus one
can obtain a low accuracy solution reasonably fast with SGD, but for
high accuracy the basic gradient descent is more suitable. Can we get
the best of both worlds? This question was answered positively in
[@LRSB12] with SAG (Stochastic Averaged Gradient) and in [@SSZ13] with
SDCA (Stochastic Dual Coordinate Ascent). These methods require only
$O((m+\kappa) \log(1/\varepsilon))$ gradient computations. We describe
below the SVRG (Stochastic Variance Reduced Gradient descent) algorithm
from [@JZ13] which makes the main ideas of SAG and SDCA more transparent
(see also [@DBLJ14] for more on the relation between these different
methods). We also observe that a natural question is whether one can
obtain a Nesterov's accelerated version of these algorithms that would
need only $O((m + \sqrt{m \kappa}) \log(1/\varepsilon))$, see
[@SSZ13b; @ZX14; @AB14] for recent works on this question.
To obtain a linear rate of convergence one needs to make "big steps\",
that is the step-size should be of order of a constant. In SGD the
step-size is typically of order $1/\sqrt{t}$ because of the variance
introduced by the stochastic oracle. The idea of SVRG is to "center\"
the output of the stochastic oracle in order to reduce the variance.
Precisely instead of feeding $\nabla f_{i}(x)$ into the gradient descent
one would use $\nabla f_i(x) - \nabla f_i(y) + \nabla f(y)$ where $y$ is
a centering sequence. This is a sensible idea since, when $x$ and $y$
are close to the optimum, one should have that
$\nabla f_i(x) - \nabla f_i(y)$ will have a small variance, and of
course $\nabla f(y)$ will also be small (note that $\nabla f_i(x)$ by
itself is not necessarily small). This intuition is made formal with the
following lemma.
::: lemma
[]{#lem:SVRG label="lem:SVRG"} Let $f_1, \hdots f_m$ be $\beta$-smooth
convex functions on $\mathbb{R}^n$, and $i$ be a random variable
uniformly distributed in $[m]$. Then
$$\mathbb{E}\| \nabla f_i(x) - \nabla f_i(x^*) \|_2^2 \leq 2 \beta (f(x) - f(x^*)) .$$
:::
::: proof
*Proof.* Let
$g_i(x) = f_i(x) - f_i(x^*) - \nabla f_i(x^*)^{\top} (x - x^*)$. By
convexity of $f_i$ one has $g_i(x) \geq 0$ for any $x$ and in particular
using [\[eq:onestepofgd\]](#eq:onestepofgd){reference-type="eqref"
reference="eq:onestepofgd"} this yields
$- g_i(x) \leq - \frac{1}{2\beta} \|\nabla g_i(x)\|_2^2$ which can be
equivalently written as
$$\| \nabla f_i(x) - \nabla f_i(x^*) \|_2^2 \leq 2 \beta (f_i(x) - f_i(x^*) - \nabla f_i(x^*)^{\top} (x - x^*)) .$$
Taking expectation with respect to $i$ and observing that
$\mathbb{E}\nabla f_i(x^*) = \nabla f(x^*) = 0$ yields the claimed
bound. ◻
:::
On the other hand the computation of $\nabla f(y)$ is expensive (it
requires $m$ gradient computations), and thus the centering sequence
should be updated more rarely than the main sequence. These ideas lead
to the following epoch-based algorithm.
Let $y^{(1)} \in \mathbb{R}^n$ be an arbitrary initial point. For
$s=1, 2 \ldots$, let $x_1^{(s)}=y^{(s)}$. For $t=1, \hdots, k$ let
$$x_{t+1}^{(s)} = x_t^{(s)} - \eta \left( \nabla f_{i_t^{(s)}}(x_t^{(s)}) - \nabla f_{i_t^{(s)}} (y^{(s)}) + \nabla f(y^{(s)}) \right) ,$$
where $i_t^{(s)}$ is drawn uniformly at random (and independently of
everything else) in $[m]$. Also let
$$y^{(s+1)} = \frac1{k} \sum_{t=1}^k x_t^{(s)} .$$
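The following is a minimal sketch of SVRG (Python with NumPy); `grad_i(x, i)` returning $\nabla f_i(x)$ is an assumed user-supplied function, and the theorem below suggests the choices $\eta = \frac{1}{10\beta}$ and $k = 20\kappa$.

```python
import numpy as np

def svrg(grad_i, m, x1, eta, k, epochs):
    """SVRG sketch: each epoch recomputes the full gradient at the centering
    point y, runs k variance-reduced inner steps, then re-centers at the
    average of the inner iterates."""
    rng = np.random.default_rng(0)
    y = np.array(x1, dtype=float)
    for _ in range(epochs):
        full_grad = np.mean([grad_i(y, i) for i in range(m)], axis=0)  # grad f(y)
        x = y.copy()
        inner_sum = np.zeros_like(y)
        for _ in range(k):
            inner_sum += x
            i = rng.integers(m)
            v = grad_i(x, i) - grad_i(y, i) + full_grad   # centered gradient estimate
            x = x - eta * v
        y = inner_sum / k                                  # y^{(s+1)} = (1/k) sum_t x_t^{(s)}
    return y
```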
::: theorem
[]{#th:SVRG label="th:SVRG"} Let $f_1, \hdots f_m$ be $\beta$-smooth
convex functions on $\mathbb{R}^n$ and $f$ be $\alpha$-strongly convex.
Then SVRG with $\eta = \frac{1}{10\beta}$ and $k = 20 \kappa$ satisfies
$$\mathbb{E}f(y^{(s+1)}) - f(x^*) \leq 0.9^s (f(y^{(1)}) - f(x^*)) .$$
:::
::: proof
*Proof.* We fix a phase $s \geq 1$ and we denote by $\mathbb{E}$ the
expectation taken with respect to $i_1^{(s)}, \hdots, i_k^{(s)}$. We
show below that
$$\mathbb{E}f(y^{(s+1)}) - f(x^*) = \mathbb{E}f\left(\frac1{k} \sum_{t=1}^k x_t^{(s)}\right) - f(x^*) \leq 0.9 (f(y^{(s)}) - f(x^*)) ,$$
which clearly implies the theorem. To simplify the notation in the
following we drop the dependency on $s$, that is we want to show that
$$\label{eq:SVRG0}
\mathbb{E}f\left(\frac1{k} \sum_{t=1}^k x_t\right) - f(x^*) \leq 0.9 (f(y) - f(x^*)) .$$
We start as for the proof of Theorem
[\[th:gdssc\]](#th:gdssc){reference-type="ref" reference="th:gdssc"}
(analysis of gradient descent for smooth and strongly convex functions)
with $$\label{eq:SVRG1}
\|x_{t+1} - x^*\|_2^2 = \|x_t - x^*\|_2^2 - 2 \eta v_t^{\top}(x_t - x^*) + \eta^2 \|v_t\|_2^2 ,$$
where $$v_t = \nabla f_{i_t}(x_t) - \nabla f_{i_t} (y) + \nabla f(y) .$$
Using Lemma [\[lem:SVRG\]](#lem:SVRG){reference-type="ref"
reference="lem:SVRG"}, we upper bound $\mathbb{E}_{i_t} \|v_t\|_2^2$ as
follows (also recall that
$\mathbb{E}\|X-\mathbb{E}(X)\|_2^2 \leq \mathbb{E}\|X\|_2^2$, and
$\mathbb{E}_{i_t} \nabla f_{i_t}(x^*) = 0$): $$\begin{aligned}
& \mathbb{E}_{i_t} \|v_t\|_2^2 \notag \\
& \leq 2 \mathbb{E}_{i_t} \|\nabla f_{i_t}(x_t) - \nabla f_{i_t}(x^*) \|_2^2 + 2 \mathbb{E}_{i_t} \|\nabla f_{i_t}(y) - \nabla f_{i_t}(x^*) - \nabla f(y) \|_2^2 \notag \\
& \leq 2 \mathbb{E}_{i_t} \|\nabla f_{i_t}(x_t) - \nabla f_{i_t}(x^*) \|_2^2 + 2 \mathbb{E}_{i_t} \|\nabla f_{i_t}(y) - \nabla f_{i_t}(x^*) \|_2^2 \notag \\
& \leq 4 \beta (f(x_t) - f(x^*) + f(y) - f(x^*)) . \label{eq:SVRG2}
\end{aligned}$$ Also observe that
$$\mathbb{E}_{i_t} v_t^{\top}(x_t - x^*) = \nabla f(x_t)^{\top} (x_t - x^*) \geq f(x_t) - f(x^*) ,$$
and thus plugging this into
[\[eq:SVRG1\]](#eq:SVRG1){reference-type="eqref" reference="eq:SVRG1"}
together with [\[eq:SVRG2\]](#eq:SVRG2){reference-type="eqref"
reference="eq:SVRG2"} one obtains $$\begin{aligned}
\mathbb{E}_{i_t} \|x_{t+1} - x^*\|_2^2 & \leq & \|x_t - x^*\|_2^2 - 2 \eta (1 - 2 \beta \eta) (f(x_t) - f(x^*)) \\
& & + 4 \beta \eta^2 (f(y) - f(x^*)) .
\end{aligned}$$ Summing the above inequality over $t=1, \hdots, k$
yields $$\begin{aligned}
\mathbb{E}\|x_{k+1} - x^*\|_2^2 & \leq & \|x_1 - x^*\|_2^2 - 2 \eta (1 - 2 \beta \eta) \mathbb{E}\sum_{t=1}^k (f(x_t) - f(x^*)) \\
& & + 4 \beta \eta^2 k (f(y) - f(x^*)) .
\end{aligned}$$ Noting that $x_1 = y$ and that by $\alpha$-strong
convexity one has $f(x) - f(x^*) \geq \frac{\alpha}{2} \|x - x^*\|_2^2$,
one can rearrange the above display to obtain
$$\mathbb{E}f\left(\frac1{k} \sum_{t=1}^k x_t\right) - f(x^*) \leq \left(\frac{1}{\alpha \eta (1 - 2 \beta \eta) k} + \frac{2 \beta \eta}{1- 2\beta \eta} \right) (f(y) - f(x^*)) .$$
Using that $\eta = \frac{1}{10\beta}$ and $k = 20 \kappa$ finally yields
[\[eq:SVRG0\]](#eq:SVRG0){reference-type="eqref" reference="eq:SVRG0"}
which itself concludes the proof. ◻
:::
## Random coordinate descent
We assume throughout this section that $f$ is a convex and
differentiable function on $\mathbb{R}^n$, with a unique[^13] minimizer
$x^*$. We investigate one of the simplest possible scheme to optimize
$f$, the random coordinate descent (RCD) method. In the following we
denote $\nabla_i f(x) = \frac{\partial f}{\partial x_i} (x)$. RCD is
defined as follows, with an arbitrary initial point
$x_1 \in \mathbb{R}^n$,
$$x_{s+1} = x_s - \eta \nabla_{i_s} f(x_s) e_{i_s} ,$$ where $i_s$ is
drawn uniformly at random from $[n]$ (and independently of everything
else).
One can view RCD as SGD with the specific oracle
$\widetilde{g}(x) = n \nabla_{I} f(x) e_I$ where $I$ is drawn uniformly
at random from $[n]$. Clearly
$\mathbb{E}\widetilde{g}(x) = \nabla f(x)$, and furthermore
$$\mathbb{E}\|\widetilde{g}(x)\|_2^2 = \frac{1}{n}\sum_{i=1}^n \|n \nabla_{i} f(x) e_i\|_2^2 = n \|\nabla f(x)\|_2^2 .$$
Thus using Theorem [\[th:SMD\]](#th:SMD){reference-type="ref"
reference="th:SMD"} (with $\Phi(x) = \frac12 \|x\|_2^2$, that is S-MD
being SGD) one immediately obtains the following result.
::: theorem
Let $f$ be convex and $L$-Lipschitz on $\mathbb{R}^n$, then RCD with
$\eta = \frac{R}{L} \sqrt{\frac{2}{n t}}$ satisfies
$$\mathbb{E}f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - \min_{x \in \mathcal{X}} f(x) \leq R L \sqrt{\frac{2 n}{t}} .$$
:::
Somewhat unsurprisingly RCD requires $n$ times more iterations than
gradient descent to obtain the same accuracy. In the next section, we
will see that this statement can be greatly improved by taking into
account directional smoothness.
### RCD for coordinate-smooth optimization
We assume now directional smoothness for $f$, that is there exists
$\beta_1, \hdots, \beta_n$ such that for any
$i \in [n], x \in \mathbb{R}^n$ and $u \in \mathbb{R}$,
$$| \nabla_i f(x+u e_i) - \nabla_i f(x) | \leq \beta_i |u| .$$ If $f$ is
twice differentiable then this is equivalent to
$(\nabla^2 f(x))_{i,i} \leq \beta_i$. In particular, since the maximal
eigenvalue of a matrix is upper bounded by its trace, one can see that
the directional smoothness implies that $f$ is $\beta$-smooth with
$\beta \leq \sum_{i=1}^n \beta_i$. We now study the following
"aggressive\" RCD, where the step-sizes are of order of the inverse
smoothness:
$$x_{s+1} = x_s - \frac{1}{\beta_{i_s}} \nabla_{i_s} f(x_s) e_{i_s} .$$
Furthermore we study a more general sampling distribution than uniform,
precisely for $\gamma \geq 0$ we assume that $i_s$ is drawn
(independently) from the distribution $p_{\gamma}$ defined by
$$p_{\gamma}(i) = \frac{\beta_i^{\gamma}}{\sum_{j=1}^n \beta_j^{\gamma}}, i \in [n] .$$
This algorithm was proposed in [@Nes12], and we denote it by
RCD($\gamma$). Observe that, up to a preprocessing step of complexity
$O(n)$, one can sample from $p_{\gamma}$ in time $O(\log(n))$.
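A minimal sketch of RCD($\gamma$) (Python with NumPy); `grad_coord(x, i)` returning $\nabla_i f(x)$ is an assumed user-supplied function. For simplicity this sketch samples from $p_{\gamma}$ in time $O(n)$ per draw; the $O(\log(n))$ sampling mentioned above would precompute cumulative sums and use binary search.

```python
import numpy as np

def rcd_gamma(grad_coord, beta, x1, gamma, iters):
    """RCD(gamma) sketch: sample coordinate i with probability proportional
    to beta_i**gamma and take a step of size 1/beta_i along e_i."""
    rng = np.random.default_rng(0)
    p = beta**gamma / np.sum(beta**gamma)     # sampling distribution p_gamma
    x = np.array(x1, dtype=float)
    for _ in range(iters):
        i = rng.choice(beta.size, p=p)
        x[i] -= grad_coord(x, i) / beta[i]
    return x
```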
The following rate of convergence is derived in [@Nes12], using the dual
norms $\|\cdot\|_{[\gamma]}, \|\cdot\|_{[\gamma]}^*$ defined by
$$\|x\|_{[\gamma]} = \sqrt{\sum_{i=1}^n \beta_i^{\gamma} x_i^2} , \;\; \text{and} \;\; \|x\|_{[\gamma]}^* = \sqrt{\sum_{i=1}^n \frac1{\beta_i^{\gamma}} x_i^2} .$$
::: theorem
[]{#th:rcdgamma label="th:rcdgamma"} Let $f$ be convex and such that
$u \in \mathbb{R}\mapsto f(x + u e_i)$ is $\beta_i$-smooth for any
$i \in [n], x \in \mathbb{R}^n$. Then RCD($\gamma$) satisfies for
$t \geq 2$,
$$\mathbb{E}f(x_{t}) - f(x^*) \leq \frac{2 R_{1 - \gamma}^2(x_1) \sum_{i=1}^n \beta_i^{\gamma}}{t-1} ,$$
where
$$R_{1-\gamma}(x_1) = \sup_{x \in \mathbb{R}^n : f(x) \leq f(x_1)} \|x - x^*\|_{[1-\gamma]} .$$
:::
Recall from Theorem [\[th:gdsmooth\]](#th:gdsmooth){reference-type="ref"
reference="th:gdsmooth"} that in this context the basic gradient descent
attains a rate of $\beta \|x_1 - x^*\|_2^2 / t$ where
$\beta \leq \sum_{i=1}^n \beta_i$ (see the discussion above). Thus we
see that RCD($1$) greatly improves upon gradient descent for functions
where $\beta$ is of order of $\sum_{i=1}^n \beta_i$. Indeed in this case
both methods attain the same accuracy after a fixed number of
iterations, but the iterations of coordinate descent are potentially
much cheaper than the iterations of gradient descent.
::: proof
*Proof.* By applying
[\[eq:onestepofgd\]](#eq:onestepofgd){reference-type="eqref"
reference="eq:onestepofgd"} to the $\beta_i$-smooth function
$u \in \mathbb{R}\mapsto f(x + u e_i)$ one obtains
$$f\left(x - \frac{1}{\beta_i} \nabla_i f(x) e_i\right) - f(x) \leq - \frac{1}{2 \beta_i} (\nabla_i f(x))^2 .$$
We use this as follows: $$\begin{aligned}
\mathbb{E}_{i_s} f(x_{s+1}) - f(x_s)
& = & \sum_{i=1}^n p_{\gamma}(i) \left(f\left(x_s - \frac{1}{\beta_i} \nabla_i f(x_s) e_i\right) - f(x_s) \right) \\
& \leq & - \sum_{i=1}^n \frac{p_{\gamma}(i)}{2 \beta_i} (\nabla_i f(x_s))^2 \\
& = & - \frac{1}{2 \sum_{i=1}^n \beta_i^{\gamma}} \left(\|\nabla f(x_s)\|_{[1-\gamma]}^*\right)^2 .
\end{aligned}$$ Denote $\delta_s = \mathbb{E}f(x_s) - f(x^*)$. Observe
that the above calculation can be used to show that
$f(x_{s+1}) \leq f(x_s)$ and thus one has, by definition of
$R_{1-\gamma}(x_1)$, $$\begin{aligned}
\delta_s & \leq & \nabla f(x_s)^{\top} (x_s - x^*) \\
& \leq & \|x_s - x^*\|_{[1-\gamma]} \|\nabla f(x_s)\|_{[1-\gamma]}^* \\
& \leq & R_{1-\gamma}(x_1) \|\nabla f(x_s)\|_{[1-\gamma]}^* .
\end{aligned}$$ Thus putting together the above calculations one obtains
$$\delta_{s+1} \leq \delta_s - \frac{1}{2 R_{1 - \gamma}^2(x_1) \sum_{i=1}^n \beta_i^{\gamma} } \delta_s^2 .$$
The proof can be concluded with computations similar to those for Theorem
[\[th:gdsmooth\]](#th:gdsmooth){reference-type="ref"
reference="th:gdsmooth"}. ◻
:::
We discussed above the specific case of $\gamma = 1$. Both $\gamma=0$
and $\gamma=1/2$ also have an interesting behavior, and we refer to
[@Nes12] for more details. The latter paper also contains a discussion
of high probability results and potential acceleration à la Nesterov. We
also refer to [@RT12] for a discussion of RCD in a distributed setting.
### RCD for smooth and strongly convex optimization
If in addition to directional smoothness one also assumes strong
convexity, then RCD attains in fact a linear rate.
::: theorem
[]{#th:linearratercd label="th:linearratercd"} Let $\gamma \geq 0$. Let
$f$ be $\alpha$-strongly convex w.r.t. $\|\cdot\|_{[1-\gamma]}$, and
such that $u \in \mathbb{R}\mapsto f(x + u e_i)$ is $\beta_i$-smooth for
any $i \in [n], x \in \mathbb{R}^n$. Let
$\kappa_{\gamma} = \frac{\sum_{i=1}^n \beta_i^{\gamma}}{\alpha}$, then
RCD($\gamma$) satisfies
$$\mathbb{E}f(x_{t+1}) - f(x^*) \leq \left(1 - \frac1{\kappa_{\gamma}}\right)^t (f(x_1) - f(x^*)) .$$
:::
We use the following elementary lemma.
::: lemma
[]{#lem:tittrucnes label="lem:tittrucnes"} Let $f$ be $\alpha$-strongly
convex w.r.t. $\| \cdot\|$ on $\mathbb{R}^n$, then
$$f(x) - f(x^*) \leq \frac1{2\alpha} \|\nabla f(x)\|_*^2 .$$
:::
::: proof
*Proof.* By strong convexity, Hölder's inequality, and an elementary
calculation, $$\begin{aligned}
f(x) - f(y) & \leq & \nabla f(x)^{\top} (x-y) - \frac{\alpha}{2} \|x-y\|^2 \\
& \leq & \|\nabla f(x)\|_* \|x-y\| - \frac{\alpha}{2} \|x-y\|^2 \\
& \leq & \frac1{2\alpha} \|\nabla f(x)\|_*^2 ,
\end{aligned}$$ which concludes the proof by taking $y = x^*$. ◻
:::
We can now prove Theorem
[\[th:linearratercd\]](#th:linearratercd){reference-type="ref"
reference="th:linearratercd"}.
::: proof
*Proof.* In the proof of Theorem
[\[th:rcdgamma\]](#th:rcdgamma){reference-type="ref"
reference="th:rcdgamma"} we showed that
$$\delta_{s+1} \leq \delta_s - \frac{1}{2 \sum_{i=1}^n \beta_i^{\gamma}} \left(\|\nabla f(x_s)\|_{[1-\gamma]}^*\right)^2 .$$
On the other hand Lemma
[\[lem:tittrucnes\]](#lem:tittrucnes){reference-type="ref"
reference="lem:tittrucnes"} shows that
$$\left(\|\nabla f(x_s)\|_{[1-\gamma]}^*\right)^2 \geq 2 \alpha \delta_s .$$
The proof is concluded with straightforward calculations. ◻
:::
## Acceleration by randomization for saddle points
We explore now the use of randomness for saddle point computations. That
is we consider the context of Section
[5.2.1](#sec:sp){reference-type="ref" reference="sec:sp"} with a
stochastic oracle of the following form: given
$z=(x,y) \in \mathcal{X}\times \mathcal{Y}$ it outputs
$\widetilde{g}(z) = (\widetilde{g}_{\mathcal{X}}(x,y), \widetilde{g}_{\mathcal{Y}}(x,y))$
where
$\mathbb{E}\ (\widetilde{g}_{\mathcal{X}}(x,y) | x,y) \in \partial_x \varphi(x,y)$,
and
$\mathbb{E}\ (\widetilde{g}_{\mathcal{Y}}(x,y) | x,y) \in \partial_y (-\varphi(x,y))$.
Instead of using true subgradients as in SP-MD (see Section
[5.2.2](#sec:spmd){reference-type="ref" reference="sec:spmd"}) we use
here the outputs of the stochastic oracle. We refer to the resulting
algorithm as S-SP-MD (Stochastic Saddle Point Mirror Descent). Using the
same reasoning than in Section [6.1](#sec:smd){reference-type="ref"
reference="sec:smd"} and Section [5.2.2](#sec:spmd){reference-type="ref"
reference="sec:spmd"} one can derive the following theorem.
::: theorem
[]{#th:sspmd label="th:sspmd"} Assume that the stochastic oracle is such
that
$\mathbb{E}\left(\|\widetilde{g}_{\mathcal{X}}(x,y)\|_{\mathcal{X}}^* \right)^2 \leq B_{\mathcal{X}}^2$,
and
$\mathbb{E}\left(\|\widetilde{g}_{\mathcal{Y}}(x,y)\|_{\mathcal{Y}}^* \right)^2 \leq B_{\mathcal{Y}}^2$.
Then S-SP-MD with $a= \frac{B_{\mathcal{X}}}{R_{\mathcal{X}}}$,
$b=\frac{B_{\mathcal{Y}}}{R_{\mathcal{Y}}}$, and
$\eta=\sqrt{\frac{2}{t}}$ satisfies
$$\mathbb{E}\left( \max_{y \in \mathcal{Y}} \varphi\left( \frac1{t} \sum_{s=1}^t x_s,y \right) - \min_{x \in \mathcal{X}} \varphi\left(x, \frac1{t} \sum_{s=1}^t y_s \right) \right) \leq (R_{\mathcal{X}} B_{\mathcal{X}} + R_{\mathcal{Y}} B_{\mathcal{Y}}) \sqrt{\frac{2}{t}}.$$
:::
Using S-SP-MD we revisit the examples of Section
[5.2.4.2](#sec:spex2){reference-type="ref" reference="sec:spex2"} and
Section [5.2.4.3](#sec:spex3){reference-type="ref"
reference="sec:spex3"}. In both cases one has
$\varphi(x,y) = x^{\top} A y$ (with $A_i$ being the $i^{th}$ column of
$A$), and thus $\nabla_x \varphi(x,y) = Ay$ and
$\nabla_y \varphi(x,y) = A^{\top} x$.
**Matrix games.** Here $x \in \Delta_n$ and $y \in \Delta_m$. Thus there
is a quite natural stochastic oracle: $$\label{eq:oraclematrixgame}
\widetilde{g}_{\mathcal{X}}(x,y) = A_I, \; \text{where} \; I \in [m] \; \text{is drawn according to} \; y \in \Delta_m ,$$
and $\forall i \in [m]$, $$\label{eq:oraclematrixgame2}
\widetilde{g}_{\mathcal{Y}}(x,y)(i) = A_i(J), \; \text{where} \; J \in [n] \; \text{is drawn according to} \; x \in \Delta_n .$$
Clearly
$\|\widetilde{g}_{\mathcal{X}}(x,y)\|_{\infty} \leq \|A\|_{\mathrm{max}}$
and
$\|\widetilde{g}_{\mathcal{Y}}(x,y)\|_{\infty} \leq \|A\|_{\mathrm{max}}$,
which implies that S-SP-MD attains an $\varepsilon$-optimal pair of
points with
$O\left(\|A\|_{\mathrm{max}}^2 \log(n+m) / \varepsilon^2 \right)$
iterations. Furthermore the computational complexity of a step of
S-SP-MD is dominated by drawing the indices $I$ and $J$ which takes
$O(n + m)$. Thus overall the complexity of getting an
$\varepsilon$-optimal Nash equilibrium with S-SP-MD is
$O\left(\|A\|_{\mathrm{max}}^2 (n + m) \log(n+m) / \varepsilon^2 \right)$.
While the dependency on $\varepsilon$ is worse than for SP-MP (see
Section [5.2.4.2](#sec:spex2){reference-type="ref"
reference="sec:spex2"}), the dependencies on the dimensions is
$\widetilde{O}(n+m)$ instead of $\widetilde{O}(nm)$. In particular,
quite astonishingly, this is *sublinear* in the size of the matrix $A$.
The possibility of sublinear algorithms for this problem was first
observed in [@GK95].
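A minimal sketch of this stochastic oracle for matrix games (Python with NumPy), assuming $x \in \Delta_n$ and $y \in \Delta_m$ are given as probability vectors and $A$ is $n \times m$.

```python
import numpy as np

rng = np.random.default_rng(0)

def matrix_game_oracle(A, x, y):
    """Stochastic oracle for phi(x,y) = x^T A y on the simplices:
    g_X = A[:, I] with I ~ y is an unbiased estimate of A y, and
    g_Y = A[J, :] with J ~ x is an unbiased estimate of A^T x."""
    n, m = A.shape
    I = rng.choice(m, p=y)      # column index drawn according to y
    J = rng.choice(n, p=x)      # row index drawn according to x
    g_x = A[:, I]
    g_y = A[J, :]
    return g_x, g_y
```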
**Linear classification.** Here $x \in \mathrm{B}_{2,n}$ and
$y \in \Delta_m$. Thus the stochastic oracle for the $x$-subgradient can
be taken as in
[\[eq:oraclematrixgame\]](#eq:oraclematrixgame){reference-type="eqref"
reference="eq:oraclematrixgame"} but for the $y$-subgradient we modify
[\[eq:oraclematrixgame2\]](#eq:oraclematrixgame2){reference-type="eqref"
reference="eq:oraclematrixgame2"} as follows. For a vector $x$ we denote
by $x^2$ the vector such that $x^2(i) = x(i)^2$. For all $i \in [m]$,
$$\widetilde{g}_{\mathcal{Y}}(x,y)(i) = \frac{\|x\|_2^2}{x(J)} A_i(J), \; \text{where} \; J \in [n] \; \text{is drawn according to} \; \frac{x^2}{\|x\|_2^2} \in \Delta_n .$$
Note that one indeed has
$\mathbb{E}(\widetilde{g}_{\mathcal{Y}}(x,y)(i) | x,y) = \sum_{j=1}^n x(j) A_i(j) = (A^{\top} x)(i)$.
Furthermore $\|\widetilde{g}_{\mathcal{X}}(x,y)\|_2 \leq B$, and
$$\mathbb{E}(\|\widetilde{g}_{\mathcal{Y}}(x,y)\|_{\infty}^2 | x,y) = \sum_{j=1}^n \frac{x(j)^2}{\|x\|_2^2} \max_{i \in [m]} \left(\frac{\|x\|_2^2}{x(j)} A_i(j)\right)^2 \leq \sum_{j=1}^n \max_{i \in [m]} A_i(j)^2 .$$
Unfortunately this last term can be $O(n)$. However it turns out that
one can do a more careful analysis of mirror descent in terms of local
norms, which allows one to prove that the "local variance\" is
dimension-free. We refer to [@BC12] for more details on these local
norms, and to [@CHW12] for the specific details in the linear
classification situation.
## Convex relaxation and randomized rounding {#sec:convexrelaxation}
In this section we briefly discuss the concept of convex relaxation, and
the use of randomization to find approximate solutions. By now there is
an enormous literature on these topics, and we refer to [@Bar14] for
further pointers.
We study here the seminal example of $\mathrm{MAXCUT}$. This problem can
be described as follows. Let $A \in \mathbb{R}_+^{n \times n}$ be a
symmetric matrix of non-negative weights. The entry $A_{i,j}$ is
interpreted as a measure of the "dissimilarity\" between point $i$ and
point $j$. The goal is to find a partition of $[n]$ into two sets,
$S \subset [n]$ and $S^c$, so as to maximize the total dissimilarity
between the two groups: $\sum_{i \in S, j \in S^c} A_{i,j}$.
Equivalently $\mathrm{MAXCUT}$ corresponds to the following optimization
problem: $$\label{eq:maxcut1}
\max_{x \in \{-1,1\}^n} \frac12 \sum_{i,j =1}^n A_{i,j} (x_i - x_j)^2 .$$
Viewing $A$ as the (weighted) adjacency matrix of a graph, one can
rewrite [\[eq:maxcut1\]](#eq:maxcut1){reference-type="eqref"
reference="eq:maxcut1"} as follows, using the graph Laplacian $L=D-A$
where $D$ is the diagonal matrix with entries
$(\sum_{j=1}^n A_{i,j})_{i \in [n]}$, $$\label{eq:maxcut2}
\max_{x \in \{-1,1\}^n} x^{\top} L x .$$ It turns out that this
optimization problem is $\mathbf{NP}$-hard, that is the existence of a
polynomial time algorithm to solve
[\[eq:maxcut2\]](#eq:maxcut2){reference-type="eqref"
reference="eq:maxcut2"} would prove that $\mathbf{P} = \mathbf{NP}$. The
combinatorial difficulty of this problem stems from the hypercube
constraint. Indeed if one replaces $\{-1,1\}^n$ by the Euclidean sphere,
then one obtains an efficiently solvable problem (it is the problem of
computing the maximal eigenvalue of $L$).
We show now that, while
[\[eq:maxcut2\]](#eq:maxcut2){reference-type="eqref"
reference="eq:maxcut2"} is a difficult optimization problem, it is in
fact possible to find relatively good *approximate* solutions by using
the power of randomization. Let $\zeta$ be uniformly drawn on the
hypercube $\{-1,1\}^n$, then clearly
$$\mathbb{E}\ \zeta^{\top} L \zeta = \sum_{i,j=1, i \neq j}^n A_{i,j} \geq \frac{1}{2} \max_{x \in \{-1,1\}^n} x^{\top} L x .$$
This means that, on average, $\zeta$ is a $1/2$-approximate solution to
[\[eq:maxcut2\]](#eq:maxcut2){reference-type="eqref"
reference="eq:maxcut2"}. Furthermore it is immediate that the above
expectation bound implies that, with probability at least $\varepsilon$,
$\zeta$ is a $(1/2-\varepsilon)$-approximate solution. Thus by
repeatedly sampling uniformly from the hypercube one can get arbitrarily
close (with probability approaching $1$) to a $1/2$-approximation of
$\mathrm{MAXCUT}$.
Next we show that one can obtain an even better approximation ratio by
combining the power of convex optimization and randomization. This
approach was pioneered by [@GW95]. The Goemans-Williamson algorithm is
based on the following inequality
$$\max_{x \in \{-1,1\}^n} x^{\top} L x = \max_{x \in \{-1,1\}^n} \langle L, xx^{\top} \rangle \leq \max_{X \in \mathbb{S}_+^n, X_{i,i}=1, i \in [n]} \langle L, X \rangle .$$
The right hand side in the above display is known as the *convex (or
SDP) relaxation* of $\mathrm{MAXCUT}$. The convex relaxation is an SDP
and thus one can find its solution efficiently with Interior Point
Methods (see Section [5.3](#sec:IPM){reference-type="ref"
reference="sec:IPM"}). The following result states both the
Goemans-Williamson strategy and the corresponding approximation ratio.
::: theorem
[]{#th:GW label="th:GW"} Let $\Sigma$ be the solution to the SDP
relaxation of $\mathrm{MAXCUT}$. Let $\xi \sim \mathcal{N}(0, \Sigma)$
and $\zeta = \mathrm{sign}(\xi) \in \{-1,1\}^n$. Then
$$\mathbb{E}\ \zeta^{\top} L \zeta \geq 0.878 \max_{x \in \{-1,1\}^n} x^{\top} L x .$$
:::
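The rounding step can be sketched as follows (Python with NumPy); the SDP solution $\Sigma$ is assumed to have been computed beforehand, for instance with an interior point method as in Section 5.3, and repeating the sampling a few times and keeping the best cut can only improve on the expectation bound.

```python
import numpy as np

def gw_round(L, Sigma, trials, rng=None):
    """Goemans-Williamson rounding sketch: given the SDP solution Sigma
    (PSD with unit diagonal), sample xi ~ N(0, Sigma), take zeta = sign(xi),
    and keep the best cut value zeta^T L zeta over several trials."""
    rng = rng or np.random.default_rng(0)
    n = Sigma.shape[0]
    # factor Sigma = V V^T (small ridge added for numerical safety)
    V = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n))
    best_val, best_zeta = -np.inf, None
    for _ in range(trials):
        zeta = np.sign(V @ rng.standard_normal(n))
        zeta[zeta == 0] = 1.0
        val = zeta @ L @ zeta
        if val > best_val:
            best_val, best_zeta = val, zeta
    return best_zeta, best_val
```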
The proof of this result is based on the following elementary geometric
lemma.
::: lemma
[]{#lem:GW label="lem:GW"} Let $\xi \sim \mathcal{N}(0,\Sigma)$ with
$\Sigma_{i,i}=1$ for $i \in [n]$, and $\zeta = \mathrm{sign}(\xi)$. Then
$$\mathbb{E}\ \zeta_i \zeta_j = \frac{2}{\pi} \mathrm{arcsin} \left(\Sigma_{i,j}\right) .$$
:::
::: proof
*Proof.* Let $V \in \mathbb{R}^{n \times n}$ (with $i^{th}$ row
$V_i^{\top}$) be such that $\Sigma = V V^{\top}$. Note that since
$\Sigma_{i,i}=1$ one has $\|V_i\|_2 = 1$ (remark also that necessarily
$|\Sigma_{i,j}| \leq 1$, which will be important in the proof of Theorem
[\[th:GW\]](#th:GW){reference-type="ref" reference="th:GW"}). Let
$\varepsilon\sim \mathcal{N}(0,\mathrm{I}_n)$ be such that
$\xi = V \varepsilon$. Then
$\zeta_i = \mathrm{sign}(V_i^{\top} \varepsilon)$, and in particular
$$\begin{aligned}
\mathbb{E}\ \zeta_i \zeta_j & = & \mathbb{P}(V_i^{\top} \varepsilon\geq 0 \ \text{and} \ V_j^{\top} \varepsilon\geq 0) + \mathbb{P}(V_i^{\top} \varepsilon\leq 0 \ \text{and} \ V_j^{\top} \varepsilon\leq 0) \\
& & - \mathbb{P}(V_i^{\top} \varepsilon\geq 0 \ \text{and} \ V_j^{\top} \varepsilon< 0) - \mathbb{P}(V_i^{\top} \varepsilon< 0 \ \text{and} \ V_j^{\top} \varepsilon\geq 0) \\
& = & 2 \mathbb{P}(V_i^{\top} \varepsilon\geq 0 \ \text{and} \ V_j^{\top} \varepsilon\geq 0) - 2 \mathbb{P}(V_i^{\top} \varepsilon\geq 0 \ \text{and} \ V_j^{\top} \varepsilon< 0) \\
& = & \mathbb{P}(V_j^{\top} \varepsilon\geq 0 | V_i^{\top} \varepsilon\geq 0) - \mathbb{P}(V_j^{\top} \varepsilon< 0 | V_i^{\top} \varepsilon\geq 0) \\
& = & 1 - 2 \mathbb{P}(V_j^{\top} \varepsilon< 0 | V_i^{\top} \varepsilon\geq 0).
\end{aligned}$$ Now a quick picture shows that
$\mathbb{P}(V_j^{\top} \varepsilon< 0 | V_i^{\top} \varepsilon\geq 0) = \frac{1}{\pi} \mathrm{arccos}(V_i^{\top} V_j)$
(recall that $\varepsilon/ \|\varepsilon\|_2$ is uniform on the
Euclidean sphere). Using the fact that $V_i^{\top} V_j = \Sigma_{i,j}$
and $\mathrm{arccos}(x) = \frac{\pi}{2} - \mathrm{arcsin}(x)$ conclude
the proof. ◻
:::
We can now get to the proof of Theorem
[\[th:GW\]](#th:GW){reference-type="ref" reference="th:GW"}.
::: proof
*Proof.* We shall use the following inequality: $$\label{eq:dependsonL}
1 - \frac{2}{\pi} \mathrm{arcsin}(t) \geq 0.878 (1-t), \ \forall t \in [-1,1] .$$
Also remark that for $X \in \mathbb{R}^{n \times n}$ such that
$X_{i,i}=1$, one has
$$\langle L, X \rangle = \sum_{i,j=1}^n A_{i,j} (1 - X_{i,j}) ,$$ and in
particular for $x \in \{-1,1\}^n$,
$x^{\top} L x = \sum_{i,j=1}^n A_{i,j} (1 - x_i x_j)$. Thus, using Lemma
[\[lem:GW\]](#lem:GW){reference-type="ref" reference="lem:GW"}, and the
facts that $A_{i,j} \geq 0$ and $|\Sigma_{i,j}| \leq 1$ (see the proof
of Lemma [\[lem:GW\]](#lem:GW){reference-type="ref"
reference="lem:GW"}), one has $$\begin{aligned}
\mathbb{E}\ \zeta^{\top} L \zeta
& = & \sum_{i,j=1}^n A_{i,j} \left(1- \frac{2}{\pi} \mathrm{arcsin} \left(\Sigma_{i,j}\right)\right) \\
& \geq & 0.878 \sum_{i,j=1}^n A_{i,j} \left(1- \Sigma_{i,j}\right) \\
& = & 0.878 \ \max_{X \in \mathbb{S}_+^n, X_{i,i}=1, i \in [n]} \langle L, X \rangle \\
& \geq & 0.878 \max_{x \in \{-1,1\}^n} x^{\top} L x .
\end{aligned}$$ ◻
:::
Theorem [\[th:GW\]](#th:GW){reference-type="ref" reference="th:GW"}
depends on the form of the Laplacian $L$ (insofar as
[\[eq:dependsonL\]](#eq:dependsonL){reference-type="eqref"
reference="eq:dependsonL"} was used). We show next a result from
[@Nes97] that applies to any positive semi-definite matrix, at the
expense of the constant of approximation. Precisely we are now
interested in the following optimization problem: $$\label{eq:quad}
\max_{x \in \{-1,1\}^n} x^{\top} B x .$$ The corresponding SDP
relaxation is
$$\max_{X \in \mathbb{S}_+^n, X_{i,i}=1, i \in [n]} \langle B, X \rangle .$$
::: theorem
Let $\Sigma$ be the solution to the SDP relaxation of
[\[eq:quad\]](#eq:quad){reference-type="eqref" reference="eq:quad"}. Let
$\xi \sim \mathcal{N}(0, \Sigma)$ and
$\zeta = \mathrm{sign}(\xi) \in \{-1,1\}^n$. Then
$$\mathbb{E}\ \zeta^{\top} B \zeta \geq \frac{2}{\pi} \max_{x \in \{-1,1\}^n} x^{\top} B x .$$
:::
::: proof
*Proof.* Lemma [\[lem:GW\]](#lem:GW){reference-type="ref"
reference="lem:GW"} shows that
$$\mathbb{E}\ \zeta^{\top} B \zeta = \sum_{i,j=1}^n B_{i,j} \frac{2}{\pi} \mathrm{arcsin} \left(\Sigma_{i,j}\right) = \frac{2}{\pi} \langle B, \mathrm{arcsin}(\Sigma) \rangle .$$
Thus to prove the result it is enough to show that
$\langle B, \mathrm{arcsin}(\Sigma) \rangle \geq \langle B, \Sigma \rangle$,
which is itself implied by $\mathrm{arcsin}(\Sigma) \succeq \Sigma$ (the
implication is true since $B$ is positive semi-definite, just write the
eigendecomposition). Now we prove the latter inequality via a Taylor
expansion. Indeed recall that $|\Sigma_{i,j}| \leq 1$ and thus denoting
by $A^{\circ \alpha}$ the matrix where the entries are raised to the
power $\alpha$ one has
$$\mathrm{arcsin}(\Sigma) = \sum_{k=0}^{+\infty} \frac{{2k \choose k}}{4^k (2k +1)} \Sigma^{\circ (2k+1)} = \Sigma + \sum_{k=1}^{+\infty} \frac{{2k \choose k}}{4^k (2k +1)} \Sigma^{\circ (2k+1)}.$$
Finally one can conclude using the fact that if $A,B \succeq 0$ then
$A \circ B \succeq 0$. This can be seen by writing $A= V V^{\top}$,
$B=U U^{\top}$, and thus
$$(A \circ B)_{i,j} = V_i^{\top} V_j U_i^{\top} U_j = \mathrm{Tr}(U_j V_j^{\top} V_i U_i^{\top}) = \langle V_i U_i^{\top}, V_j U_j^{\top} \rangle .$$
In other words $A \circ B$ is a Gram matrix and thus it is positive
semi-definite. ◻
:::
## Random walk based methods {#sec:rwmethod}
Randomization naturally suggests itself in the center of gravity method
(see Section [2.1](#sec:gravity){reference-type="ref"
reference="sec:gravity"}), as a way to circumvent the exact calculation
of the center of gravity. This idea was proposed and developed in
[@BerVem04]. We give below a condensed version of the main ideas of this
paper.
Assuming that one can draw independent points $X_1, \hdots, X_N$
uniformly at random from the current set $\mathcal{S}_t$, one could
replace $c_t$ by $\hat{c}_t = \frac{1}{N} \sum_{i=1}^N X_i$. [@BerVem04]
proved the following generalization of Lemma
[\[lem:Gru60\]](#lem:Gru60){reference-type="ref" reference="lem:Gru60"}
for the situation where one cuts a convex set through a point close to the
center of gravity. Recall that a convex set $\mathcal{K}$ is in
isotropic position if $\mathbb{E}X = 0$ and
$\mathbb{E}X X^{\top} = \mathrm{I}_n$, where $X$ is a random variable
drawn uniformly at random from $\mathcal{K}$. Note in particular that
this implies $\mathbb{E}\|X\|_2^2 = n$. We also say that $\mathcal{K}$
is in near-isotropic position if
$\frac{1}{2} \mathrm{I}_n \preceq \mathbb{E}X X^{\top} \preceq \frac3{2} \mathrm{I}_n$.
::: lemma
[]{#lem:BerVem04 label="lem:BerVem04"} Let $\mathcal{K}$ be a convex set
in isotropic position. Then for any $w \in \mathbb{R}^n, w \neq 0$,
$z \in \mathbb{R}^n$, one has
$$\mathrm{Vol} \left( \mathcal{K}\cap \{x \in \mathbb{R}^n : (x-z)^{\top} w \geq 0\} \right) \geq \left(\frac{1}{e} - \|z\|_2\right) \mathrm{Vol} (\mathcal{K}) .$$
:::
Thus if one can ensure that $\mathcal{S}_t$ is in (near) isotropic
position, and $\|c_t - \hat{c}_t\|_2$ is small (say smaller than $0.1$),
then the randomized center of gravity method (which replaces $c_t$ by
$\hat{c}_t$) will converge at the same speed as the original center of
gravity method.
Assuming that $\mathcal{S}_t$ is in isotropic position one immediately
obtains $\mathbb{E}\|c_t - \hat{c}_t\|_2^2 = \frac{n}{N}$, and thus by
Chebyshev's inequality one has
$\mathbb{P}(\|c_t - \hat{c}_t\|_2 > 0.1) \leq 100 \frac{n}{N}$. In other
words with $N = O(n)$ one can ensure that the randomized center of
gravity method makes progress on a constant fraction of the iterations
(to ensure progress at every step one would need a larger value of $N$
because of a union bound, but this is unnecessary).
Let us now consider the issue of putting $\mathcal{S}_t$ in
near-isotropic position. Let
$\hat{\Sigma}_t = \frac1{N} \sum_{i=1}^N (X_i-\hat{c}_t) (X_i-\hat{c}_t)^{\top}$.
[@Rud99] showed that as long as $N= \widetilde{\Omega}(n)$, one has with
high probability (say at least probability $1-1/n^2$) that the set
$\hat{\Sigma}_t^{-1/2} (\mathcal{S}_t - \hat{c}_t)$ is in near-isotropic
position.
Thus it only remains to explain how to sample from a near-isotropic
convex set $\mathcal{K}$. This is where random walk ideas come into the
picture. The hit-and-run walk[^14] is described as follows: at a point
$x \in \mathcal{K}$, let $\mathcal{L}$ be a line that goes through $x$
in a direction taken uniformly at random, then move to a point chosen
uniformly at random in $\mathcal{L}\cap \mathcal{K}$. [@Lov98] showed
that if the starting point of the hit-and-run walk is chosen from a
distribution "close enough\" to the uniform distribution on
$\mathcal{K}$, then after $O(n^3)$ steps the distribution of the last
point is $\varepsilon$ away (in total variation) from the uniform
distribution on $\mathcal{K}$. In the randomized center of gravity
method one can obtain a good initial distribution for $\mathcal{S}_t$ by
using the distribution that was obtained for $\mathcal{S}_{t-1}$. In
order to initialize the entire process correctly we start here with
$\mathcal{S}_1 = [-L, L]^n \supset \mathcal{X}$ (in Section
[2.1](#sec:gravity){reference-type="ref" reference="sec:gravity"} we
used $\mathcal{S}_1 = \mathcal{X}$), and thus we also have to use a
*separation oracle* at iterations where $\hat{c}_t \not\in \mathcal{X}$,
just like we did for the ellipsoid method (see Section
[2.2](#sec:ellipsoid){reference-type="ref" reference="sec:ellipsoid"}).
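A minimal sketch of one hit-and-run step (Python with NumPy), assuming a membership oracle for $\mathcal{K}$ and that the chord through the current point has length at most some known `radius`; the endpoints of the chord are located by bisection on the membership oracle.

```python
import numpy as np

def hit_and_run_step(x, membership, rng, radius=1e3, tol=1e-8):
    """One hit-and-run step: pick a uniformly random direction, find the chord
    K ∩ {x + t*d : t in R} by bisection, and move to a uniform point on it.
    `membership(y)` returns True iff y is in K; x is assumed to lie in K."""
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)

    def boundary(sign):
        # largest step t <= radius with x + sign*t*d still in K (K is convex)
        lo, hi = 0.0, radius
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if membership(x + sign * mid * d):
                lo = mid
            else:
                hi = mid
        return lo

    t_plus, t_minus = boundary(+1.0), boundary(-1.0)
    t = rng.uniform(-t_minus, t_plus)
    return x + t * d
```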
Wrapping up the above discussion, we showed (informally) that to attain
an $\varepsilon$-optimal point with the randomized center of gravity
method one needs: $\widetilde{O}(n)$ iterations, each iteration
requires $\widetilde{O}(n)$ random samples from $\mathcal{S}_t$ (in
order to put it in isotropic position) as well as a call to either the
separation oracle or the first order oracle, and each sample costs
$\widetilde{O}(n^3)$ steps of the random walk. Thus overall one needs
$\widetilde{O}(n)$ calls to the separation oracle and the first order
oracle, as well as $\widetilde{O}(n^5)$ steps of the random walk.
::: acknowledgements
This text grew out of lectures given at Princeton University in 2013 and
2014. I would like to thank Mike Jordan for his support in this project.
My gratitude goes to the four reviewers, and especially the
non-anonymous referee Francis Bach, whose comments have greatly helped
to situate this monograph in the vast optimization literature. Finally I
am thankful to Philippe Rigollet for suggesting the new title (a
previous version of the manuscript was titled "Theory of Convex
Optimization for Machine Learning\"), and to Yin-Tat Lee for many
insightful discussions about cutting-plane methods.
:::
[^1]: Note that this trick does not work in the context of Chapter
[6](#rand){reference-type="ref" reference="rand"}.
[^2]: As a warm-up we assume in this section that $\mathcal{X}$ is
known. It should be clear from the arguments in the next section
that in fact the same algorithm would work if initialized with
$\mathcal{S}_1 \supset \mathcal{X}$.
[^3]: Of course the computational complexity remains at least linear in
the dimension since one needs to manipulate gradients.
[^4]: In the optimization literature the term "descent\" is reserved for
methods such that $f(x_{t+1}) \leq f(x_t)$. In that sense the
projected subgradient descent is not a descent method.
[^5]: Observe however that the quantities $R$ and $L$ may dependent on
the dimension, see Chapter [4](#mirror){reference-type="ref"
reference="mirror"} for more on this.
[^6]: The last step in the sequence of implications can be improved by
taking $\delta_1$ into account. Indeed one can easily show with
[\[eq:defaltsmooth\]](#eq:defaltsmooth){reference-type="eqref"
reference="eq:defaltsmooth"} that
$\delta_1 \leq \frac{1}{4 \omega}$. This improves the rate of
Theorem [\[th:gdsmooth\]](#th:gdsmooth){reference-type="ref"
reference="th:gdsmooth"} from $\frac{2 \beta \|x_1 - x^*\|^2}{t-1}$
to $\frac{2 \beta \|x_1 - x^*\|^2}{t+3}$.
[^7]: Assumption (ii) can be relaxed in some cases, see for example
[@ABL14].
[^8]: Basically mirror prox allows for a smooth vector field point of
view (see Section [4.6](#sec:vectorfield){reference-type="ref"
reference="sec:vectorfield"}), while mirror descent does not.
[^9]: We restrict to unconstrained minimization for sake of simplicity.
One can extend the discussion to constrained minimization by using
ideas from Section [3.2](#sec:gdsmooth){reference-type="ref"
reference="sec:gdsmooth"}.
[^10]: Observe that the duality gap is the sum of the primal gap
$\max_{y \in \mathcal{Y}} \varphi(\widetilde{x},y) - \varphi(x^*,y^*)$
and the dual gap
$\varphi(x^*,y^*) - \min_{x \in \mathcal{X}} \varphi(x, \widetilde{y})$.
[^11]: We assume differentiability only for sake of notation here.
[^12]: While being true in general this statement does not say anything
about specific functions/oracles. For example it was shown in
[@BM13] that acceleration can be obtained for the square loss and
the logistic loss.
[^13]: Uniqueness is only assumed for sake of notation.
[^14]: Other random walks are known for this problem but hit-and-run is
the one with the sharpest theoretical guarantees. Curiously we note
that one of those walks is closely connected to projected gradient
descent, see [@BEL15].
# Building Abstractions with Procedures {#Chapter 1}
> The acts of the mind, wherein it exerts its power over simple ideas,
> are chiefly these three: 1. Combining several simple ideas into one
> compound one, and thus all complex ideas are made. 2. The second is
> bringing two ideas, whether simple or complex, together, and setting
> them by one another so as to take a view of them at once, without
> uniting them into one, by which it gets all its ideas of relations. 3.
> The third is separating them from all other ideas that accompany them
> in their real existence: this is called abstraction, and thus all its
> general ideas are made.
>
> ---John Locke, *An Essay Concerning Human Understanding* (1690)
We are about to study the idea of a
*computational process*. Computational processes are abstract beings
that inhabit computers. As they evolve, processes manipulate other
abstract things called *data*. The evolution of a process is directed by
a pattern of rules called a *program*. People create programs to direct
processes. In effect, we conjure the spirits of the computer with our
spells.
A computational process is indeed much like a sorcerer's idea of a
spirit. It cannot be seen or touched. It is not composed of matter at
all. However, it is very real. It can perform intellectual work. It can
answer questions. It can affect the world by disbursing money at a bank
or by controlling a robot arm in a factory. The programs we use to
conjure processes are like a sorcerer's spells. They are carefully
composed from symbolic expressions in arcane and esoteric *programming
languages* that prescribe the tasks we want our processes to perform.
A computational process, in a correctly working computer, executes
programs precisely and accurately. Thus, like the sorcerer's apprentice,
novice programmers must learn to understand and to anticipate the
consequences of their conjuring. Even small errors (usually called
*bugs* or *glitches*) in programs can have complex and unanticipated
consequences.
Fortunately, learning to program is considerably less dangerous than
learning sorcery, because the spirits we deal with are conveniently
contained in a secure way. Real-world programming, however, requires
care, expertise, and wisdom. A small bug in a computer-aided design
program, for example, can lead to the catastrophic collapse of an
airplane or a dam or the self-destruction of an industrial robot.
Master software engineers have the ability to organize programs so that
they can be reasonably sure that the resulting processes will perform
the tasks intended. They can visualize the behavior of their systems in
advance. They know how to structure programs so that unanticipated
problems do not lead to catastrophic consequences, and when problems do
arise, they can *debug* their programs. Well-designed computational
systems, like well-designed automobiles or nuclear reactors, are
designed in a modular manner, so that the parts can be constructed,
replaced, and debugged separately.
#### Programming in Lisp {#programming-in-lisp .unnumbered}
We need an appropriate language for describing processes, and we will
use for this purpose the programming language Lisp. Just as our everyday
thoughts are usually expressed in our natural language (such as English,
French, or Japanese), and descriptions of quantitative phenomena are
expressed with mathematical notations, our procedural thoughts will be
expressed in Lisp. Lisp was invented in the late 1950s as a formalism
for reasoning about the use of certain kinds of logical expressions,
called *recursion equations*, as a model for computation. The language
was conceived by John McCarthy and is based on his paper "Recursive
Functions of Symbolic Expressions and Their Computation by Machine"
([McCarthy 1960](#McCarthy 1960)).
Despite its inception as a mathematical formalism, Lisp is a practical
programming language. A Lisp *interpreter* is a machine that carries out
processes described in the Lisp language. The first Lisp interpreter was
implemented by McCarthy with the help of colleagues and students in the
Artificial Intelligence Group of the mit Research
Laboratory of Electronics and in the mit Computation
Center.[^1] Lisp, whose name is an acronym for LISt Processing, was
designed to provide symbol-manipulating capabilities for attacking
programming problems such as the symbolic differentiation and
integration of algebraic expressions. It included for this purpose new
data objects known as atoms and lists, which most strikingly set it
apart from all other languages of the period.
Lisp was not the product of a concerted design effort. Instead, it
evolved informally in an experimental manner in response to users' needs
and to pragmatic implementation considerations. Lisp's informal
evolution has continued through the years, and the community of Lisp
users has traditionally resisted attempts to promulgate any "official"
definition of the language. This evolution, together with the
flexibility and elegance of the initial conception, has enabled Lisp,
which is the second oldest language in widespread use today (only
Fortran is older), to continually adapt to encompass the most modern
ideas about program design. Thus, Lisp is by now a family of dialects,
which, while sharing most of the original features, may differ from one
another in significant ways. The dialect of Lisp used in this book is
called Scheme.[^2]
Because of its experimental character and its emphasis on symbol
manipulation, Lisp was at first very inefficient for numerical
computations, at least in comparison with Fortran. Over the years,
however, Lisp compilers have been developed that translate programs into
machine code that can perform numerical computations reasonably
efficiently. And for special applications, Lisp has been used with great
effectiveness.[^3] Although Lisp has not yet overcome its old reputation
as hopelessly inefficient, Lisp is now used in many applications where
efficiency is not the central concern. For example, Lisp has become a
language of choice for operating-system shell languages and for
extension languages for editors and computer-aided design systems.
If Lisp is not a mainstream language, why are we using it as the
framework for our discussion of programming? Because the language
possesses unique features that make it an excellent medium for studying
important programming constructs and data structures and for relating
them to the linguistic features that support them. The most significant
of these features is the fact that Lisp descriptions of processes,
called *procedures*, can themselves be represented and manipulated as
Lisp data. The importance of this is that there are powerful
program-design techniques that rely on the ability to blur the
traditional distinction between "passive" data and "active" processes.
As we shall discover, Lisp's flexibility in handling procedures as data
makes it one of the most convenient languages in existence for exploring
these techniques. The ability to represent procedures as data also makes
Lisp an excellent language for writing programs that must manipulate
other programs as data, such as the interpreters and compilers that
support computer languages. Above and beyond these considerations,
programming in Lisp is great fun.
## The Elements of Programming {#Section 1.1}
A powerful programming language is more than just a means for
instructing a computer to perform tasks. The language also serves as a
framework within which we organize our ideas about processes. Thus, when
we describe a language, we should pay particular attention to the means
that the language provides for combining simple ideas to form more
complex ideas. Every powerful language has three mechanisms for
accomplishing this:
- **primitive expressions**, which represent the simplest entities the
language is concerned with,
- **means of combination**, by which compound elements are built from
simpler ones, and
- **means of abstraction**, by which compound elements can be named
and manipulated as units.
In programming, we deal with two kinds of elements: procedures and data.
(Later we will discover that they are really not so distinct.)
Informally, data is "stuff" that we want to manipulate, and
procedures are descriptions of the rules for manipulating the data.
Thus, any powerful programming language should be able to describe
primitive data and primitive procedures and should have methods for
combining and abstracting procedures and data.
In this chapter we will deal only with simple numerical data so that we
can focus on the rules for building procedures.[^4] In later chapters we
will see that these same rules allow us to build procedures to
manipulate compound data as well.
### Expressions {#Section 1.1.1}
One easy way to get started at programming is to examine some typical
interactions with an interpreter for the Scheme dialect of Lisp. Imagine
that you are sitting at a computer terminal. You type an *expression*,
and the interpreter responds by displaying the result of its
*evaluating* that expression.
One kind of primitive expression you might type is a number. (More
precisely, the expression that you type consists of the numerals that
represent the number in base 10.) If you present Lisp with a number
::: scheme
486
:::
the interpreter will respond by printing[^5]
::: scheme
*486*
:::
Expressions representing numbers may be combined with an expression
representing a primitive procedure (such as `+` or `*`) to form a
compound expression that represents the application of the procedure to
those numbers. For example:
::: scheme
(+ 137 349) *486*
:::
::: scheme
(- 1000 334) *666*
:::
::: scheme
(\* 5 99) *495*
:::
::: scheme
(/ 10 5) *2*
:::
::: scheme
(+ 2.7 10) *12.7*
:::
Expressions such as these, formed by delimiting a list of expressions
within parentheses in order to denote procedure application, are called
*combinations*. The leftmost element in the list is called the
*operator*, and the other elements are called *operands*. The value of a
combination is obtained by applying the procedure specified by the
operator to the *arguments* that are the values of the operands.
The convention of placing the operator to the left of the operands is
known as *prefix notation*, and it may be somewhat confusing at first
because it departs significantly from the customary mathematical
convention. Prefix notation has several advantages, however. One of them
is that it can accommodate procedures that may take an arbitrary number
of arguments, as in the following examples:
::: scheme
(+ 21 35 12 7) *75*
:::
::: scheme
(\* 25 4 12) *1200*
:::
No ambiguity can arise, because the operator is always the leftmost
element and the entire combination is delimited by the parentheses.
A second advantage of prefix notation is that it extends in a
straightforward way to allow combinations to be *nested*, that is, to
have combinations whose elements are themselves combinations:
::: scheme
(+ (\* 3 5) (- 10 6)) *19*
:::
There is no limit (in principle) to the depth of such nesting and to the
overall complexity of the expressions that the Lisp interpreter can
evaluate. It is we humans who get confused by still relatively simple
expressions such as
::: scheme
(+ (\* 3 (+ (\* 2 4) (+ 3 5))) (+ (- 10 7) 6))
:::
which the interpreter would readily evaluate to be 57. We can help
ourselves by writing such an expression in the form
::: scheme
(+ (\* 3 (+ (\* 2 4) (+ 3 5))) (+ (- 10 7) 6))
:::
following a formatting convention known as *pretty-printing*, in which
each long combination is written so that the operands are aligned
vertically. The resulting indentations display clearly the structure of
the expression.[^6]
Even with complex expressions, the interpreter always operates in the
same basic cycle: It reads an expression from the terminal, evaluates
the expression, and prints the result. This mode of operation is often
expressed by saying that the interpreter runs in a *read-eval-print
loop*. Observe in particular that it is not necessary to explicitly
instruct the interpreter to print the value of the expression.[^7]
### Naming and the Environment {#Section 1.1.2}
A critical aspect of a programming language is the means it provides for
using names to refer to computational objects. We say that the name
identifies a *variable* whose *value* is the object.
In the Scheme dialect of Lisp, we name things with `define`. Typing
::: scheme
(define size 2)
:::
causes the interpreter to associate the value 2 with the name
`size`.[^8] Once the name `size` has been associated with the number 2,
we can refer to the value 2 by name:
::: scheme
size *2*
:::
::: scheme
(\* 5 size) *10*
:::
Here are further examples of the use of `define`:
::: scheme
(define pi 3.14159)
(define radius 10)
(\* pi (\* radius radius)) *314.159*
(define circumference (\* 2 pi radius))
circumference *62.8318*
:::
`define` is our language's simplest means of abstraction, for it allows
us to use simple names to refer to the results of compound operations,
such as the `circumference` computed above. In general, computational
objects may have very complex structures, and it would be extremely
inconvenient to have to remember and repeat their details each time we
want to use them. Indeed, complex programs are constructed by building,
step by step, computational objects of increasing complexity. The
interpreter makes this step-by-step program construction particularly
convenient because name-object associations can be created incrementally
in successive interactions. This feature encourages the incremental
development and testing of programs and is largely responsible for the
fact that a Lisp program usually consists of a large number of
relatively simple procedures.
It should be clear that the possibility of associating values with
symbols and later retrieving them means that the interpreter must
maintain some sort of memory that keeps track of the name-object pairs.
This memory is called the *environment* (more precisely the *global
environment*, since we will see later that a computation may involve a
number of different environments).[^9]
### Evaluating Combinations {#Section 1.1.3}
One of our goals in this chapter is to isolate issues about thinking
procedurally. As a case in point, let us consider that, in evaluating
combinations, the interpreter is itself following a procedure.
To evaluate a combination, do the following:
1. Evaluate the subexpressions of the combination.
2. Apply the procedure that is the value of the leftmost subexpression
(the operator) to the arguments that are the values of the other
subexpressions (the operands).
Even this simple rule illustrates some important points about processes
in general. First, observe that the first step dictates that in order to
accomplish the evaluation process for a combination we must first
perform the evaluation process on each element of the combination. Thus,
the evaluation rule is *recursive* in nature; that is, it includes, as
one of its steps, the need to invoke the rule itself.[^10]
Notice how succinctly the idea of recursion can be used to express what,
in the case of a deeply nested combination, would otherwise be viewed as
a rather complicated process. For example, evaluating
::: scheme
(\* (+ 2 (\* 4 6)) (+ 3 5 7))
:::
requires that the evaluation rule be applied to four different
combinations. We can obtain a picture of this process by representing
the combination in the form of a tree, as shown in [Figure
1.1](#Figure 1.1). Each combination is represented by a node with
branches corresponding to the operator and the operands of the
combination stemming from it. The terminal nodes (that is, nodes with no
branches stemming from them) represent either operators or numbers.
Viewing evaluation in terms of the tree, we can imagine that the values
of the operands percolate upward, starting from the terminal nodes and
then combining at higher and higher levels. In general, we shall see
that recursion is a very powerful technique for dealing with
hierarchical, treelike objects. In fact, the "percolate values upward"
form of the evaluation rule is an example of a general kind of process
known as *tree accumulation*.
[]{#Figure 1.1 label="Figure 1.1"}
![image](fig/chap1/Fig1.1g.pdf){width="31mm"}
> **Figure 1.1:** Tree representation, showing the value of each
> subcombination.
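For readers viewing the text without the figure, the same "percolation" can be written out by hand. The annotated trace below is ours, not part of the original text; each line is a subcombination together with the value it contributes:

::: scheme
(\* 4 6) *24*
(+ 2 24) *26*
(+ 3 5 7) *15*
(\* 26 15) *390*
:::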
Next, observe that the repeated application of the first step brings us
to the point where we need to evaluate, not combinations, but primitive
expressions such as numerals, built-in operators, or other names. We
take care of the primitive cases by stipulating that
- the values of numerals are the numbers that they name,
- the values of built-in operators are the machine instruction
sequences that carry out the corresponding operations, and
- the values of other names are the objects associated with those
names in the environment.
We may regard the second rule as a special case of the third one by
stipulating that symbols such as `+` and `*` are also included in the
global environment, and are associated with the sequences of machine
instructions that are their "values." The key point to notice is the
role of the environment in determining the meaning of the symbols in
expressions. In an interactive language such as Lisp, it is meaningless
to speak of the value of an expression such as `(+ x 1)` without
specifying any information about the environment that would provide a
meaning for the symbol `x` (or even for the symbol `+`). As we shall see
in [Chapter 3](#Chapter 3), the general notion of the environment as
providing a context in which evaluation takes place will play an
important role in our understanding of program execution.
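As a tiny illustration of this point (an example of ours, not the book's), the same kind of expression acquires a definite value as soon as the environment supplies the needed binding:

::: scheme
(define x 3)
(+ x 1) *4*
:::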
Notice that the evaluation rule given above does not handle definitions.
For instance, evaluating `(define x 3)` does not apply `define` to two
arguments, one of which is the value of the symbol `x` and the other of
which is 3, since the purpose of the `define` is precisely to associate
`x` with a value. (That is, `(define x 3)` is not a combination.)
Such exceptions to the general evaluation rule are called *special
forms*. `define` is the only example of a special form that we have seen
so far, but we will meet others shortly. Each special form has its own
evaluation rule. The various kinds of expressions (each with its
associated evaluation rule) constitute the syntax of the programming
language. In comparison with most other programming languages, Lisp has
a very simple syntax; that is, the evaluation rule for expressions can
be described by a simple general rule together with specialized rules
for a small number of special forms.[^11]
### Compound Procedures {#Section 1.1.4}
We have identified in Lisp some of the elements that must appear in any
powerful programming language:
- Numbers and arithmetic operations are primitive data and procedures.
- Nesting of combinations provides a means of combining operations.
- Definitions that associate names with values provide a limited means
of abstraction.
Now we will learn about *procedure definitions*, a much more powerful
abstraction technique by which a compound operation can be given a name
and then referred to as a unit.
We begin by examining how to express the idea of "squaring." We might
say, "To square something, multiply it by itself." This is expressed in
our language as
::: scheme
(define (square x) (\* x x))
:::
We can understand this in the following way:
::: scheme
(define (square     x)          (\*        x     x))
;  To     square    something,   multiply  it by itself.
:::
We have here a *compound procedure*, which has been given the name
`square`. The procedure represents the operation of multiplying
something by itself. The thing to be multiplied is given a local name,
`x`, which plays the same role that a pronoun plays in natural language.
Evaluating the definition creates this compound procedure and associates
it with the name `square`.[^12]
The general form of a procedure definition is
::: scheme
(define
( $\color{SchemeDark}\langle$ *name* $\color{SchemeDark}\kern0.03em\rangle$
$\color{SchemeDark}\langle$ *formal
parameters* $\color{SchemeDark}\kern0.02em\rangle$ )
$\color{SchemeDark}\langle\kern0.08em$ *body* $\color{SchemeDark}\rangle$ )
:::
The $\langle\hbox{\sl name}\kern0.08em\rangle$ is a symbol to be
associated with the procedure definition in the environment.[^13] The
$\langle\hbox{\sl formal parameters}\kern0.08em\rangle$ are the names
used within the body of the procedure to refer to the corresponding
arguments of the procedure. The
$\langle\hbox{\sl body}\kern0.08em\rangle$ is an expression that will
yield the value of the procedure application when the formal parameters
are replaced by the actual arguments to which the procedure is
applied.[^14] The $\langle$*name*$\kern0.08em\rangle$ and the
$\langle$*formal parameters*$\kern0.08em\rangle$ are grouped within
parentheses, just as they would be in an actual call to the procedure
being defined.
Having defined `square`, we can now use it:
::: scheme
(square 21) *441*
(square (+ 2 5)) *49*
(square (square 3)) *81*
:::
We can also use `square` as a building block in defining other
procedures. For example, $x^2 + y^2$ can be expressed as
::: scheme
(+ (square x) (square y))
:::
We can easily define a procedure `sum-of-squares` that, given any two
numbers as arguments, produces the sum of their squares:
::: scheme
(define (sum-of-squares x y)
  (+ (square x) (square y)))

(sum-of-squares 3 4) *25*
:::
Now we can use `sum-of-squares` as a building block in constructing
further procedures:
::: scheme
(define (f a)
  (sum-of-squares (+ a 1) (\* a 2)))

(f 5) *136*
:::
Compound procedures are used in exactly the same way as primitive
procedures. Indeed, one could not tell by looking at the definition of
`sum-of-squares` given above whether `square` was built into the
interpreter, like `+` and `*`, or defined as a compound procedure.
### The Substitution Model for Procedure Application {#Section 1.1.5}
To evaluate a combination whose operator names a compound procedure, the
interpreter follows much the same process as for combinations whose
operators name primitive procedures, which we described in [Section
1.1.3](#Section 1.1.3). That is, the interpreter evaluates the elements
of the combination and applies the procedure (which is the value of the
operator of the combination) to the arguments (which are the values of
the operands of the combination).
We can assume that the mechanism for applying primitive procedures to
arguments is built into the interpreter. For compound procedures, the
application process is as follows:
> To apply a compound procedure to arguments, evaluate the body of the
> procedure with each formal parameter replaced by the corresponding
> argument.
To illustrate this process, let's evaluate the combination
::: scheme
(f 5)
:::
where `f` is the procedure defined in [Section 1.1.4](#Section 1.1.4).
We begin by retrieving the body of `f`:
::: scheme
(sum-of-squares (+ a 1) (\* a 2))
:::
Then we replace the formal parameter `a` by the argument 5:
::: scheme
(sum-of-squares (+ 5 1) (\* 5 2))
:::
Thus the problem reduces to the evaluation of a combination with two
operands and an operator `sum-of-squares`. Evaluating this combination
involves three subproblems. We must evaluate the operator to get the
procedure to be applied, and we must evaluate the operands to get the
arguments. Now `(+ 5 1)` produces 6 and `(* 5 2)` produces 10, so we
must apply the `sum-of-squares` procedure to 6 and 10. These values are
substituted for the formal parameters `x` and `y` in the body of
`sum-of-squares`, reducing the expression to
::: scheme
(+ (square 6) (square 10))
:::
If we use the definition of `square`, this reduces to
::: scheme
(+ (\* 6 6) (\* 10 10))
:::
which reduces by multiplication to
::: scheme
(+ 36 100)
:::
and finally to
::: scheme
136
:::
The process we have just described is called the *substitution model*
for procedure application. It can be taken as a model that determines
the "meaning" of procedure application, insofar as the procedures in
this chapter are concerned. However, there are two points that should be
stressed:
- The purpose of the substitution is to help us think about procedure
application, not to provide a description of how the interpreter
really works. Typical interpreters do not evaluate procedure
applications by manipulating the text of a procedure to substitute
values for the formal parameters. In practice, the "substitution" is
accomplished by using a local environment for the formal parameters.
We will discuss this more fully in [Chapter 3](#Chapter 3) and
[Chapter 4](#Chapter 4) when we examine the implementation of an
interpreter in detail.
- Over the course of this book, we will present a sequence of
increasingly elaborate models of how interpreters work, culminating
with a complete implementation of an interpreter and compiler in
[Chapter 5](#Chapter 5). The substitution model is only the first of
these models---a way to get started thinking formally about the
evaluation process. In general, when modeling phenomena in science
and engineering, we begin with simplified, incomplete models. As we
examine things in greater detail, these simple models become
inadequate and must be replaced by more refined models. The
substitution model is no exception. In particular, when we address
in [Chapter 3](#Chapter 3) the use of procedures with "mutable
data," we will see that the substitution model breaks down and must
be replaced by a more complicated model of procedure
application.[^15]
#### Applicative order versus normal order {#applicative-order-versus-normal-order .unnumbered}
According to the description of evaluation given in [Section
1.1.3](#Section 1.1.3), the interpreter first evaluates the operator and
operands and then applies the resulting procedure to the resulting
arguments. This is not the only way to perform evaluation. An
alternative evaluation model would not evaluate the operands until their
values were needed. Instead it would first substitute operand
expressions for parameters until it obtained an expression involving
only primitive operators, and would then perform the evaluation. If we
used this method, the evaluation of `(f 5)` would proceed according to
the sequence of expansions
::: scheme
(sum-of-squares (+ 5 1) (\* 5 2))
(+ (square (+ 5 1)) (square (\* 5 2)))
(+ (\* (+ 5 1) (+ 5 1)) (\* (\* 5 2) (\* 5 2)))
:::
followed by the reductions
::: scheme
(+ (\* 6 6) (\* 10 10))
(+ 36 100)
136
:::
This gives the same answer as our previous evaluation model, but the
process is different. In particular, the evaluations of `(+ 5 1)` and
`(* 5 2)` are each performed twice here, corresponding to the reduction
of the expression `(* x x)` with `x` replaced respectively by `(+ 5 1)`
and `(* 5 2)`.
This alternative "fully expand and then reduce" evaluation method is
known as *normal-order evaluation*, in contrast to the "evaluate the
arguments and then apply" method that the interpreter actually uses,
which is called *applicative-order evaluation*. It can be shown that,
for procedure applications that can be modeled using substitution
(including all the procedures in the first two chapters of this book)
and that yield legitimate values, normal-order and applicative-order
evaluation produce the same value. (See [Exercise 1.5](#Exercise 1.5)
for an instance of an "illegitimate" value where normal-order and
applicative-order evaluation do not give the same result.)
Lisp uses applicative-order evaluation, partly because of the additional
efficiency obtained from avoiding multiple evaluations of expressions
such as those illustrated with `(+ 5 1)` and `(* 5 2)` above and, more
significantly, because normal-order evaluation becomes much more
complicated to deal with when we leave the realm of procedures that can
be modeled by substitution. On the other hand, normal-order evaluation
can be an extremely valuable tool, and we will investigate some of its
implications in [Chapter 3](#Chapter 3) and [Chapter 4](#Chapter 4).[^16]
### Conditional Expressions and Predicates {#Section 1.1.6}
The expressive power of the class of procedures that we can define at
this point is very limited, because we have no way to make tests and to
perform different operations depending on the result of a test. For
instance, we cannot define a procedure that computes the absolute value
of a number by testing whether the number is positive, negative, or zero
and taking different actions in the different cases according to the
rule
$$|x| = \left\{ \begin{array}{r@{\quad \mathrm{if} \quad}l}
x & x > 0, \\
0 & x = 0, \\
\!\! -x & x < 0. \end{array} \right.$$
This construct is called a *case analysis*, and there is a special form
in Lisp for notating such a case analysis. It is called `cond` (which
stands for "conditional"), and it is used as follows:
::: scheme
(define (abs x)
  (cond ((\> x 0) x)
        ((= x 0) 0)
        ((\< x 0) (- x))))
:::
The general form of a conditional expression is
::: scheme
(cond
( $\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *e* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
( $\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
$\color{SchemeDark}\langle$ *e* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
$\dots$
( $\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
$\color{SchemeDark}\langle$ *e* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ ))
:::
consisting of the symbol `cond` followed by parenthesized pairs of
expressions
::: scheme
( $\color{SchemeDark}\langle$ *p* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *e* $\color{SchemeDark}\rangle$ )
:::
called *clauses*. The first expression in each pair is a
*predicate*---that is, an expression whose value is interpreted as
either true or false.[^17]
Conditional expressions are evaluated as follows. The predicate
$\langle{p_1}\rangle$ is evaluated first. If its value is false, then
$\langle{p_2}\rangle$ is evaluated. If $\langle{p_2}\rangle$'s value is
also false, then $\langle{p_3}\rangle$ is evaluated. This process
continues until a predicate is found whose value is true, in which case
the interpreter returns the value of the corresponding *consequent
expression* $\langle{e}\rangle$ of the clause as the value of the
conditional expression. If none of the $\langle{p}\rangle$'s is found to
be true, the value of the `cond` is undefined.
The word *predicate* is used for procedures that return true or false,
as well as for expressions that evaluate to true or false. The
absolute-value procedure `abs` makes use of the primitive predicates
`>`, `<`, and `=`.[^18] These take two numbers as arguments and test
whether the first number is, respectively, greater than, less than, or
equal to the second number, returning true or false accordingly.
Another way to write the absolute-value procedure is
::: scheme
(define (abs x)
  (cond ((\< x 0) (- x))
        (else x)))
:::
which could be expressed in English as "If $x$ is less than zero return
$-x;$ otherwise return $x$." `else` is a special symbol that can be used
in place of the $\langle{p}\rangle$ in the final clause of a `cond`.
This causes the `cond` to return as its value the value of the
corresponding $\langle{e}\rangle$ whenever all previous clauses have
been bypassed. In fact, any expression that always evaluates to a true
value could be used as the $\langle{p}\rangle$ here.
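To make that last remark concrete, here is a sketch of ours (not from the text) in which the literal `#t`, an expression that always evaluates to a true value, plays the role of `else`:

::: scheme
(define (abs x)
  (cond ((\< x 0) (- x))
        (#t x)))   ; #t is always true, so it acts just like else
:::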
Here is yet another way to write the absolute-value procedure:
::: scheme
(define (abs x)
  (if (\< x 0)
      (- x)
      x))
:::
This uses the special form `if`, a restricted type of conditional that
can be used when there are precisely two cases in the case analysis. The
general form of an `if` expression is
::: scheme
(if
$\color{SchemeDark}\langle\kern0.07em$ *predicate* $\color{SchemeDark}\kern0.06em\rangle$
$\color{SchemeDark}\langle\kern0.07em$ *consequent* $\color{SchemeDark}\kern0.05em\rangle$
$\color{SchemeDark}\langle\kern0.06em$ *alternative* $\color{SchemeDark}\kern0.06em\rangle$ )
:::
To evaluate an `if` expression, the interpreter starts by evaluating the
$\langle$*predicate*$\kern0.04em\rangle$ part of the expression. If the
$\langle$*predicate*$\kern0.04em\rangle$ evaluates to a true value, the
interpreter then evaluates the $\langle$*consequent*$\kern0.04em\rangle$
and returns its value. Otherwise it evaluates the
$\langle$*alternative*$\kern0.04em\rangle$ and returns its value.[^19]
In addition to primitive predicates such as `<`, `=`, and `>`, there are
logical composition operations, which enable us to construct compound
predicates. The three most frequently used are these:
- $\hbox{\tt(and }\langle{e_1}\rangle\;\;\dots\;\;\langle{e_n}\rangle\hbox{\tt)}$
The interpreter evaluates the expressions
$\langle{e}\kern0.08em\rangle$ one at a time, in left-to-right
order. If any $\langle{e}\kern0.08em\rangle$ evaluates to false, the
value of the `and` expression is false, and the rest of the
$\langle{e}\kern0.08em\rangle$'s are not evaluated. If all
$\langle{e}\kern0.08em\rangle$'s evaluate to true values, the value
of the `and` expression is the value of the last one.
- $\hbox{\tt(or }\langle{e_1}\rangle\;\;\dots\;\;\langle{e_n}\rangle\hbox{\tt)}$
The interpreter evaluates the expressions
$\langle{e}\kern0.08em\rangle$ one at a time, in left-to-right
order. If any $\langle{e}\kern0.08em\rangle$ evaluates to a true
value, that value is returned as the value of the `or` expression,
and the rest of the $\langle{e}\kern0.08em\rangle$'s are not
evaluated. If all $\langle{e}\kern0.08em\rangle$'s evaluate to
false, the value of the `or` expression is false.
- $\hbox{\tt(not }\langle{e}\rangle\hbox{\tt)}$
The value of a `not` expression is true when the expression
$\langle{e}\kern0.08em\rangle$ evaluates to false, and false
otherwise.
Notice that `and` and `or` are special forms, not procedures, because
the subexpressions are not necessarily all evaluated. `not` is an
ordinary procedure.
As an example of how these are used, the condition that a number $x$ be
in the range $5 < x < 10$ may be expressed as
::: scheme
(and (\> x 5) (\< x 10))
:::
As another example, we can define a predicate to test whether one number
is greater than or equal to another as
::: scheme
(define (\>= x y) (or (\> x y) (= x y)))
:::
or alternatively as
::: scheme
(define (\>= x y) (not (\< x y)))
:::
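The fact that `and` and `or` stop evaluating as soon as the answer is determined is not merely a convenience: it lets a later subexpression rely on an earlier one. The following sketch is ours (the procedure name is illustrative, not from the text); when `x` is 0, the division is never attempted, because `and` stops at the first false subexpression:

::: scheme
(define (safe-reciprocal x)
  (if (and (not (= x 0)) (\< (/ 1 x) 1000))
      (/ 1 x)
      0))

(safe-reciprocal 0) *0*
:::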
> **[]{#Exercise 1.1 label="Exercise 1.1"}Exercise 1.1:** Below is a
> sequence of expressions. What is the result printed by the interpreter
> in response to each expression? Assume that the sequence is to be
> evaluated in the order in which it is presented.
>
> ::: scheme
> 10
> (+ 5 3 4)
> (- 9 1)
> (/ 6 2)
> (+ (\* 2 4) (- 4 6))
> (define a 3)
> (define b (+ a 1))
> (+ a b (\* a b))
> (= a b)
> (if (and (\> b a) (\< b (\* a b)))
>     b
>     a)
> :::
>
> ::: scheme
> (cond ((= a 4) 6) ((= b 4) (+ 6 7 a)) (else 25))
> :::
>
> ::: scheme
> (+ 2 (if (\> b a) b a))
> :::
>
> ::: scheme
> (\* (cond ((\> a b) a) ((\< a b) b) (else -1)) (+ a 1))
> :::
> **[]{#Exercise 1.2 label="Exercise 1.2"}Exercise 1.2:** Translate the
> following expression into prefix form:
>
> $${5 + 4 + (2 - (3 - (6 + {4\over5})))\over3(6 - 2)(2 - 7)}.$$
> **[]{#Exercise 1.3 label="Exercise 1.3"}Exercise 1.3:** Define a
> procedure that takes three numbers as arguments and returns the sum of
> the squares of the two larger numbers.
> **[]{#Exercise 1.4 label="Exercise 1.4"}Exercise 1.4:** Observe that
> our model of evaluation allows for combinations whose operators are
> compound expressions. Use this observation to describe the behavior of
> the following procedure:
>
> ::: scheme
> (define (a-plus-abs-b a b) ((if (\> b 0) + -) a b))
> :::
> **[]{#Exercise 1.5 label="Exercise 1.5"}Exercise 1.5:** Ben Bitdiddle
> has invented a test to determine whether the interpreter he is faced
> with is using applicative-order evaluation or normal-order evaluation.
> He defines the following two procedures:
>
> ::: scheme
> (define (p) (p))
> (define (test x y) (if (= x 0) 0 y))
> :::
>
> Then he evaluates the expression
>
> ::: scheme
> (test 0 (p))
> :::
>
> What behavior will Ben observe with an interpreter that uses
> applicative-order evaluation? What behavior will he observe with an
> interpreter that uses normal-order evaluation? Explain your answer.
> (Assume that the evaluation rule for the special form `if` is the same
> whether the interpreter is using normal or applicative order: The
> predicate expression is evaluated first, and the result determines
> whether to evaluate the consequent or the alternative expression.)
### Example: Square Roots by Newton's Method {#Section 1.1.7}
Procedures, as introduced above, are much like ordinary mathematical
functions. They specify a value that is determined by one or more
parameters. But there is an important difference between mathematical
functions and computer procedures. Procedures must be effective.
As a case in point, consider the problem of computing square roots. We
can define the square-root function as
$$\sqrt{x}\;\; = {\rm\;\; the\;\;} y
{\rm\;\; such\;\; that\;\;} y \ge 0 {\rm\;\; and\;\;} y^2 = x.$$
This describes a perfectly legitimate mathematical function. We could
use it to recognize whether one number is the square root of another, or
to derive facts about square roots in general. On the other hand, the
definition does not describe a procedure. Indeed, it tells us almost
nothing about how to actually find the square root of a given number. It
will not help matters to rephrase this definition in pseudo-Lisp:
::: scheme
(define (sqrt x) (the y (and (\>= y 0) (= (square y) x))))
:::
This only begs the question.
The contrast between function and procedure is a reflection of the
general distinction between describing properties of things and
describing how to do things, or, as it is sometimes referred to, the
distinction between declarative knowledge and imperative knowledge. In
mathematics we are usually concerned with declarative (what is)
descriptions, whereas in computer science we are usually concerned with
imperative (how to) descriptions.[^20]
How does one compute square roots? The most common way is to use
Newton's method of successive approximations, which says that whenever
we have a guess $y$ for the value of the square root of a number $x$, we
can perform a simple manipulation to get a better guess (one closer to
the actual square root) by averaging $y$ with $x / y$.[^21] For example,
we can compute the square root of 2 as follows. Suppose our initial
guess is 1:
| Guess  | Quotient             | Average                        |
|--------|----------------------|--------------------------------|
| 1      | (2/1) = 2            | ((2 + 1)/2) = 1.5              |
| 1.5    | (2/1.5) = 1.3333     | ((1.3333 + 1.5)/2) = 1.4167    |
| 1.4167 | (2/1.4167) = 1.4118  | ((1.4167 + 1.4118)/2) = 1.4142 |
| 1.4142 | ...                  | ...                            |
Continuing this process, we obtain better and better approximations to
the square root.
Now let's formalize the process in terms of procedures. We start with a
value for the radicand (the number whose square root we are trying to
compute) and a value for the guess. If the guess is good enough for our
purposes, we are done; if not, we must repeat the process with an
improved guess. We write this basic strategy as a procedure:
::: scheme
(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))
:::
A guess is improved by averaging it with the quotient of the radicand
and the old guess:
::: scheme
(define (improve guess x) (average guess (/ x guess)))
:::
where
::: scheme
(define (average x y) (/ (+ x y) 2))
:::
We also have to say what we mean by "good enough." The following will do
for illustration, but it is not really a very good test. (See [Exercise
1.7](#Exercise 1.7).) The idea is to improve the answer until it is
close enough so that its square differs from the radicand by less than a
predetermined tolerance (here 0.001):[^22]
::: scheme
(define (good-enough? guess x) (\< (abs (- (square guess) x)) 0.001))
:::
Finally, we need a way to get started. For instance, we can always guess
that the square root of any number is 1:[^23]
::: scheme
(define (sqrt x) (sqrt-iter 1.0 x))
:::
If we type these definitions to the interpreter, we can use `sqrt` just
as we can use any procedure:
::: scheme
(sqrt 9) *3.00009155413138*
(sqrt (+ 100 37)) *11.704699917758145*
(sqrt (+ (sqrt 2) (sqrt 3))) *1.7739279023207892*
(square (sqrt 1000)) *1000.000369924366*
:::
The `sqrt` program also illustrates that the simple procedural language
we have introduced so far is sufficient for writing any purely numerical
program that one could write in, say, C or Pascal. This might seem
surprising, since we have not included in our language any iterative
(looping) constructs that direct the computer to do something over and
over again. `sqrt-iter`, on the other hand, demonstrates how iteration
can be accomplished using no special construct other than the ordinary
ability to call a procedure.[^24]
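Because the iteration is carried out by ordinary procedure calls, it is easy to instrument. The following variant is a sketch of ours, not part of the text; it counts how many improvements were needed before the guess became good enough:

::: scheme
(define (sqrt-steps guess x count)
  (if (good-enough? guess x)
      count
      (sqrt-steps (improve guess x) x (+ count 1))))

; with the 0.001 tolerance used above, three improvements suffice for 2
(sqrt-steps 1.0 2 0) *3*
:::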
> **[]{#Exercise 1.6 label="Exercise 1.6"}Exercise 1.6:** Alyssa P.
> Hacker doesn't see why `if` needs to be provided as a special form.
> "Why can't I just define it as an ordinary procedure in terms of
> `cond`?" she asks. Alyssa's friend Eva Lu Ator claims this can indeed
> be done, and she defines a new version of `if`:
>
> ::: scheme
> (define (new-if predicate then-clause else-clause)
>   (cond (predicate then-clause)
>         (else else-clause)))
> :::
>
> Eva demonstrates the program for Alyssa:
>
> ::: scheme
> (new-if (= 2 3) 0 5) *5*
> (new-if (= 1 1) 0 5) *0*
> :::
>
> Delighted, Alyssa uses `new-if` to rewrite the square-root program:
>
> ::: scheme
> (define (sqrt-iter guess x)
>   (new-if (good-enough? guess x)
>           guess
>           (sqrt-iter (improve guess x) x)))
> :::
>
> What happens when Alyssa attempts to use this to compute square roots?
> Explain.
> **[]{#Exercise 1.7 label="Exercise 1.7"}Exercise 1.7:** The
> `good-enough?` test used in computing square roots will not be very
> effective for finding the square roots of very small numbers. Also, in
> real computers, arithmetic operations are almost always performed with
> limited precision. This makes our test inadequate for very large
> numbers. Explain these statements, with examples showing how the test
> fails for small and large numbers. An alternative strategy for
> implementing `good-enough?` is to watch how `guess` changes from one
> iteration to the next and to stop when the change is a very small
> fraction of the guess. Design a square-root procedure that uses this
> kind of end test. Does this work better for small and large numbers?
> **[]{#Exercise 1.8 label="Exercise 1.8"}Exercise 1.8:** Newton's
> method for cube roots is based on the fact that if $y$ is an
> approximation to the cube root of $x$, then a better approximation is
> given by the value
>
> $${{x / y^2} + 2y \over 3}.$$
>
> Use this formula to implement a cube-root procedure analogous to the
> square-root procedure. (In [Section 1.3.4](#Section 1.3.4) we will see
> how to implement Newton's method in general as an abstraction of these
> square-root and cube-root procedures.)
### Procedures as Black-Box Abstractions {#Section 1.1.8}
`sqrt` is our first example of a process defined by a set of mutually
defined procedures. Notice that the definition of `sqrt-iter` is
*recursive*; that is, the procedure is defined in terms of itself. The
idea of being able to define a procedure in terms of itself may be
disturbing; it may seem unclear how such a "circular" definition could
make sense at all, much less specify a well-defined process to be
carried out by a computer. This will be addressed more carefully in
[Section 1.2](#Section 1.2). But first let's consider some other
important points illustrated by the `sqrt` example.
Observe that the problem of computing square roots breaks up naturally
into a number of subproblems: how to tell whether a guess is good
enough, how to improve a guess, and so on. Each of these tasks is
accomplished by a separate procedure. The entire `sqrt` program can be
viewed as a cluster of procedures (shown in [Figure 1.2](#Figure 1.2))
that mirrors the decomposition of the problem into subproblems.
[]{#Figure 1.2 label="Figure 1.2"}
![image](fig/chap1/Fig1.2.pdf){width="44mm"}
> **Figure 1.2:** Procedural decomposition of the `sqrt` program.
The importance of this decomposition strategy is not simply that one is
dividing the program into parts. After all, we could take any large
program and divide it into parts---the first ten lines, the next ten
lines, the next ten lines, and so on. Rather, it is crucial that each
procedure accomplishes an identifiable task that can be used as a module
in defining other procedures. For example, when we define the
`good-enough?` procedure in terms of `square`, we are able to regard the
`square` procedure as a "black box." We are not at that moment concerned
with *how* the procedure computes its result, only with the fact that it
computes the square. The details of how the square is computed can be
suppressed, to be considered at a later time. Indeed, as far as the
`good-enough?` procedure is concerned, `square` is not quite a procedure
but rather an abstraction of a procedure, a so-called *procedural
abstraction*. At this level of abstraction, any procedure that computes
the square is equally good.
Thus, considering only the values they return, the following two
procedures for squaring a number should be indistinguishable. Each takes
a numerical argument and produces the square of that number as the
value.[^25]
::: scheme
(define (square x) (\* x x))

(define (square x) (exp (double (log x))))
(define (double x) (+ x x))
:::
So a procedure definition should be able to suppress detail. The users
of the procedure may not have written the procedure themselves, but may
have obtained it from another programmer as a black box. A user should
not need to know how the procedure is implemented in order to use it.
#### Local names {#local-names .unnumbered}
One detail of a procedure's implementation that should not matter to the
user of the procedure is the implementer's choice of names for the
procedure's formal parameters. Thus, the following procedures should not
be distinguishable:
::: scheme
(define (square x) (\* x x))
(define (square y) (\* y y))
:::
This principle---that the meaning of a procedure should be independent
of the parameter names used by its author---seems on the surface to be
self-evident, but its consequences are profound. The simplest
consequence is that the parameter names of a procedure must be local to
the body of the procedure. For example, we used `square` in the
definition of `good-enough?` in our square-root procedure:
::: scheme
(define (good-enough? guess x) (\< (abs (- (square guess) x)) 0.001))
:::
The intention of the author of `good-enough?` is to determine if the
square of the first argument is within a given tolerance of the second
argument. We see that the author of `good-enough?` used the name `guess`
to refer to the first argument and `x` to refer to the second argument.
The argument of `square` is `guess`. If the author of `square` used `x`
(as above) to refer to that argument, we see that the `x` in
`good-enough?` must be a different `x` than the one in `square`. Running
the procedure `square` must not affect the value of `x` that is used by
`good-enough?`, because that value of `x` may be needed by
`good-enough?` after `square` is done computing.
If the parameters were not local to the bodies of their respective
procedures, then the parameter `x` in `square` could be confused with
the parameter `x` in `good-enough?`, and the behavior of `good-enough?`
would depend upon which version of `square` we used. Thus, `square`
would not be the black box we desired.
A formal parameter of a procedure has a very special role in the
procedure definition, in that it doesn't matter what name the formal
parameter has. Such a name is called a *bound variable*, and we say that
the procedure definition *binds* its formal parameters. The meaning of a
procedure definition is unchanged if a bound variable is consistently
renamed throughout the definition.[^26] If a variable is not bound, we
say that it is *free*. The set of expressions for which a binding
defines a name is called the *scope* of that name. In a procedure
definition, the bound variables declared as the formal parameters of the
procedure have the body of the procedure as their scope.
In the definition of `good-enough?` above, `guess` and `x` are bound
variables but `<`, `-`, `abs`, and `square` are free. The meaning of
`good-enough?` should be independent of the names we choose for `guess`
and `x` so long as they are distinct and different from `<`, `-`, `abs`,
and `square`. (If we renamed `guess` to `abs` we would have introduced a
bug by *capturing* the variable `abs`. It would have changed from free
to bound.) The meaning of `good-enough?` is not independent of the names
of its free variables, however. It surely depends upon the fact
(external to this definition) that the symbol `abs` names a procedure
for computing the absolute value of a number. `good-enough?` will
compute a different function if we substitute `cos` for `abs` in its
definition.
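To see the capture concretely, here is what the renamed definition would look like; this deliberately broken sketch is ours, not the book's:

::: scheme
(define (good-enough? abs x)
  ; abs is now a bound variable naming a number, so the attempt
  ; below to apply it as a procedure is an error
  (\< (abs (- (square abs) x)) 0.001))
:::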
#### Internal definitions and block structure {#internal-definitions-and-block-structure .unnumbered}
We have one kind of name isolation available to us so far: The formal
parameters of a procedure are local to the body of the procedure. The
square-root program illustrates another way in which we would like to
control the use of names. The existing program consists of separate
procedures:
::: scheme
(define (sqrt x)
  (sqrt-iter 1.0 x))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (good-enough? guess x)
  (\< (abs (- (square guess) x)) 0.001))

(define (improve guess x)
  (average guess (/ x guess)))
:::
The problem with this program is that the only procedure that is
important to users of `sqrt` is `sqrt`. The other procedures
(`sqrt-iter`, `good-enough?`, and `improve`) only clutter up their
minds. Users may not define any other procedure called `good-enough?` as
part of another program intended to work together with the square-root
program, because `sqrt` needs it.
construction of large systems by many separate programmers. For example,
in the construction of a large library of numerical procedures, many
numerical functions are computed as successive approximations and thus
might have procedures named `good-enough?` and `improve` as auxiliary
procedures. We would like to localize the subprocedures, hiding them
inside `sqrt` so that `sqrt` could coexist with other successive
approximations, each having its own private `good-enough?` procedure. To
make this possible, we allow a procedure to have internal definitions
that are local to that procedure. For example, in the square-root
problem we can write
::: scheme
(define (sqrt x)
  (define (good-enough? guess x)
    (\< (abs (- (square guess) x)) 0.001))
  (define (improve guess x)
    (average guess (/ x guess)))
  (define (sqrt-iter guess x)
    (if (good-enough? guess x)
        guess
        (sqrt-iter (improve guess x) x)))
  (sqrt-iter 1.0 x))
:::
Such nesting of definitions, called *block structure*, is basically the
right solution to the simplest name-packaging problem. But there is a
better idea lurking here. In addition to internalizing the definitions
of the auxiliary procedures, we can simplify them. Since `x` is bound in
the definition of `sqrt`, the procedures `good-enough?`, `improve`, and
`sqrt-iter`, which are defined internally to `sqrt`, are in the scope of
`x`. Thus, it is not necessary to pass `x` explicitly to each of these
procedures. Instead, we allow `x` to be a free variable in the internal
definitions, as shown below. Then `x` gets its value from the argument
with which the enclosing procedure `sqrt` is called. This discipline is
called *lexical scoping*.[^27]
::: scheme
(define (sqrt x)
  (define (good-enough? guess)
    (\< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
:::
We will use block structure extensively to help us break up large
programs into tractable pieces.[^28] The idea of block structure
originated with the programming language Algol 60. It appears in most
advanced programming languages and is an important tool for helping to
organize the construction of large programs.
## Procedures and the Processes They Generate {#Section 1.2}
We have now considered the elements of programming: We have used
primitive arithmetic operations, we have combined these operations, and
we have abstracted these composite operations by defining them as
compound procedures. But that is not enough to enable us to say that we
know how to program. Our situation is analogous to that of someone who
has learned the rules for how the pieces move in chess but knows nothing
of typical openings, tactics, or strategy. Like the novice chess player,
we don't yet know the common patterns of usage in the domain. We lack
the knowledge of which moves are worth making (which procedures are
worth defining). We lack the experience to predict the consequences of
making a move (executing a procedure).
The ability to visualize the consequences of the actions under
consideration is crucial to becoming an expert programmer, just as it is
in any synthetic, creative activity. In becoming an expert photographer,
for example, one must learn how to look at a scene and know how dark
each region will appear on a print for each possible choice of exposure
and development conditions. Only then can one reason backward, planning
framing, lighting, exposure, and development to obtain the desired
effects. So it is with programming, where we are planning the course of
action to be taken by a process and where we control the process by
means of a program. To become experts, we must learn to visualize the
processes generated by various types of procedures. Only after we have
developed such a skill can we learn to reliably construct programs that
exhibit the desired behavior.
A procedure is a pattern for the *local evolution* of a computational
process. It specifies how each stage of the process is built upon the
previous stage. We would like to be able to make statements about the
overall, or *global*, behavior of a process whose local evolution has
been specified by a procedure. This is very difficult to do in general,
but we can at least try to describe some typical patterns of process
evolution.
In this section we will examine some common "shapes" for processes
generated by simple procedures. We will also investigate the rates at
which these processes consume the important computational resources of
time and space. The procedures we will consider are very simple. Their
role is like that played by test patterns in photography: as
oversimplified prototypical patterns, rather than practical examples in
their own right.
### Linear Recursion and Iteration {#Section 1.2.1}
We begin by considering the factorial function, defined by
$$n! = n \cdot (n - 1) \cdot (n - 2) \cdots 3 \cdot 2 \cdot 1.$$
There are many ways to compute factorials. One way is to make use of the
observation that $n!$ is equal to $n$ times $(n - 1)!$ for any positive
integer $n$:
$$n! = n \cdot [(n - 1) \cdot (n - 2) \cdots 3 \cdot 2 \cdot 1] = n \cdot (n - 1)!.$$
Thus, we can compute $n!$ by computing $(n - 1)!$ and multiplying the
result by $n$. If we add the stipulation that 1! is equal to 1, this
observation translates directly into a procedure:
::: scheme
(define (factorial n)
  (if (= n 1)
      1
      (\* n (factorial (- n 1)))))
:::
We can use the substitution model of [Section 1.1.5](#Section 1.1.5) to
watch this procedure in action computing 6!, as shown in [Figure
1.3](#Figure 1.3).
[]{#Figure 1.3 label="Figure 1.3"}
![image](fig/chap1/Fig1.3c.pdf){width="82mm"}
**Figure 1.3:** A linear recursive process for computing 6!.
Now let's take a different perspective on computing factorials. We could
describe a rule for computing $n!$ by specifying that we first multiply
1 by 2, then multiply the result by 3, then by 4, and so on until we
reach $n$. More formally, we maintain a running product, together with a
counter that counts from 1 up to $n$. We can describe the computation by
saying that the counter and the product simultaneously change from one
step to the next according to the rule
::: scheme
product $\color{SchemeDark}\gets$ counter \* product
counter $\color{SchemeDark}\gets$ counter + 1
:::
and stipulating that $n!$ is the value of the product when the counter
exceeds $n$.
[]{#Figure 1.4 label="Figure 1.4"}
![image](fig/chap1/Fig1.4c.pdf){width="36mm"}
**Figure 1.4:** A linear iterative process for computing 6!.
Once again, we can recast our description as a procedure for computing
factorials:[^29]
::: scheme
(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (\> counter max-count)
      product
      (fact-iter (\* counter product)
                 (+ counter 1)
                 max-count)))
:::
As before, we can use the substitution model to visualize the process of
computing 6!, as shown in [Figure 1.4](#Figure 1.4).
Compare the two processes. From one point of view, they seem hardly
different at all. Both compute the same mathematical function on the
same domain, and each requires a number of steps proportional to $n$ to
compute $n!$. Indeed, both processes even carry out the same sequence of
multiplications, obtaining the same sequence of partial products. On the
other hand, when we consider the "shapes" of the two processes, we find
that they evolve quite differently.
Consider the first process. The substitution model reveals a shape of
expansion followed by contraction, indicated by the arrow in [Figure
1.3](#Figure 1.3). The expansion occurs as the process builds up a chain
of *deferred operations* (in this case, a chain of multiplications). The
contraction occurs as the operations are actually performed. This type
of process, characterized by a chain of deferred operations, is called a
*recursive process*. Carrying out this process requires that the
interpreter keep track of the operations to be performed later on. In
the computation of $n!$, the length of the chain of deferred
multiplications, and hence the amount of information needed to keep
track of it, grows linearly with $n$ (is proportional to $n$), just like
the number of steps. Such a process is called a *linear recursive
process*.
By contrast, the second process does not grow and shrink. At each step,
all we need to keep track of, for any $n$, are the current values of the
variables `product`, `counter`, and `max-count`. We call this an
*iterative process*. In general, an iterative process is one whose state
can be summarized by a fixed number of *state variables*, together with
a fixed rule that describes how the state variables should be updated as
the process moves from state to state and an (optional) end test that
specifies conditions under which the process should terminate. In
computing $n!$, the number of steps required grows linearly with $n$.
Such a process is called a *linear iterative process*.
The contrast between the two processes can be seen in another way. In
the iterative case, the program variables provide a complete description
of the state of the process at any point. If we stopped the computation
between steps, all we would need to do to resume the computation is to
supply the interpreter with the values of the three program variables.
Not so with the recursive process. In this case there is some additional
"hidden" information, maintained by the interpreter and not contained in
the program variables, which indicates "where the process is" in
negotiating the chain of deferred operations. The longer the chain, the
more information must be maintained.[^30]
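For readers without the figures at hand, the contrast in shape shows up even in a small hand trace of 3! (our annotation, not the book's):

::: scheme
; recursive process: a chain of deferred multiplications builds up,
; then collapses
(factorial 3)
(\* 3 (factorial 2))
(\* 3 (\* 2 (factorial 1)))
(\* 3 (\* 2 1))
(\* 3 2)
6

; iterative process: the state variables alone describe each step
(fact-iter 1 1 3)
(fact-iter 1 2 3)
(fact-iter 2 3 3)
(fact-iter 6 4 3)
6
:::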
In contrasting iteration and recursion, we must be careful not to
confuse the notion of a recursive *process* with the notion of a
recursive *procedure*. When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers
(either directly or indirectly) to the procedure itself. But when we
describe a process as following a pattern that is, say, linearly
recursive, we are speaking about how the process evolves, not about the
syntax of how a procedure is written. It may seem disturbing that we
refer to a recursive procedure such as `fact-iter` as generating an
iterative process. However, the process really is iterative: Its state
is captured completely by its three state variables, and an interpreter
need keep track of only three variables in order to execute the process.
One reason that the distinction between process and procedure may be
confusing is that most implementations of common languages (including
Ada, Pascal, and C) are designed in such a way that the interpretation
of any recursive procedure consumes an amount of memory that grows with
the number of procedure calls, even when the process described is, in
principle, iterative. As a consequence, these languages can describe
iterative processes only by resorting to special-purpose "looping
constructs" such as `do`, `repeat`, `until`, `for`, and `while`. The
implementation of Scheme we shall consider in [Chapter 5](#Chapter 5)
does not share this defect. It will execute an iterative process in
constant space, even if the iterative process is described by a
recursive procedure. An implementation with this property is called
*tail-recursive*. With a tail-recursive implementation, iteration can be
expressed using the ordinary procedure call mechanism, so that special
iteration constructs are useful only as syntactic sugar.[^31]
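As a small sketch of ours illustrating the point, the following recursive procedure describes an iterative process; with a tail-recursive implementation it runs in constant space however large `n` is:

::: scheme
(define (count-down n)
  (if (= n 0)
      0
      (count-down (- n 1))))

(count-down 1000000) *0*
:::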
> **[]{#Exercise 1.9 label="Exercise 1.9"}Exercise 1.9:** Each of the
> following two procedures defines a method for adding two positive
> integers in terms of the procedures `inc`, which increments its
> argument by 1, and `dec`, which decrements its argument by 1.
>
> ::: scheme
> (define (+ a b)
>   (if (= a 0) b (inc (+ (dec a) b))))
> (define (+ a b)
>   (if (= a 0) b (+ (dec a) (inc b))))
> :::
>
> Using the substitution model, illustrate the process generated by each
> procedure in evaluating `(+ 4 5)`. Are these processes iterative or
> recursive?
> **[]{#Exercise 1.10 label="Exercise 1.10"}Exercise 1.10:** The
> following procedure computes a mathematical function called
> Ackermann's function.
>
> ::: scheme
> (define (A x y)
>   (cond ((= y 0) 0)
>         ((= x 0) (\* 2 y))
>         ((= y 1) 2)
>         (else (A (- x 1) (A x (- y 1))))))
> :::
>
> What are the values of the following expressions?
>
> ::: scheme
> (A 1 10)
> (A 2 4)
> (A 3 3)
> :::
>
> Consider the following procedures, where `A` is the procedure defined
> above:
>
> ::: scheme
> (define (f n) (A 0 n))
> (define (g n) (A 1 n))
> (define (h n) (A 2 n))
> (define (k n) (\* 5 n n))
> :::
>
> Give concise mathematical definitions for the functions computed by
> the procedures `f`, `g`, and `h` for positive integer values of $n$.
> For example, `(k n)` computes $5n^2$.
### Tree Recursion {#Section 1.2.2}
Another common pattern of computation is called *tree recursion*. As an
example, consider computing the sequence of Fibonacci numbers, in which
each number is the sum of the preceding two:
$$0,\; 1,\; 1,\; 2,\; 3,\; 5,\; 8,\; 13,\; 21,\; \dots.$$
In general, the Fibonacci numbers can be defined by the rule
$${\rm Fib}(n) =
\begin{cases}
\; 0 & {\rm if} \;\; n=0, \\
\; 1 & {\rm if} \;\; n=1, \\
\; {\rm Fib}(n-1) + {\rm Fib}(n-2) \quad & {\rm otherwise}.
\end{cases}$$
We can immediately translate this definition into a recursive procedure
for computing Fibonacci numbers:
::: scheme
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
:::
Consider the pattern of this computation. To compute `(fib 5)`, we
compute `(fib 4)` and `(fib 3)`. To compute `(fib 4)`, we compute
`(fib 3)` and `(fib 2)`. In general, the evolved process looks like a
tree, as shown in [Figure 1.5](#Figure 1.5). Notice that the branches
split into two at each level (except at the bottom); this reflects the
fact that the `fib` procedure calls itself twice each time it is
invoked.
This procedure is instructive as a prototypical tree recursion, but it
is a terrible way to compute Fibonacci numbers because it does so much
redundant computation. Notice in [Figure 1.5](#Figure 1.5) that the
entire computation of `(fib 3)`---almost half the work---is duplicated.
In fact, it is not hard to show that the number of times the procedure
will compute `(fib 1)` or `(fib 0)` (the number of leaves in the above
tree, in general) is precisely Fib($n+1$). To get an idea of how bad
this is, one can show that the value of Fib($n$) grows exponentially
with $n$. More precisely (see [Exercise 1.13](#Exercise 1.13)), Fib($n$)
is the closest integer to $\varphi^n / \sqrt{5}$, where
$$\varphi = {1 + \sqrt{5}\over2} \approx 1.6180$$
is the *golden ratio*, which satisfies the equation
$$\varphi^2 = \varphi + 1.$$
[]{#Figure 1.5 label="Figure 1.5"}
![image](fig/chap1/Fig1.5c.pdf){width="90mm"}
> **Figure 1.5:** The tree-recursive process generated in computing
> `(fib 5)`.
Thus, the process uses a number of steps that grows exponentially with
the input. On the other hand, the space required grows only linearly
with the input, because we need keep track only of which nodes are above
us in the tree at any point in the computation. In general, the number
of steps required by a tree-recursive process will be proportional to
the number of nodes in the tree, while the space required will be
proportional to the maximum depth of the tree.
We can also formulate an iterative process for computing the Fibonacci
numbers. The idea is to use a pair of integers $a$ and $b$, initialized
to Fib(1) = 1 and Fib(0) = 0, and to repeatedly apply the simultaneous
transformations
$$\begin{array}{l@{\;\;\gets\;\;}l}
a & a + b, \\
b & a.
\end{array}$$
It is not hard to show that, after applying this transformation $n$
times, $a$ and $b$ will be equal, respectively, to Fib($n+1$) and
Fib($n$). Thus, we can compute Fibonacci numbers iteratively using the
procedure
::: scheme
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
:::
This second method for computing Fib($n$) is a linear iteration. The
difference in number of steps required by the two methods---one linear
in $n$, one growing as fast as Fib($n$) itself---is enormous, even for
small inputs.
One should not conclude from this that tree-recursive processes are
useless. When we consider processes that operate on hierarchically
structured data rather than numbers, we will find that tree recursion is
a natural and powerful tool.[^32] But even in numerical operations,
tree-recursive processes can be useful in helping us to understand and
design programs. For instance, although the first `fib` procedure is
much less efficient than the second one, it is more straightforward,
being little more than a translation into Lisp of the definition of the
Fibonacci sequence. Formulating the iterative algorithm required
noticing that the computation could be recast as an iteration with three
state variables.
#### Example: Counting change {#example-counting-change .unnumbered}
It takes only a bit of cleverness to come up with the iterative
Fibonacci algorithm. In contrast, consider the following problem: How
many different ways can we make change of \$1.00, given half-dollars,
quarters, dimes, nickels, and pennies? More generally, can we write a
procedure to compute the number of ways to change any given amount of
money?
This problem has a simple solution as a recursive procedure. Suppose we
think of the types of coins available as arranged in some order. Then
the following relation holds:
The number of ways to change amount $a$ using $n$ kinds of coins equals
- the number of ways to change amount $a$ using all but the first kind
of coin, plus
- the number of ways to change amount $a - d$ using all $n$ kinds of
coins, where $d$ is the denomination of the first kind of coin.
To see why this is true, observe that the ways to make change can be
divided into two groups: those that do not use any of the first kind of
coin, and those that do. Therefore, the total number of ways to make
change for some amount is equal to the number of ways to make change for
the amount without using any of the first kind of coin, plus the number
of ways to make change assuming that we do use the first kind of coin.
But the latter number is equal to the number of ways to make change for
the amount that remains after using a coin of the first kind.
Thus, we can recursively reduce the problem of changing a given amount
to the problem of changing smaller amounts using fewer kinds of coins.
Consider this reduction rule carefully, and convince yourself that we
can use it to describe an algorithm if we specify the following
degenerate cases:[^33]
- If $a$ is exactly 0, we should count that as 1 way to make change.
- If $a$ is less than 0, we should count that as 0 ways to make
change.
- If $n$ is 0, we should count that as 0 ways to make change.
We can easily translate this description into a recursive procedure:
::: scheme
(define (count-change amount) (cc amount 5))

(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount (- kinds-of-coins 1))
                 (cc (- amount (first-denomination kinds-of-coins))
                     kinds-of-coins)))))

(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1) ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10) ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))
:::
(The `first-denomination` procedure takes as input the number of kinds
of coins available and returns the denomination of the first kind. Here
we are thinking of the coins as arranged in order from largest to
smallest, but any order would do as well.) We can now answer our
original question about changing a dollar:
::: scheme
(count-change 100) *292*
:::
`count-change` generates a tree-recursive process with redundancies
similar to those in our first implementation of `fib`. (It will take
quite a while for that 292 to be computed.) On the other hand, it is not
obvious how to design a better algorithm for computing the result, and
we leave this problem as a challenge. The observation that a
tree-recursive process may be highly inefficient but often easy to
specify and understand has led people to propose that one could get the
best of both worlds by designing a "smart compiler" that could transform
tree-recursive procedures into more efficient procedures that compute
the same result.[^34]
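The "smart compiler" idea can also be sketched by hand. The version below is our own illustration, not part of the text: it remembers already-solved subproblems in an association list (using the standard Scheme procedures `assoc` and `set!`, the latter not yet discussed at this point), so each (amount, kinds-of-coins) pair is computed only once while the answer stays the same.
::: scheme
(define (count-change-memo amount)
  (let ((table '()))                      ; association list of cached results
    (define (lookup key) (assoc key table))
    (define (record! key value)
      (set! table (cons (cons key value) table))
      value)
    (define (cc amount kinds-of-coins)
      (cond ((= amount 0) 1)
            ((or (< amount 0) (= kinds-of-coins 0)) 0)
            (else
             (let ((cached (lookup (cons amount kinds-of-coins))))
               (if cached
                   (cdr cached)
                   (record! (cons amount kinds-of-coins)
                            (+ (cc amount (- kinds-of-coins 1))
                               (cc (- amount
                                      (first-denomination kinds-of-coins))
                                   kinds-of-coins))))))))
    (cc amount 5)))

(count-change-memo 100)
*292*
:::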
> **[]{#Exercise 1.11 label="Exercise 1.11"}Exercise 1.11:** A function
> $f$ is defined by the rule that
>
> $$f(n) =
> \begin{cases}
> \;\; n \quad \text{if \; \( n < 3 \),} \\
> \;\; f(n-1) + 2\kern-0.08em f(n-2) + 3\kern-0.08em f(n-3) \quad \text{if \; \( n \ge 3 \).}
> \end{cases}$$
>
> Write a procedure that computes $f$ by means of a recursive process.
> Write a procedure that computes $f$ by means of an iterative process.
> **[]{#Exercise 1.12 label="Exercise 1.12"}Exercise 1.12:** The
> following pattern of numbers is called *Pascal's triangle*.
>
>          1
>         1 1
>        1 2 1
>       1 3 3 1
>      1 4 6 4 1
>        . . .
>
> The numbers at the edge of the triangle are all 1, and each number
> inside the triangle is the sum of the two numbers above it.[^35] Write
> a procedure that computes elements of Pascal's triangle by means of a
> recursive process.
> **[]{#Exercise 1.13 label="Exercise 1.13"}Exercise 1.13:** Prove that
> Fib($n$) is the closest integer to $\varphi^n / \sqrt{5}$, where
> $\varphi = (1 +
> \sqrt{5}) / 2$. Hint: Let $\psi = (1 - \sqrt{5}) / 2$. Use induction
> and the definition of the Fibonacci numbers (see [Section
> 1.2.2](#Section 1.2.2)) to prove that
> $\text{Fib}(n) = (\varphi^n - \psi^n) / \sqrt{5}$.
### Orders of Growth {#Section 1.2.3}
The previous examples illustrate that processes can differ considerably
in the rates at which they consume computational resources. One
convenient way to describe this difference is to use the notion of
*order of growth* to obtain a gross measure of the resources required by
a process as the inputs become larger.
Let $n$ be a parameter that measures the size of the problem, and let
$R(n)$ be the amount of resources the process requires for a problem of
size $n$. In our previous examples we took $n$ to be the number for
which a given function is to be computed, but there are other
possibilities. For instance, if our goal is to compute an approximation
to the square root of a number, we might take $n$ to be the number of
digits accuracy required. For matrix multiplication we might take $n$ to
be the number of rows in the matrices. In general there are a number of
properties of the problem with respect to which it will be desirable to
analyze a given process. Similarly, $R(n)$ might measure the number of
internal storage registers used, the number of elementary machine
operations performed, and so on. In computers that do only a fixed
number of operations at a time, the time required will be proportional
to the number of elementary machine operations performed.
We say that $R(n)$ has order of growth $\Theta(f(n))$, written $R(n)$ =
$\Theta(f(n))$ (pronounced "theta of $f(n)$"), if there are positive
constants $k_1$ and $k_2$ independent of $n$ such that
$k_1f(n) \le R(n) \le k_2f(n)$ for any sufficiently large value of $n$.
(In other words, for large $n$, the value $R(n)$ is sandwiched between
$k_1f(n)$ and $k_2f(n)$.)
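As a concrete check of this definition (our numbers, not the text's), take $R(n) = 3n^2 + 10n + 17$ and $f(n) = n^2$. Then
$$3n^2 \;\le\; 3n^2 + 10n + 17 \;\le\; 4n^2 \qquad \text{for all } n \ge 12,$$
so the constants $k_1 = 3$ and $k_2 = 4$ witness that $R(n) = \Theta(n^2)$.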
For instance, with the linear recursive process for computing factorial
described in [Section 1.2.1](#Section 1.2.1) the number of steps grows
proportionally to the input $n$. Thus, the steps required for this
process grows as $\Theta(n)$. We also saw that the space required grows
as $\Theta(n)$. For the iterative factorial, the number of steps is
still $\Theta(n)$ but the space is $\Theta(1)$---that is, constant.[^36]
The tree-recursive Fibonacci computation requires $\Theta(\varphi^n)$
steps and space $\Theta(n)$, where $\varphi$ is the golden ratio
described in [Section 1.2.2](#Section 1.2.2).
Orders of growth provide only a crude description of the behavior of a
process. For example, a process requiring $n^2$ steps and a process
requiring $1000n^2$ steps and a process requiring $3n^2 + 10n + 17$
steps all have $\Theta(n^2)$ order of growth. On the other hand, order
of growth provides a useful indication of how we may expect the behavior
of the process to change as we change the size of the problem. For a
$\Theta(n)$ (linear) process, doubling the size will roughly double the
amount of resources used. For an exponential process, each increment in
problem size will multiply the resource utilization by a constant
factor. In the remainder of [Section 1.2](#Section 1.2) we will examine
two algorithms whose order of growth is logarithmic, so that doubling
the problem size increases the resource requirement by a constant
amount.
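The claim about logarithmic processes follows from one line of algebra: if the resource consumption were exactly $R(n) = k\log n$ for some constant $k$, then
$$R(2n) = k\log 2n = k\log n + k\log 2 = R(n) + k\log 2,$$
so doubling the problem size adds only the fixed amount $k\log 2$.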
> **[]{#Exercise 1.14 label="Exercise 1.14"}Exercise 1.14:** Draw the
> tree illustrating the process generated by the `count-change`
> procedure of [Section 1.2.2](#Section 1.2.2) in making change for 11
> cents. What are the orders of growth of the space and number of steps
> used by this process as the amount to be changed increases?
> **[]{#Exercise 1.15 label="Exercise 1.15"}Exercise 1.15:** The sine of
> an angle (specified in radians) can be computed by making use of the
> approximation $\sin x \approx x$ if $x$ is sufficiently small, and the
> trigonometric identity
>
> $$\sin x = 3\sin {x\over3} - 4\sin^3 {x\over3}$$
>
> to reduce the size of the argument of sin. (For purposes of this
> exercise an angle is considered "sufficiently small" if its magnitude
> is not greater than 0.1 radians.) These ideas are incorporated in the
> following procedures:
>
> ::: scheme
> (define (cube x) (* x x x))
> (define (p x) (- (* 3 x) (* 4 (cube x))))
> (define (sine angle)
>   (if (not (> (abs angle) 0.1))
>       angle
>       (p (sine (/ angle 3.0)))))
> :::
>
> a. How many times is the procedure `p` applied when `(sine 12.15)` is
> evaluated?
>
> b. What is the order of growth in space and number of steps (as a
> function of $a$) used by the process generated by the `sine`
> procedure when `(sine a)` is evaluated?
### Exponentiation {#Section 1.2.4}
Consider the problem of computing the exponential of a given number. We
would like a procedure that takes as arguments a base $b$ and a positive
integer exponent $n$ and computes $b^n$. One way to do this is via the
recursive definition
$$\begin{array}{l@{{}={}}l}
b^n & b\cdot b^{n-1}, \\
b^0 & 1,
\end{array}$$
which translates readily into the procedure
::: scheme
(define (expt b n)
  (if (= n 0) 1 (* b (expt b (- n 1)))))
:::
This is a linear recursive process, which requires $\Theta(n)$ steps and
$\Theta(n)$ space. Just as with factorial, we can readily formulate an
equivalent linear iteration:
::: scheme
(define (expt b n) (expt-iter b n 1))

(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b (- counter 1) (* b product))))
:::
This version requires $\Theta(n)$ steps and $\Theta(1)$ space.
We can compute exponentials in fewer steps by using successive squaring.
For instance, rather than computing $b^8$ as
$$b\cdot (b\cdot (b\cdot (b\cdot (b\cdot (b\cdot (b\cdot b))))))\,,$$
we can compute it using three multiplications:
$$\begin{array}{l@{{}={}}l}
b^2 & b\cdot b, \\
b^4 & b^2\cdot b^2, \\
b^8 & b^4\cdot b^4.
\end{array}$$
This method works fine for exponents that are powers of 2. We can also
take advantage of successive squaring in computing exponentials in
general if we use the rule
$$b^n =
\begin{cases}
\; (b^{n/2})^2 & {\rm if} \;\; n \;\, {\rm is\ even}, \\
\; b\cdot b^{n-1} \quad & {\rm if} \;\; n \;\, {\rm is\ odd}.
\end{cases}$$
We can express this method as a procedure:
::: scheme
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
:::
where the predicate to test whether an integer is even is defined in
terms of the primitive procedure `remainder` by
::: scheme
(define (even? n) (= (remainder n 2) 0))
:::
The process evolved by `fast-expt` grows logarithmically with $n$ in
both space and number of steps. To see this, observe that computing
$b^{2n}$ using `fast-expt` requires only one more multiplication than
computing $b^n$. The size of the exponent we can compute therefore
doubles (approximately) with every new multiplication we are allowed.
Thus, the number of multiplications required for an exponent of $n$
grows about as fast as the logarithm of $n$ to the base 2. The process
has $\Theta(\log n)$ growth.[^37]
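As a worked instance of these two rules (our example), computing $b^{10}$ with `fast-expt` unfolds as
$$b^{10} = (b^5)^2,\quad b^5 = b\cdot b^4,\quad b^4 = (b^2)^2,\quad b^2 = (b^1)^2,\quad b^1 = b\cdot 1,$$
which costs five multiplications, compared with ten for the linear recursive `expt`.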
The difference between $\Theta(\log n)$ growth and $\Theta(n)$ growth
becomes striking as $n$ becomes large. For example, `fast-expt` for $n$
= 1000 requires only 14 multiplications.[^38] It is also possible to use
the idea of successive squaring to devise an iterative algorithm that
computes exponentials with a logarithmic number of steps (see [Exercise
1.16](#Exercise 1.16)), although, as is often the case with iterative
algorithms, this is not written down so straightforwardly as the
recursive algorithm.[^39]
> **[]{#Exercise 1.16 label="Exercise 1.16"}Exercise 1.16:** Design a
> procedure that evolves an iterative exponentiation process that uses
> successive squaring and uses a logarithmic number of steps, as does
> `fast-expt`. (Hint: Using the observation that
> $(b^{n / 2})^2 = (b^2)^{n / 2}$, keep, along with the exponent $n$ and
> the base $b$, an additional state variable $a$, and define the state
> transformation in such a way that the product $ab^n$ is unchanged from
> state to state. At the beginning of the process $a$ is taken to be 1,
> and the answer is given by the value of $a$ at the end of the process.
> In general, the technique of defining an *invariant quantity* that
> remains unchanged from state to state is a powerful way to think about
> the design of iterative algorithms.)
> **[]{#Exercise 1.17 label="Exercise 1.17"}Exercise 1.17:** The
> exponentiation algorithms in this section are based on performing
> exponentiation by means of repeated multiplication. In a similar way,
> one can perform integer multiplication by means of repeated addition.
> The following multiplication procedure (in which it is assumed that
> our language can only add, not multiply) is analogous to the `expt`
> procedure:
>
> ::: scheme
> (define (* a b) (if (= b 0) 0 (+ a (* a (- b 1)))))
> :::
>
> This algorithm takes a number of steps that is linear in `b`. Now
> suppose we include, together with addition, operations `double`, which
> doubles an integer, and `halve`, which divides an (even) integer by 2.
> Using these, design a multiplication procedure analogous to
> `fast-expt` that uses a logarithmic number of steps.
> **[]{#Exercise 1.18 label="Exercise 1.18"}Exercise 1.18:** Using the
> results of [Exercise 1.16](#Exercise 1.16) and [Exercise
> 1.17](#Exercise 1.17), devise a procedure that generates an iterative
> process for multiplying two integers in terms of adding, doubling, and
> halving and uses a logarithmic number of steps.[^40]
> **[]{#Exercise 1.19 label="Exercise 1.19"}Exercise 1.19:** There is a
> clever algorithm for computing the Fibonacci numbers in a logarithmic
> number of steps. Recall the transformation of the state variables $a$
> and $b$ in the `fib-iter` process of [Section 1.2.2](#Section 1.2.2):
> $a \gets a + b$ and $b \gets a$. Call this transformation $T$, and
> observe that applying $T$ over and over again $n$ times, starting with
> 1 and 0, produces the pair Fib($n+1$) and Fib($n$). In other words,
> the Fibonacci numbers are produced by applying $T^n$, the
> $n^{\mathrm{th}}$ power of the transformation $T$, starting with the
> pair (1, 0). Now consider $T$ to be the special case of $p=0$ and
> $q=1$ in a family of transformations $T_{pq}$, where $T_{pq}$
> transforms the pair $(a, b)$ according to $a \gets bq + aq + ap$ and
> $b \gets bp + aq$. Show that if we apply such a transformation
> $T_{pq}$ twice, the effect is the same as using a single
> transformation $T_{p'\!q'}$ of the same form, and compute $p'\!$ and
> $q'\!$ in terms of $p$ and $q$. This gives us an explicit way to
> square these transformations, and thus we can compute $T^n$ using
> successive squaring, as in the `fast-expt` procedure. Put this all
> together to complete the following procedure, which runs in a
> logarithmic number of steps:[^41]
>
> ::: scheme
> (define (fib n) (fib-iter 1 0 0 1 n)) (define (fib-iter a b p q count)
> (cond ((= count 0) b) ((even? count) (fib-iter a b
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ [;
> compute $p'$]{.roman}
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ [;
> compute $q'$]{.roman} (/ count 2))) (else (fib-iter (+ (* b q) (* a
> q) (* a p)) (+ (* b p) (* a q)) p q (- count 1)))))
> :::
### Greatest Common Divisors {#Section 1.2.5}
The greatest common divisor (gcd) of two integers $a$ and
$b$ is defined to be the largest integer that divides both $a$ and $b$
with no remainder. For example, the gcd of 16 and 28 is 4.
In [Chapter 2](#Chapter
2), when we investigate how to implement rational-number arithmetic, we
will need to be able to compute gcds in order to reduce
rational numbers to lowest terms. (To reduce a rational number to lowest
terms, we must divide both the numerator and the denominator by their
gcd. For example, 16/28 reduces to 4/7.) One way to find
the gcd of two integers is to factor them and search for
common factors, but there is a famous algorithm that is much more
efficient.
The idea of the algorithm is based on the observation that, if $r$ is
the remainder when $a$ is divided by $b$, then the common divisors of
$a$ and $b$ are precisely the same as the common divisors of $b$ and
$r$. Thus, we can use the equation
$$\textsc{gcd}(a, b) = \textsc{gcd}(b, r)$$
to successively reduce the problem of computing a gcd to
the problem of computing the gcd of smaller and smaller
pairs of integers. For example,
$$\textsc{gcd}(206, 40) = \textsc{gcd}(40, 6) = \textsc{gcd}(6, 4) = \textsc{gcd}(4, 2) = \textsc{gcd}(2, 0) = 2$$
reduces gcd(206, 40) to gcd(2, 0), which is
2. It is possible to show that starting with any two positive integers
and performing repeated reductions will always eventually produce a pair
where the second number is 0. Then the gcd is the other
number in the pair. This method for computing the gcd is
known as *Euclid's Algorithm*.[^42]
It is easy to express Euclid's Algorithm as a procedure:
::: scheme
(define (gcd a b) (if (= b 0) a (gcd b (remainder a b))))
:::
This generates an iterative process, whose number of steps grows as the
logarithm of the numbers involved.
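For instance, evaluating `(gcd 206 40)` walks through exactly the chain of reductions shown above (our trace, written as a comment):
::: scheme
(gcd 206 40)   ; → (gcd 40 6) → (gcd 6 4) → (gcd 4 2) → (gcd 2 0)
*2*
:::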
The fact that the number of steps required by Euclid's Algorithm has
logarithmic growth bears an interesting relation to the Fibonacci
numbers:
> **Lamé's Theorem:** If Euclid's Algorithm requires $k$ steps to
> compute the gcd of some pair, then the smaller number in
> the pair must be greater than or equal to the $k^{\mathrm{th}}$
> Fibonacci number.[^43]
We can use this theorem to get an order-of-growth estimate for Euclid's
Algorithm. Let $n$ be the smaller of the two inputs to the procedure. If
the process takes $k$ steps, then we must have $n \ge {\rm Fib}(k)
\approx \varphi^k / \sqrt{5}$. Therefore the number of steps $k$ grows
as the logarithm (to the base $\varphi$) of $n$. Hence, the order of
growth is $\Theta(\log n)$.
> **[]{#Exercise 1.20 label="Exercise 1.20"}Exercise 1.20:** The process
> that a procedure generates is of course dependent on the rules used by
> the interpreter. As an example, consider the iterative `gcd` procedure
> given above. Suppose we were to interpret this procedure using
> normal-order evaluation, as discussed in [Section
> 1.1.5](#Section 1.1.5). (The normal-order-evaluation rule for `if` is
> described in [Exercise 1.5](#Exercise 1.5).) Using the substitution
> method (for normal order), illustrate the process generated in
> evaluating `(gcd 206 40)` and indicate the `remainder` operations that
> are actually performed. How many `remainder` operations are actually
> performed in the normal-order evaluation of `(gcd 206 40)`? In the
> applicative-order evaluation?
### Example: Testing for Primality {#Section 1.2.6}
This section describes two methods for checking the primality of an
integer $n$, one with order of growth $\Theta(\sqrt{n})$, and a
"probabilistic" algorithm with order of growth $\Theta(\log n)$. The
exercises at the end of this section suggest programming projects based
on these algorithms.
#### Searching for divisors {#searching-for-divisors .unnumbered}
Since ancient times, mathematicians have been fascinated by problems
concerning prime numbers, and many people have worked on the problem of
determining ways to test if numbers are prime. One way to test if a
number is prime is to find the number's divisors. The following program
finds the smallest integral divisor (greater than 1) of a given number
$n$. It does this in a straightforward way, by testing $n$ for
divisibility by successive integers starting with 2.
::: scheme
(define (smallest-divisor n) (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b) (= (remainder b a) 0))
:::
We can test whether a number is prime as follows: $n$ is prime if and
only if $n$ is its own smallest divisor.
::: scheme
(define (prime? n) (= n (smallest-divisor n)))
:::
The end test for `find-divisor` is based on the fact that if $n$ is not
prime it must have a divisor less than or equal to $\sqrt{n}$.[^44] This
means that the algorithm need only test divisors between 1 and
$\sqrt{n}$. Consequently, the number of steps required to identify $n$
as prime will have order of growth $\Theta(\sqrt{n})$.
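A couple of quick checks (ours; Exercise 1.21 below asks about different inputs):
::: scheme
(smallest-divisor 15)
*3*

(smallest-divisor 91)
*7*
:::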
#### The Fermat test {#the-fermat-test .unnumbered}
The $\Theta(\log n)$ primality test is based on a result from number
theory known as Fermat's Little Theorem.[^45]
> **Fermat's Little Theorem:** If $n$ is a prime number and $a$ is any
> positive integer less than $n$, then $a$ raised to the
> $n^{\mathrm{th}}$ power is congruent to $a$ modulo $n$.
(Two numbers are said to be *congruent modulo* $n$ if they both have the
same remainder when divided by $n$. The remainder of a number $a$ when
divided by $n$ is also referred to as the *remainder of* $a$ *modulo*
$n$, or simply as $a$ *modulo* $n$.)
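A small numeric instance (ours): with $n = 7$ and $a = 3$,
$$3^7 = 2187 = 312\cdot 7 + 3, \qquad \text{so} \qquad 3^7 \equiv 3 \pmod 7,$$
as the theorem requires for the prime 7.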
If $n$ is not prime, then, in general, most of the numbers $a < n$ will
not satisfy the above relation. This leads to the following algorithm
for testing primality: Given a number $n$, pick a random number $a < n$
and compute the remainder of $a^n$ modulo $n$. If the result is not
equal to $a$, then $n$ is certainly not prime. If it is $a$, then
chances are good that $n$ is prime. Now pick another random number $a$
and test it with the same method. If it also satisfies the equation,
then we can be even more confident that $n$ is prime. By trying more and
more values of $a$, we can increase our confidence in the result. This
algorithm is known as the Fermat test.
To implement the Fermat test, we need a procedure that computes the
exponential of a number modulo another number:
::: scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
:::
This is very similar to the `fast-expt` procedure of [Section
1.2.4](#Section 1.2.4). It uses successive squaring, so that the number
of steps grows logarithmically with the exponent.[^46]
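The reduction at every step is justified by a standard fact of modular arithmetic (assumed here rather than proved):
$$(x\cdot y) \bmod m = \big((x \bmod m)\,(y \bmod m)\big) \bmod m,$$
so taking the remainder after each squaring or multiplication leaves the final answer unchanged while keeping every intermediate result smaller than $m^2$.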
The Fermat test is performed by choosing at random a number $a$ between
1 and $n-1$ inclusive and checking whether the remainder modulo $n$ of
the $n^{\mathrm{th}}$ power of $a$ is equal to $a$. The random number
$a$ is chosen using the procedure `random`, which we assume is included
as a primitive in Scheme. `random` returns a nonnegative integer less
than its integer input. Hence, to obtain a random number between 1 and
$n-1$, we call `random` with an input of $n-1$ and add 1 to the result:
::: scheme
(define (fermat-test n) (define (try-it a) (= (expmod a n n) a)) (try-it
(+ 1 (random (- n 1)))))
:::
The following procedure runs the test a given number of times, as
specified by a parameter. Its value is true if the test succeeds every
time, and false otherwise.
::: scheme
(define (fast-prime? n times) (cond ((= times 0) true) ((fermat-test n)
(fast-prime? n (- times 1))) (else false)))
:::
#### Probabilistic methods {#probabilistic-methods .unnumbered}
The Fermat test differs in character from most familiar algorithms, in
which one computes an answer that is guaranteed to be correct. Here, the
answer obtained is only probably correct. More precisely, if $n$ ever
fails the Fermat test, we can be certain that $n$ is not prime. But the
fact that $n$ passes the test, while an extremely strong indication, is
still not a guarantee that $n$ is prime. What we would like to say is
that for any number $n$, if we perform the test enough times and find
that $n$ always passes the test, then the probability of error in our
primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist
numbers that fool the Fermat test: numbers $n$ that are not prime and
yet have the property that $a^n$ is congruent to $a$ modulo $n$ for all
integers $a < n$. Such numbers are extremely rare, so the Fermat test is
quite reliable in practice.[^47]
There are variations of the Fermat test that cannot be fooled. In these
tests, as with the Fermat method, one tests the primality of an integer
$n$ by choosing a random integer $a < n$ and checking some condition
that depends upon $n$ and $a$. (See [Exercise 1.28](#Exercise 1.28) for
an example of such a test.) On the other hand, in contrast to the Fermat
test, one can prove that, for any $n$, the condition does not hold for
most of the integers $a < n$ unless $n$ is prime. Thus, if $n$ passes
the test for some random choice of $a$, the chances are better than even
that $n$ is prime. If $n$ passes the test for two random choices of $a$,
the chances are better than 3 out of 4 that $n$ is prime. By running the
test with more and more randomly chosen values of $a$ we can make the
probability of error as small as we like.
The existence of tests for which one can prove that the chance of error
becomes arbitrarily small has sparked interest in algorithms of this
type, which have come to be known as *probabilistic algorithms*. There
is a great deal of research activity in this area, and probabilistic
algorithms have been fruitfully applied to many fields.[^48]
> **[]{#Exercise 1.21 label="Exercise 1.21"}Exercise 1.21:** Use the
> `smallest-divisor` procedure to find the smallest divisor of each of
> the following numbers: 199, 1999, 19999.
> **[]{#Exercise 1.22 label="Exercise 1.22"}Exercise 1.22:** Most Lisp
> implementations include a primitive called `runtime` that returns an
> integer that specifies the amount of time the system has been running
> (measured, for example, in microseconds). The following
> `timed-prime-test` procedure, when called with an integer $n$, prints
> $n$ and checks to see if $n$ is prime. If $n$ is prime, the procedure
> prints three asterisks followed by the amount of time used in
> performing the test.
>
> ::: scheme
> (define (timed-prime-test n) (newline) (display n) (start-prime-test n
> (runtime))) (define (start-prime-test n start-time) (if (prime? n)
> (report-prime (- (runtime) start-time)))) (define (report-prime
> elapsed-time) (display " *** ") (display elapsed-time))
> :::
>
> Using this procedure, write a procedure `search-for-primes` that
> checks the primality of consecutive odd integers in a specified range.
> Use your procedure to find the three smallest primes larger than 1000;
> larger than 10,000; larger than 100,000; larger than 1,000,000. Note
> the time needed to test each prime. Since the testing algorithm has
> order of growth of $\Theta(\sqrt{n})$, you should expect that testing
> for primes around 10,000 should take about $\sqrt{10}$ times as long
> as testing for primes around 1000. Do your timing data bear this out?
> How well do the data for 100,000 and 1,000,000 support the
> $\Theta(\sqrt{n})$ prediction? Is your result compatible with the
> notion that programs on your machine run in time proportional to the
> number of steps required for the computation?
> **[]{#Exercise 1.23 label="Exercise 1.23"}Exercise 1.23:** The
> `smallest-divisor` procedure shown at the start of this section does
> lots of needless testing: After it checks to see if the number is
> divisible by 2 there is no point in checking to see if it is divisible
> by any larger even numbers. This suggests that the values used for
> `test-divisor` should not be 2, 3, 4, 5, 6, $\dots$, but rather 2, 3,
> 5, 7, 9, $\dots$. To implement this change, define a procedure `next`
> that returns 3 if its input is equal to 2 and otherwise returns its
> input plus 2. Modify the `smallest-divisor` procedure to use
> `(next test-divisor)` instead of `(+ test-divisor 1)`. With
> `timed-prime-test` incorporating this modified version of
> `smallest-divisor`, run the test for each of the 12 primes found in
> [Exercise 1.22](#Exercise 1.22). Since this modification halves the
> number of test steps, you should expect it to run about twice as fast.
> Is this expectation confirmed? If not, what is the observed ratio of
> the speeds of the two algorithms, and how do you explain the fact that
> it is different from 2?
> **[]{#Exercise 1.24 label="Exercise 1.24"}Exercise 1.24:** Modify the
> `timed-prime-test` procedure of [Exercise 1.22](#Exercise 1.22) to use
> `fast-prime?` (the Fermat method), and test each of the 12 primes you
> found in that exercise. Since the Fermat test has $\Theta(\log n)$
> growth, how would you expect the time to test primes near 1,000,000 to
> compare with the time needed to test primes near 1000? Do your data
> bear this out? Can you explain any discrepancy you find?
> **[]{#Exercise 1.25 label="Exercise 1.25"}Exercise 1.25:** Alyssa P.
> Hacker complains that we went to a lot of extra work in writing
> `expmod`. After all, she says, since we already know how to compute
> exponentials, we could have simply written
>
> ::: scheme
> (define (expmod base exp m) (remainder (fast-expt base exp) m))
> :::
>
> Is she correct? Would this procedure serve as well for our fast prime
> tester? Explain.
> **[]{#Exercise 1.26 label="Exercise 1.26"}Exercise 1.26:** Louis
> Reasoner is having great difficulty doing [Exercise
> 1.24](#Exercise 1.24). His `fast-prime?` test seems to run more slowly
> than his `prime?` test. Louis calls his friend Eva Lu Ator over to
> help. When they examine Louis's code, they find that he has rewritten
> the `expmod` procedure to use an explicit multiplication, rather than
> calling `square`:
>
> ::: scheme
> (define (expmod base exp m) (cond ((= exp 0) 1) ((even? exp)
> (remainder (* (expmod base (/ exp 2) m) (expmod base (/ exp 2) m))
> m)) (else (remainder (* base (expmod base (- exp 1) m)) m))))
> :::
>
> "I don't see what difference that could make," says Louis. "I do."
> says Eva. "By writing the procedure like that, you have transformed
> the $\Theta(\log n)$ process into a $\Theta(n)$ process." Explain.
> **[]{#Exercise 1.27 label="Exercise 1.27"}Exercise 1.27:** Demonstrate
> that the Carmichael numbers listed in [Footnote 1.47](#Footnote 1.47)
> really do fool the Fermat test. That is, write a procedure that takes
> an integer $n$ and tests whether $a^n$ is congruent to $a$ modulo $n$
> for every $a < n$, and try your procedure on the given Carmichael
> numbers.
> **[]{#Exercise 1.28 label="Exercise 1.28"}Exercise 1.28:** One variant
> of the Fermat test that cannot be fooled is called the *Miller-Rabin
> test* ([Miller 1976](#Miller 1976); [Rabin 1980](#Rabin 1980)). This
> starts from an alternate form of Fermat's Little Theorem, which states
> that if $n$ is a prime number and $a$ is any positive integer less
> than $n$, then $a$ raised to the ($n-1$)-st power is congruent to 1
> modulo $n$. To test the primality of a number $n$ by the Miller-Rabin
> test, we pick a random number $a < n$ and raise $a$ to the ($n-1$)-st
> power modulo $n$ using the `expmod` procedure. However, whenever we
> perform the squaring step in `expmod`, we check to see if we have
> discovered a "nontrivial square root of 1 modulo $n$," that is, a
> number not equal to 1 or $n-1$ whose square is equal to 1 modulo $n$.
> It is possible to prove that if such a nontrivial square root of 1
> exists, then $n$ is not prime. It is also possible to prove that if
> $n$ is an odd number that is not prime, then, for at least half the
> numbers $a < n$, computing $a^{n-1}$ in this way will reveal a
> nontrivial square root of 1 modulo $n$. (This is why the Miller-Rabin
> test cannot be fooled.) Modify the `expmod` procedure to signal if it
> discovers a nontrivial square root of 1, and use this to implement the
> Miller-Rabin test with a procedure analogous to `fermat-test`. Check
> your procedure by testing various known primes and non-primes. Hint:
> One convenient way to make `expmod` signal is to have it return 0.
## Formulating Abstractions with Higher-Order Procedures {#Section 1.3}
We have seen that procedures are, in effect, abstractions that describe
compound operations on numbers independent of the particular numbers.
For example, when we
::: scheme
(define (cube x) (* x x x))
:::
we are not talking about the cube of a particular number, but rather
about a method for obtaining the cube of any number. Of course we could
get along without ever defining this procedure, by always writing
expressions such as
::: scheme
(* 3 3 3) (* x x x) (* y y y)
:::
and never mentioning `cube` explicitly. This would place us at a serious
disadvantage, forcing us to work always at the level of the particular
operations that happen to be primitives in the language (multiplication,
in this case) rather than in terms of higher-level operations. Our
programs would be able to compute cubes, but our language would lack the
ability to express the concept of cubing. One of the things we should
demand from a powerful programming language is the ability to build
abstractions by assigning names to common patterns and then to work in
terms of the abstractions directly. Procedures provide this ability.
This is why all but the most primitive programming languages include
mechanisms for defining procedures.
Yet even in numerical processing we will be severely limited in our
ability to create abstractions if we are restricted to procedures whose
parameters must be numbers. Often the same programming pattern will be
used with a number of different procedures. To express such patterns as
concepts, we will need to construct procedures that can accept
procedures as arguments or return procedures as values. Procedures that
manipulate procedures are called *higher-order procedures*. This section
shows how higher-order procedures can serve as powerful abstraction
mechanisms, vastly increasing the expressive power of our language.
### Procedures as Arguments {#Section 1.3.1}
Consider the following three procedures. The first computes the sum of
the integers from `a` through `b`:
::: scheme
(define (sum-integers a b)
  (if (> a b)
      0
      (+ a (sum-integers (+ a 1) b))))
:::
The second computes the sum of the cubes of the integers in the given
range:
::: scheme
(define (sum-cubes a b)
  (if (> a b)
      0
      (+ (cube a) (sum-cubes (+ a 1) b))))
:::
The third computes the sum of a sequence of terms in the series
$${1\over1\cdot 3} + {1\over5\cdot 7} + {1\over9\cdot 11} + \dots,$$
which converges to $\pi / 8$ (very slowly):[^49]
::: scheme
(define (pi-sum a b)
  (if (> a b)
      0
      (+ (/ 1.0 (* a (+ a 2))) (pi-sum (+ a 4) b))))
:::
These three procedures clearly share a common underlying pattern. They
are for the most part identical, differing only in the name of the
procedure, the function of `a` used to compute the term to be added, and
the function that provides the next value of `a`. We could generate each
of the procedures by filling in slots in the same template:
::: scheme
(define
( $\color{SchemeDark}\langle$ *name* $\color{SchemeDark}\rangle$ a b)
(if (> a b) 0 (+
( $\color{SchemeDark}\langle$ *term* $\color{SchemeDark}\rangle$ a)
( $\color{SchemeDark}\langle$ *name* $\color{SchemeDark}\rangle$
( $\color{SchemeDark}\langle$ *next* $\color{SchemeDark}\rangle$ a)
b))))
:::
The presence of such a common pattern is strong evidence that there is a
useful abstraction waiting to be brought to the surface. Indeed,
mathematicians long ago identified the abstraction of *summation of a
series* and invented "sigma notation," for example
$$\sum\limits_{n=a}^b f(n) = f(a) + \dots + f(b),$$
to express this concept. The power of sigma notation is that it allows
mathematicians to deal with the concept of summation itself rather than
only with particular sums---for example, to formulate general results
about sums that are independent of the particular series being summed.
Similarly, as program designers, we would like our language to be
powerful enough so that we can write a procedure that expresses the
concept of summation itself rather than only procedures that compute
particular sums. We can do so readily in our procedural language by
taking the common template shown above and transforming the "slots" into
formal parameters:
::: scheme
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a) (sum term (next a) next b))))
:::
Notice that `sum` takes as its arguments the lower and upper bounds `a`
and `b` together with the procedures `term` and `next`. We can use `sum`
just as we would any procedure. For example, we can use it (along with a
procedure `inc` that increments its argument by 1) to define
`sum-cubes`:
::: scheme
(define (inc n) (+ n 1)) (define (sum-cubes a b) (sum cube a inc b))
:::
Using this, we can compute the sum of the cubes of the integers from 1
to 10:
::: scheme
(sum-cubes 1 10) *3025*
:::
With the aid of an identity procedure to compute the term, we can define
`sum-integers` in terms of `sum`:
::: scheme
(define (identity x) x) (define (sum-integers a b) (sum identity a inc
b))
:::
Then we can add up the integers from 1 to 10:
::: scheme
(sum-integers 1 10) *55*
:::
We can also define `pi-sum` in the same way:[^50]
::: scheme
(define (pi-sum a b)
  (define (pi-term x) (/ 1.0 (* x (+ x 2))))
  (define (pi-next x) (+ x 4))
  (sum pi-term a pi-next b))
:::
Using these procedures, we can compute an approximation to $\pi$:
::: scheme
(* 8 (pi-sum 1 1000))
*3.139592655589783*
:::
Once we have `sum`, we can use it as a building block in formulating
further concepts. For instance, the definite integral of a function $f$
between the limits $a$ and $b$ can be approximated numerically using the
formula
$${\int_a^b \!\!\! f} = {\left[\;f\! \left(a + {dx \over 2}\right)
+ f\! \left(a + dx + {dx \over 2}\right)
+ f\! \left(a + 2dx + {dx \over 2}\right) + \,\dots \;\right]\! dx}$$
for small values of $dx$. We can express this directly as a procedure:
::: scheme
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b) dx))

(integral cube 0 1 0.01)
*.24998750000000042*
(integral cube 0 1 0.001)
*.249999875000001*
:::
(The exact value of the integral of `cube` between 0 and 1 is 1/4.)
> **[]{#Exercise 1.29 label="Exercise 1.29"}Exercise 1.29:** Simpson's
> Rule is a more accurate method of numerical integration than the
> method illustrated above. Using Simpson's Rule, the integral of a
> function $f$ between $a$ and $b$ is approximated as
>
> $${h\over 3}(y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \dots + 2y_{n-2} + 4y_{n-1} + y_n),$$
>
> where $h = (b - a) / n$, for some even integer $n$, and
> $y_k = f(a + kh)$. (Increasing $n$ increases the accuracy of the
> approximation.) Define a procedure that takes as arguments $f$, $a$,
> $b$, and $n$ and returns the value of the integral, computed using
> Simpson's Rule. Use your procedure to integrate `cube` between 0 and 1
> (with $n = 100$ and $n = 1000$), and compare the results to those of
> the `integral` procedure shown above.
> **[]{#Exercise 1.30 label="Exercise 1.30"}Exercise 1.30:** The `sum`
> procedure above generates a linear recursion. The procedure can be
> rewritten so that the sum is performed iteratively. Show how to do
> this by filling in the missing expressions in the following
> definition:
>
> ::: scheme
> (define (sum term a next b) (define (iter a result) (if
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ (iter
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))) (iter
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))
> :::
> **[]{#Exercise 1.31 label="Exercise 1.31"}Exercise 1.31:**
>
> a. The `sum` procedure is only the simplest of a vast number of
> similar abstractions that can be captured as higher-order
> procedures.[^51] Write an analogous procedure called `product`
> that returns the product of the values of a function at points
> over a given range. Show how to define `factorial` in terms of
> `product`. Also use `product` to compute approximations to $\pi$
> using the formula[^52]
>
> $${\pi\over 4} = {2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdots\over
> 3\cdot 3\cdot 5\cdot 5\cdot 7\cdot 7\cdots}\,.$$
>
> b. If your `product` procedure generates a recursive process, write
> one that generates an iterative process. If it generates an
> iterative process, write one that generates a recursive process.
> **[]{#Exercise 1.32 label="Exercise 1.32"}Exercise 1.32:**
>
> a. Show that `sum` and `product` ([Exercise 1.31](#Exercise 1.31))
> are both special cases of a still more general notion called
> `accumulate` that combines a collection of terms, using some
> general accumulation function:
>
> ::: scheme
> (accumulate combiner null-value term a next b)
> :::
>
> `accumulate` takes as arguments the same term and range
> specifications as `sum` and `product`, together with a `combiner`
> procedure (of two arguments) that specifies how the current term
> is to be combined with the accumulation of the preceding terms and
> a `null-value` that specifies what base value to use when the
> terms run out. Write `accumulate` and show how `sum` and `product`
> can both be defined as simple calls to `accumulate`.
>
> b. If your `accumulate` procedure generates a recursive process,
> write one that generates an iterative process. If it generates an
> iterative process, write one that generates a recursive process.
> **[]{#Exercise 1.33 label="Exercise 1.33"}Exercise 1.33:** You can
> obtain an even more general version of `accumulate` ([Exercise
> 1.32](#Exercise 1.32)) by introducing the notion of a *filter* on the
> terms to be combined. That is, combine only those terms derived from
> values in the range that satisfy a specified condition. The resulting
> `filtered-accumulate` abstraction takes the same arguments as
> accumulate, together with an additional predicate of one argument that
> specifies the filter. Write `filtered-accumulate` as a procedure. Show
> how to express the following using `filtered-accumulate`:
>
> a. the sum of the squares of the prime numbers in the interval $a$ to
> $b$ (assuming that you have a `prime?` predicate already written)
>
> b. the product of all the positive integers less than $n$ that are
> relatively prime to $n$ (i.e., all positive integers $i < n$ such
> that $\textsc{gcd}(i, n) = 1$).
### Constructing Procedures Using `lambda` {#Section 1.3.2}
In using `sum` as in [Section 1.3.1](#Section 1.3.1), it seems terribly
awkward to have to define trivial procedures such as `pi-term` and
`pi-next` just so we can use them as arguments to our higher-order
procedure. Rather than define `pi-next` and `pi-term`, it would be more
convenient to have a way to directly specify "the procedure that returns
its input incremented by 4" and "the procedure that returns the
reciprocal of its input times its input plus 2." We can do this by
introducing the special form `lambda`, which creates procedures. Using
`lambda` we can describe what we want as
::: scheme
(lambda (x) (+ x 4))
:::
and
::: scheme
(lambda (x) (/ 1.0 (* x (+ x 2))))
:::
Then our `pi-sum` procedure can be expressed without defining any
auxiliary procedures as
::: scheme
(define (pi-sum a b)
  (sum (lambda (x) (/ 1.0 (* x (+ x 2))))
       a
       (lambda (x) (+ x 4)) b))
:::
Again using `lambda`, we can write the `integral` procedure without
having to define the auxiliary procedure `add-dx`:
::: scheme
(define (integral f a b dx)
  (* (sum f (+ a (/ dx 2.0))
          (lambda (x) (+ x dx)) b)
     dx))
:::
In general, `lambda` is used to create procedures in the same way as
`define`, except that no name is specified for the procedure:
::: scheme
(lambda
( $\color{SchemeDark}\langle$ *formal-parameters* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
:::
The resulting procedure is just as much a procedure as one that is
created using `define`. The only difference is that it has not been
associated with any name in the environment. In fact,
::: scheme
(define (plus4 x) (+ x 4))
:::
is equivalent to
::: scheme
(define plus4 (lambda (x) (+ x 4)))
:::
We can read a `lambda` expression as follows:
::: scheme
(lambda             (x)             (+    x     4))
    |                |               |    |     |
the procedure   of an argument x   that adds x and 4
:::
Like any expression that has a procedure as its value, a `lambda`
expression can be used as the operator in a combination such as
::: scheme
((lambda (x y z) (+ x y (square z))) 1 2 3) *12*
:::
or, more generally, in any context where we would normally use a
procedure name.[^53]
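For example (our illustration), a `lambda` expression can be handed directly to the `sum` procedure of [Section 1.3.1](#Section 1.3.1) to add the squares of the integers from 1 to 10:
::: scheme
(sum (lambda (x) (* x x)) 1 inc 10)
*385*
:::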
#### Using `let` to create local variables {#using-let-to-create-local-variables .unnumbered}
Another use of `lambda` is in creating local variables. We often need
local variables in our procedures other than those that have been bound
as formal parameters. For example, suppose we wish to compute the
function
$$f(x,y) = x(1 + xy)^2 + y(1 - y) + (1 + xy)(1 - y),$$
which we could also express as
$$\begin{array}{r@{{}={}}l}
a & 1 + xy, \\
b & 1 - y, \\
f(x,y) & xa^2 + yb + ab.
\end{array}$$
In writing a procedure to compute $f$, we would like to include as local
variables not only $x$ and $y$ but also the names of intermediate
quantities like $a$ and $b$. One way to accomplish this is to use an
auxiliary procedure to bind the local variables:
::: scheme
(define (f x y)
  (define (f-helper a b)
    (+ (* x (square a)) (* y b) (* a b)))
  (f-helper (+ 1 (* x y)) (- 1 y)))
:::
Of course, we could use a `lambda` expression to specify an anonymous
procedure for binding our local variables. The body of `f` then becomes
a single call to that procedure:
::: scheme
(define (f x y)
  ((lambda (a b)
     (+ (* x (square a)) (* y b) (* a b)))
   (+ 1 (* x y)) (- 1 y)))
:::
This construct is so useful that there is a special form called `let` to
make its use more convenient. Using `let`, the `f` procedure could be
written as
::: scheme
(define (f x y)
  (let ((a (+ 1 (* x y)))
        (b (- 1 y)))
    (+ (* x (square a)) (* y b) (* a b))))
:::
The general form of a `let` expression is
::: scheme
(let
(( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
$\dots$
( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ ))
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
:::
which can be thought of as saying
::: scheme
let
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
have the value
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
and
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
have the value
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
and $\dots$
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
have the value
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
in $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$
:::
The first part of the `let` expression is a list of name-expression
pairs. When the `let` is evaluated, each name is associated with the
value of the corresponding expression. The body of the `let` is
evaluated with these names bound as local variables. The way this
happens is that the `let` expression is interpreted as an alternate
syntax for
::: scheme
((lambda
( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
:::
No new mechanism is required in the interpreter in order to provide
local variables. A `let` expression is simply syntactic sugar for the
underlying `lambda` application.
We can see from this equivalence that the scope of a variable specified
by a `let` expression is the body of the `let`. This implies that:
- `let` allows one to bind variables as locally as possible to where
they are to be used. For example, if the value of `x` is 5, the
value of the expression
::: scheme
(+ (let ((x 3)) (+ x (* x 10))) x)
:::
is 38. Here, the `x` in the body of the `let` is 3, so the value of
the `let` expression is 33. On the other hand, the `x` that is the
second argument to the outermost `+` is still 5.
- The variables' values are computed outside the `let`. This matters
when the expressions that provide the values for the local variables
depend upon variables having the same names as the local variables
themselves. For example, if the value of `x` is 2, the expression
::: scheme
(let ((x 3) (y (+ x 2))) (* x y))
:::
will have the value 12 because, inside the body of the `let`, `x`
will be 3 and `y` will be 4 (which is the outer `x` plus 2), as the
desugaring sketched below makes explicit.
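Because a `let` is just the `lambda` application shown above, the second example desugars (our expansion) to
::: scheme
((lambda (x y) (* x y))
 3          ; the value for the inner x
 (+ x 2))   ; evaluated outside the let, so y becomes 2 + 2 = 4
:::
which makes it clear why the result is 12.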
Sometimes we can use internal definitions to get the same effect as with
`let`. For example, we could have defined the procedure `f` above as
::: scheme
(define (f x y)
  (define a (+ 1 (* x y)))
  (define b (- 1 y))
  (+ (* x (square a)) (* y b) (* a b)))
:::
We prefer, however, to use `let` in situations like this and to use
internal `define` only for internal procedures.[^54]
> **[]{#Exercise 1.34 label="Exercise 1.34"}Exercise 1.34:** Suppose we
> define the procedure
>
> ::: scheme
> (define (f g) (g 2))
> :::
>
> Then we have
>
> ::: scheme
> (f square) *4* (f (lambda (z) (* z (+ z 1)))) *6*
> :::
>
> What happens if we (perversely) ask the interpreter to evaluate the
> combination `(f f)`? Explain.
### Procedures as General Methods {#Section 1.3.3}
We introduced compound procedures in [Section 1.1.4](#Section 1.1.4) as
a mechanism for abstracting patterns of numerical operations so as to
make them independent of the particular numbers involved. With
higher-order procedures, such as the `integral` procedure of [Section
1.3.1](#Section 1.3.1), we began to see a more powerful kind of
abstraction: procedures used to express general methods of computation,
independent of the particular functions involved. In this section we
discuss two more elaborate examples---general methods for finding zeros
and fixed points of functions---and show how these methods can be
expressed directly as procedures.
#### Finding roots of equations by the half-interval method {#finding-roots-of-equations-by-the-half-interval-method .unnumbered}
The *half-interval method* is a simple but powerful technique for
finding roots of an equation $f(x) = 0$, where $f$ is a continuous
function. The idea is that, if we are given points $a$ and $b$ such that
$f(a) < 0 < f(b)$, then $f$ must have at least one zero between $a$ and
$b$. To locate a zero, let $x$ be the average of $a$ and $b$, and
compute $f(x)$. If $f(x) > 0$, then $f$ must have a zero between $a$ and
$x$. If $f(x) < 0$, then $f$ must have a zero between $x$ and $b$.
Continuing in this way, we can identify smaller and smaller intervals on
which $f$ must have a zero. When we reach a point where the interval is
small enough, the process stops. Since the interval of uncertainty is
reduced by half at each step of the process, the number of steps
required grows as $\Theta(\log(L / T))$, where $L$ is the length of the
original interval and $T$ is the error tolerance (that is, the size of
the interval we will consider "small enough"). Here is a procedure that
implements this strategy:
::: scheme
(define (search f neg-point pos-point)
  (let ((midpoint (average neg-point pos-point)))
    (if (close-enough? neg-point pos-point)
        midpoint
        (let ((test-value (f midpoint)))
          (cond ((positive? test-value)
                 (search f neg-point midpoint))
                ((negative? test-value)
                 (search f midpoint pos-point))
                (else midpoint))))))
:::
We assume that we are initially given the function $f$ together with
points at which its values are negative and positive. We first compute
the midpoint of the two given points. Next we check to see if the given
interval is small enough, and if so we simply return the midpoint as our
answer. Otherwise, we compute as a test value the value of $f$ at the
midpoint. If the test value is positive, then we continue the process
with a new interval running from the original negative point to the
midpoint. If the test value is negative, we continue with the interval
from the midpoint to the positive point. Finally, there is the
possibility that the test value is 0, in which case the midpoint is
itself the root we are searching for.
To test whether the endpoints are "close enough" we can use a procedure
similar to the one used in [Section 1.1.7](#Section 1.1.7) for computing
square roots:[^55]
::: scheme
(define (close-enough? x y) (< (abs (- x y)) 0.001))
:::
`search` is awkward to use directly, because we can accidentally give it
points at which $f$'s values do not have the required sign, in which
case we get a wrong answer. Instead we will use `search` via the
following procedure, which checks to see which of the endpoints has a
negative function value and which has a positive value, and calls the
`search` procedure accordingly. If the function has the same sign on the
two given points, the half-interval method cannot be used, in which case
the procedure signals an error.[^56]
::: scheme
(define (half-interval-method f a b)
  (let ((a-value (f a)) (b-value (f b)))
    (cond ((and (negative? a-value) (positive? b-value))
           (search f a b))
          ((and (negative? b-value) (positive? a-value))
           (search f b a))
          (else (error "Values are not of opposite sign" a b)))))
:::
The following example uses the half-interval method to approximate $\pi$
as the root between 2 and 4 of $\sin x = 0$:
::: scheme
(half-interval-method sin 2.0 4.0) *3.14111328125*
:::
Here is another example, using the half-interval method to search for a
root of the equation $x^3 - 2x - 3 = 0$ between 1 and 2:
::: scheme
(half-interval-method (lambda (x) (- (* x x x) (* 2 x) 3)) 1.0 2.0)
*1.89306640625*
:::
#### Finding fixed points of functions {#finding-fixed-points-of-functions .unnumbered}
A number $x$ is called a *fixed point* of a function $f$ if $x$
satisfies the equation $f(x) = x$. For some functions $f$ we can locate
a fixed point by beginning with an initial guess and applying $f$
repeatedly,
$$f(x),\quad f(f(x)),\quad f(f(f(x))), \quad\dots,$$
until the value does not change very much. Using this idea, we can
devise a procedure `fixed-point` that takes as inputs a function and an
initial guess and produces an approximation to a fixed point of the
function. We apply the function repeatedly until we find two successive
values whose difference is less than some prescribed tolerance:
::: scheme
(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next) next (try next))))
  (try first-guess))
:::
For example, we can use this method to approximate the fixed point of
the cosine function, starting with 1 as an initial approximation:[^57]
::: scheme
(fixed-point cos 1.0) *.7390822985224023*
:::
Similarly, we can find a solution to the equation $y = \sin y + \cos y$:
::: scheme
(fixed-point (lambda (y) (+ (sin y) (cos y))) 1.0)
*1.2587315962971173*
:::
The fixed-point process is reminiscent of the process we used for
finding square roots in [Section 1.1.7](#Section 1.1.7). Both are based
on the idea of repeatedly improving a guess until the result satisfies
some criterion. In fact, we can readily formulate the square-root
computation as a fixed-point search. Computing the square root of some
number $x$ requires finding a $y$ such that $y^2 = x$. Putting this
equation into the equivalent form $y = x / y$, we recognize that we are
looking for a fixed point of the function[^58] $y \mapsto x / y$, and we
can therefore try to compute square roots as
::: scheme
(define (sqrt x) (fixed-point (lambda (y) (/ x y)) 1.0))
:::
Unfortunately, this fixed-point search does not converge. Consider an
initial guess $y_1$. The next guess is $y_2 = x / y_1$ and the next
guess is $y_3 = x / y_2 = x / (x / y_1) = y_1$. This results in an
infinite loop in which the two guesses $y_1$ and $y_2$ repeat over and
over, oscillating about the answer.
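To see the oscillation concretely, here is a small tracing sketch of our own (the name `oscillate-demo` and the iteration count are purely illustrative, not part of the text's procedures); it repeatedly applies $y \mapsto x / y$ and displays the successive guesses:
::: scheme
;; Illustrative sketch: apply y -> x/y a fixed number of times,
;; displaying each guess along the way.
(define (oscillate-demo x n)
  (define (iter guess count)
    (display guess)
    (newline)
    (if (= count 0)
        'done
        (iter (/ x guess) (- count 1))))
  (iter 1.0 n))

;; (oscillate-demo 2.0 5) displays 1.0, 2.0, 1.0, 2.0, ... -- the guesses
;; alternate between y1 and x/y1 and never approach the square root.
:::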
One way to control such oscillations is to prevent the guesses from
changing so much. Since the answer is always between our guess $y$ and
$x / y$, we can make a new guess that is not as far from $y$ as $x / y$
by averaging $y$ with $x / y$, so that the next guess after $y$ is
${1\over2}(y + x / y)$ instead of $x / y$. The process of making such a
sequence of guesses is simply the process of looking for a fixed point
of $y \mapsto {1\over2}(y + x / y)$:
::: scheme
(define (sqrt x) (fixed-point (lambda (y) (average y (/ x y))) 1.0))
:::
(Note that $y = {1\over2}(y + x / y)$ is a simple transformation of the
equation $y = x / y;$ to derive it, add $y$ to both sides of the
equation and divide by 2.)
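Written out, the derivation is simply
$$y = {x \over y}
\;\;\Rightarrow\;\;
2y = y + {x \over y}
\;\;\Rightarrow\;\;
y = {1\over2}\Big(y + {x \over y}\Big).$$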
With this modification, the square-root procedure works. In fact, if we
unravel the definitions, we can see that the sequence of approximations
to the square root generated here is precisely the same as the one
generated by our original square-root procedure of [Section
1.1.7](#Section 1.1.7). This approach of averaging successive
approximations to a solution, a technique that we call *average
damping*, often aids the convergence of fixed-point searches.
> **[]{#Exercise 1.35 label="Exercise 1.35"}Exercise 1.35:** Show that
> the golden ratio $\varphi$ ([Section 1.2.2](#Section 1.2.2)) is a
> fixed point of the transformation $x \mapsto 1 + 1 / x$, and use this
> fact to compute $\varphi$ by means of the `fixed/point` procedure.
> **[]{#Exercise 1.36 label="Exercise 1.36"}Exercise 1.36:** Modify
> `fixed/point` so that it prints the sequence of approximations it
> generates, using the `newline` and `display` primitives shown in
> [Exercise 1.22](#Exercise 1.22). Then find a solution to $x^x = 1000$
> by finding a fixed point of $x \mapsto
> \log(1000) / \log(x)$. (Use Scheme's primitive `log` procedure, which
> computes natural logarithms.) Compare the number of steps this takes
> with and without average damping. (Note that you cannot start
> `fixed/point` with a guess of 1, as this would cause division by
> $\log(1) = 0$.)
> **[]{#Exercise 1.37 label="Exercise 1.37"}Exercise 1.37:**
>
> a. An infinite *continued fraction* is an expression of the form
>
> $${f} = \cfrac{N_1}{D_1 + \cfrac{N_2}{D_2 + \cfrac{N_3}{D_3 + \dots}}}\,.$$
>
> As an example, one can show that the infinite continued fraction
> expansion with the $N_i$ and the $D_i$ all equal to 1 produces
> $1 / \varphi$, where $\varphi$ is the golden ratio (described in
> [Section 1.2.2](#Section 1.2.2)). One way to approximate an
> infinite continued fraction is to truncate the expansion after a
>     given number of terms. Such a truncation---a so-called *k-term
>     finite continued fraction*---has the form
>
> $$\cfrac{N_1}{D_1 + \cfrac{N_2}{\ddots + \cfrac{N_k}{D_k}}}\,.$$
>
> Suppose that `n` and `d` are procedures of one argument (the term
> index $i$) that return the $N_i$ and $D_i$ of the terms of the
> continued fraction. Define a procedure `cont/frac` such that
> evaluating `(cont/frac n d k)` computes the value of the $k$-term
> finite continued fraction. Check your procedure by approximating
> $1 / \varphi$ using
>
> ::: scheme
> (cont-frac (lambda (i) 1.0) (lambda (i) 1.0) k)
> :::
>
> for successive values of `k`. How large must you make `k` in order
> to get an approximation that is accurate to 4 decimal places?
>
> b. If your `cont/frac` procedure generates a recursive process, write
> one that generates an iterative process. If it generates an
> iterative process, write one that generates a recursive process.
> **[]{#Exercise 1.38 label="Exercise 1.38"}Exercise 1.38:** In 1737,
> the Swiss mathematician Leonhard Euler published a memoir *De
> Fractionibus Continuis*, which included a continued fraction expansion
> for $e - 2$, where $e$ is the base of the natural logarithms. In this
> fraction, the $N_i$ are all 1, and the $D_i$ are successively 1, 2, 1,
> 1, 4, 1, 1, 6, 1, 1, 8, $\dots$. Write a program that uses your
> `cont/frac` procedure from [Exercise 1.37](#Exercise 1.37) to
> approximate $e$, based on Euler's expansion.
> **[]{#Exercise 1.39 label="Exercise 1.39"}Exercise 1.39:** A continued
> fraction representation of the tangent function was published in 1770
> by the German mathematician J.H. Lambert:
>
> $${\tan x} = \cfrac{x}{1 - \cfrac{x^2}{3 - \cfrac{x^2}{5 - \dots}}}\,,$$
>
> where $x$ is in radians. Define a procedure `(tan/cf x k)` that
> computes an approximation to the tangent function based on Lambert's
> formula. `k` specifies the number of terms to compute, as in [Exercise
> 1.37](#Exercise 1.37).
### Procedures as Returned Values {#Section 1.3.4}
The above examples demonstrate how the ability to pass procedures as
arguments significantly enhances the expressive power of our programming
language. We can achieve even more expressive power by creating
procedures whose returned values are themselves procedures.
We can illustrate this idea by looking again at the fixed-point example
described at the end of [Section 1.3.3](#Section 1.3.3). We formulated a
new version of the square-root procedure as a fixed-point search,
starting with the observation that $\sqrt{x}$ is a fixed-point of the
function $y \mapsto
x / y$. Then we used average damping to make the approximations
converge. Average damping is a useful general technique in itself.
Namely, given a function $f$, we consider the function whose value at
$x$ is equal to the average of $x$ and $f(x)$.
We can express the idea of average damping by means of the following
procedure:
::: scheme
(define (average-damp f) (lambda (x) (average x (f x))))
:::
`average/damp` is a procedure that takes as its argument a procedure `f`
and returns as its value a procedure (produced by the `lambda`) that,
when applied to a number `x`, produces the average of `x` and `(f x)`.
For example, applying `average/damp` to the `square` procedure produces
a procedure whose value at some number $x$ is the average of $x$ and
$x^2$. Applying this resulting procedure to 10 returns the average of 10
and 100, or 55:[^59]
::: scheme
((average-damp square) 10) *55*
:::
Using `average/damp`, we can reformulate the square-root procedure as
follows:
::: scheme
(define (sqrt x) (fixed-point (average-damp (lambda (y) (/ x y))) 1.0))
:::
Notice how this formulation makes explicit the three ideas in the
method: fixed-point search, average damping, and the function
$y \mapsto x / y$. It is instructive to compare this formulation of the
square-root method with the original version given in [Section
1.1.7](#Section 1.1.7). Bear in mind that these procedures express the
same process, and notice how much clearer the idea becomes when we
express the process in terms of these abstractions. In general, there
are many ways to formulate a process as a procedure. Experienced
programmers know how to choose procedural formulations that are
particularly perspicuous, and where useful elements of the process are
exposed as separate entities that can be reused in other applications.
As a simple example of reuse, notice that the cube root of $x$ is a
fixed point of the function $y \mapsto x / y^2$, so we can immediately
generalize our square-root procedure to one that extracts cube
roots:[^60]
::: scheme
(define (cube-root x)
  (fixed-point (average-damp (lambda (y) (/ x (square y))))
               1.0))
:::
#### Newton's method {#newtons-method .unnumbered}
When we first introduced the square-root procedure, in [Section
1.1.7](#Section 1.1.7), we mentioned that this was a special case of
*Newton's method*. If $x
\mapsto g(x)$ is a differentiable function, then a solution of the
equation $g(x) = 0$ is a fixed point of the function $x \mapsto f(x)$,
where
$${f(x) = x} - {g(x)\over Dg(x)}$$
and $Dg(x)$ is the derivative of $g$ evaluated at $x$. Newton's method
is the use of the fixed-point method we saw above to approximate a
solution of the equation by finding a fixed point of the function
$f\!.$[^61]
For many functions $g$ and for sufficiently good initial guesses for
$x$, Newton's method converges very rapidly to a solution of
$g(x) = 0.$[^62]
In order to implement Newton's method as a procedure, we must first
express the idea of derivative. Note that "derivative," like average
damping, is something that transforms a function into another function.
For instance, the derivative of the function $x \mapsto x^3$ is the
function $x \mapsto 3x^2\!.$ In general, if $g$ is a function and $dx$
is a small number, then the derivative $Dg$ of $g$ is the function whose
value at any number $x$ is given (in the limit of small $dx$) by
$${Dg(x)} = {g(x + {\it dx}) - g(x) \over {\it dx}}\,.$$
Thus, we can express the idea of derivative (taking $dx$ to be, say,
0.00001) as the procedure
::: scheme
(define (deriv g) (lambda (x) (/ (- (g (+ x dx)) (g x)) dx)))
:::
along with the definition
::: scheme
(define dx 0.00001)
:::
Like `average/damp`, `deriv` is a procedure that takes a procedure as
argument and returns a procedure as value. For example, to approximate
the derivative of $x \mapsto x^3$ at 5 (whose exact value is 75) we can
evaluate
::: scheme
(define (cube x) (* x x x))

((deriv cube) 5)
*75.00014999664018*
:::
With the aid of `deriv`, we can express Newton's method as a fixed-point
process:
::: scheme
(define (newton-transform g)
  (lambda (x) (- x (/ (g x) ((deriv g) x)))))

(define (newtons-method g guess)
  (fixed-point (newton-transform g) guess))
:::
The `newton/transform` procedure expresses the formula at the beginning
of this section, and `newtons/method` is readily defined in terms of
this. It takes as arguments a procedure that computes the function for
which we want to find a zero, together with an initial guess. For
instance, to find the square root of $x$, we can use Newton's method to
find a zero of the function $y \mapsto y^2 - x$ starting with an initial
guess of 1.[^63]
This provides yet another form of the square-root procedure:
::: scheme
(define (sqrt x) (newtons-method (lambda (y) (- (square y) x)) 1.0))
:::
#### Abstractions and first-class procedures {#abstractions-and-first-class-procedures .unnumbered}
We've seen two ways to express the square-root computation as an
instance of a more general method, once as a fixed-point search and once
using Newton's method. Since Newton's method was itself expressed as a
fixed-point process, we actually saw two ways to compute square roots as
fixed points. Each method begins with a function and finds a fixed point
of some transformation of the function. We can express this general idea
itself as a procedure:
::: scheme
(define (fixed-point-of-transform g transform guess)
  (fixed-point (transform g) guess))
:::
This very general procedure takes as its arguments a procedure `g` that
computes some function, a procedure that transforms `g`, and an initial
guess. The returned result is a fixed point of the transformed function.
Using this abstraction, we can recast the first square-root computation
from this section (where we look for a fixed point of the average-damped
version of $y \mapsto x / y$) as an instance of this general method:
::: scheme
(define (sqrt x)
  (fixed-point-of-transform
   (lambda (y) (/ x y)) average-damp 1.0))
:::
Similarly, we can express the second square-root computation from this
section (an instance of Newton's method that finds a fixed point of the
Newton transform of $y \mapsto y^2 - x$) as
::: scheme
(define (sqrt x)
  (fixed-point-of-transform
   (lambda (y) (- (square y) x)) newton-transform 1.0))
:::
We began [Section 1.3](#Section 1.3) with the observation that compound
procedures are a crucial abstraction mechanism, because they permit us
to express general methods of computing as explicit elements in our
programming language. Now we've seen how higher-order procedures permit
us to manipulate these general methods to create further abstractions.
As programmers, we should be alert to opportunities to identify the
underlying abstractions in our programs and to build upon them and
generalize them to create more powerful abstractions. This is not to say
that one should always write programs in the most abstract way possible;
expert programmers know how to choose the level of abstraction
appropriate to their task. But it is important to be able to think in
terms of these abstractions, so that we can be ready to apply them in
new contexts. The significance of higher-order procedures is that they
enable us to represent these abstractions explicitly as elements in our
programming language, so that they can be handled just like other
computational elements.
In general, programming languages impose restrictions on the ways in
which computational elements can be manipulated. Elements with the
fewest restrictions are said to have *first-class* status. Some of the
"rights and privileges" of first-class elements are:[^64]
- They may be named by variables.
- They may be passed as arguments to procedures.
- They may be returned as the results of procedures.
- They may be included in data structures.[^65]
Lisp, unlike other common programming languages, awards procedures full
first-class status. This poses challenges for efficient implementation,
but the resulting gain in expressive power is enormous.[^66]
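As a brief recap in terms of the procedures of this section, the first three of these rights have already been exercised by `average/damp`: it is named by a variable, it takes the procedure `square` as an argument, and it returns a new procedure as its result, which we can in turn name and apply like any other.
::: scheme
;; Recap using procedures from this section:
(define damped-square (average-damp square)) ; returned procedure, now named
(damped-square 10)                           ; applied like any other procedure
*55*
:::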
> **[]{#Exercise 1.40 label="Exercise 1.40"}Exercise 1.40:** Define a
> procedure `cubic` that can be used together with the `newtons/method`
> procedure in expressions of the form
>
> ::: scheme
> (newtons-method (cubic a b c) 1)
> :::
>
> to approximate zeros of the cubic $x^3 + ax^2 + bx + c$.
> **[]{#Exercise 1.41 label="Exercise 1.41"}Exercise 1.41:** Define a
> procedure `double` that takes a procedure of one argument as argument
> and returns a procedure that applies the original procedure twice. For
> example, if `inc` is a procedure that adds 1 to its argument, then
> `(double inc)` should be a procedure that adds 2. What value is
> returned by
>
> ::: scheme
> (((double (double double)) inc) 5)
> :::
> **[]{#Exercise 1.42 label="Exercise 1.42"}Exercise 1.42:** Let $f$ and
> $g$ be two one-argument functions. The *composition* $f$ after $g$ is
> defined to be the function $x \mapsto f(g(x))$. Define a procedure
> `compose` that implements composition. For example, if `inc` is a
> procedure that adds 1 to its argument,
>
> ::: scheme
> ((compose square inc) 6) *49*
> :::
> **[]{#Exercise 1.43 label="Exercise 1.43"}Exercise 1.43:** If $f$ is a
> numerical function and $n$ is a positive integer, then we can form the
> $n^{\mathrm{th}}$ repeated application of $f$, which is defined to be
> the function whose value at $x$ is $f(f(\dots (f(x))\dots ))$. For
> example, if $f$ is the function $x \mapsto x + 1$, then the
> $n^{\mathrm{th}}$ repeated application of $f$ is the function
> $x \mapsto x + n$. If $f$ is the operation of squaring a number, then
> the $n^{\mathrm{th}}$ repeated application of $f$ is the function that
> raises its argument to the $2^n$-th power. Write a procedure that
> takes as inputs a procedure that computes $f$ and a positive integer
> $n$ and returns the procedure that computes the $n^{\mathrm{th}}$
> repeated application of $f$. Your procedure should be able to be used
> as follows:
>
> ::: scheme
> ((repeated square 2) 5) *625*
> :::
>
> Hint: You may find it convenient to use `compose` from [Exercise
> 1.42](#Exercise 1.42).
> **[]{#Exercise 1.44 label="Exercise 1.44"}Exercise 1.44:** The idea of
> *smoothing* a function is an important concept in signal processing.
> If $f$ is a function and $dx$ is some small number, then the smoothed
> version of $f$ is the function whose value at a point $x$ is the
> average of $f(x - dx)$, $f(x)$, and $f(x + dx)$. Write a procedure
> `smooth` that takes as input a procedure that computes $f$ and returns
> a procedure that computes the smoothed $f$. It is sometimes valuable
> to repeatedly smooth a function (that is, smooth the smoothed
> function, and so on) to obtain the *n-fold smoothed function*. Show
> how to generate the *n*-fold smoothed function of any given function
> using `smooth` and `repeated` from [Exercise 1.43](#Exercise 1.43).
> **[]{#Exercise 1.45 label="Exercise 1.45"}Exercise 1.45:** We saw in
> [Section 1.3.3](#Section 1.3.3) that attempting to compute square
> roots by naively finding a fixed point of $y \mapsto x / y$ does not
> converge, and that this can be fixed by average damping. The same
> method works for finding cube roots as fixed points of the
> average-damped $y \mapsto x / y^2$. Unfortunately, the process does
> not work for fourth roots---a single average damp is not enough to
> make a fixed-point search for $y \mapsto x / y^3$ converge. On the
> other hand, if we average damp twice (i.e., use the average damp of
> the average damp of $y \mapsto x / y^3$) the fixed-point search does
> converge. Do some experiments to determine how many average damps are
> required to compute $n^{\mathrm{th}}$ roots as a fixed-point search
> based upon repeated average damping of $y \mapsto x / y^{n-1}$. Use
> this to implement a simple procedure for computing $n^{\mathrm{th}}$
> roots using `fixed/point`, `average/damp`, and the `repeated`
> procedure of [Exercise 1.43](#Exercise 1.43). Assume that any
> arithmetic operations you need are available as primitives.
> **[]{#Exercise 1.46 label="Exercise 1.46"}Exercise 1.46:** Several of
> the numerical methods described in this chapter are instances of an
> extremely general computational strategy known as *iterative
> improvement*. Iterative improvement says that, to compute something,
> we start with an initial guess for the answer, test if the guess is
> good enough, and otherwise improve the guess and continue the process
> using the improved guess as the new guess. Write a procedure
> `iterative/improve` that takes two procedures as arguments: a method
> for telling whether a guess is good enough and a method for improving
> a guess. `iterative/improve` should return as its value a procedure
> that takes a guess as argument and keeps improving the guess until it
> is good enough. Rewrite the `sqrt` procedure of [Section
> 1.1.7](#Section 1.1.7) and the `fixed/point` procedure of [Section
> 1.3.3](#Section 1.3.3) in terms of `iterative/improve`.
# Building Abstractions with Data {#Chapter 2}
> We now come to the decisive step of mathematical abstraction: we
> forget about what the symbols stand for. $\dots$\[The mathematician\]
> need not be idle; there are many operations which he may carry out
> with these symbols, without ever having to look at the things they
> stand for.
>
> ---Hermann Weyl, *The Mathematical Way of Thinking*
We concentrated in [Chapter 1](#Chapter 1) on
computational processes and on the role of procedures in program design.
We saw how to use primitive data (numbers) and primitive operations
(arithmetic operations), how to combine procedures to form compound
procedures through composition, conditionals, and the use of parameters,
and how to abstract procedures by using `define`. We saw that a
procedure can be regarded as a pattern for the local evolution of a
process, and we classified, reasoned about, and performed simple
algorithmic analyses of some common patterns for processes as embodied
in procedures. We also saw that higher-order procedures enhance the
power of our language by enabling us to manipulate, and thereby to
reason in terms of, general methods of computation. This is much of the
essence of programming.
In this chapter we are going to look at more complex data. All the
procedures in chapter 1 operate on simple numerical data, and simple
data are not sufficient for many of the problems we wish to address
using computation. Programs are typically designed to model complex
phenomena, and more often than not one must construct computational
objects that have several parts in order to model real-world phenomena
that have several aspects. Thus, whereas our focus in chapter 1 was on
building abstractions by combining procedures to form compound
procedures, we turn in this chapter to another key aspect of any
programming language: the means it provides for building abstractions by
combining data objects to form *compound data*.
Why do we want compound data in a programming language? For the same
reasons that we want compound procedures: to elevate the conceptual
level at which we can design our programs, to increase the modularity of
our designs, and to enhance the expressive power of our language. Just
as the ability to define procedures enables us to deal with processes at
a higher conceptual level than that of the primitive operations of the
language, the ability to construct compound data objects enables us to
deal with data at a higher conceptual level than that of the primitive
data objects of the language.
Consider the task of designing a system to perform arithmetic with
rational numbers. We could imagine an operation `add/rat` that takes two
rational numbers and produces their sum. In terms of simple data, a
rational number can be thought of as two integers: a numerator and a
denominator. Thus, we could design a program in which each rational
number would be represented by two integers (a numerator and a
denominator) and where `add/rat` would be implemented by two procedures
(one producing the numerator of the sum and one producing the
denominator). But this would be awkward, because we would then need to
explicitly keep track of which numerators corresponded to which
denominators. In a system intended to perform many operations on many
rational numbers, such bookkeeping details would clutter the programs
substantially, to say nothing of what they would do to our minds. It
would be much better if we could "glue together" a numerator and
denominator to form a pair---a *compound data object*---that our
programs could manipulate in a way that would be consistent with
regarding a rational number as a single conceptual unit.
The use of compound data also enables us to increase the modularity of
our programs. If we can manipulate rational numbers directly as objects
in their own right, then we can separate the part of our program that
deals with rational numbers per se from the details of how rational
numbers may be represented as pairs of integers. The general technique
of isolating the parts of a program that deal with how data objects are
represented from the parts of a program that deal with how data objects
are used is a powerful design methodology called *data abstraction*. We
will see how data abstraction makes programs much easier to design,
maintain, and modify.
The use of compound data leads to a real increase in the expressive
power of our programming language. Consider the idea of forming a
"linear combination" $ax + by$. We might like to write a procedure that
would accept $a$, $b$, $x$, and $y$ as arguments and return the value of
$ax + by$. This presents no difficulty if the arguments are to be
numbers, because we can readily define the procedure
::: scheme
(define (linear-combination a b x y)
  (+ (* a x) (* b y)))
:::
But suppose we are not concerned only with numbers. Suppose we would
like to express, in procedural terms, the idea that one can form linear
combinations whenever addition and multiplication are defined---for
rational numbers, complex numbers, polynomials, or whatever. We could
express this as a procedure of the form
::: scheme
(define (linear-combination a b x y) (add (mul a x) (mul b y)))
:::
where `add` and `mul` are not the primitive procedures `+` and `*` but
rather more complex things that will perform the appropriate operations
for whatever kinds of data we pass in as the arguments `a`, `b`, `x`,
and `y`. The key point is that the only thing `linear/combination`
should need to know about `a`, `b`, `x`, and `y` is that the procedures
`add` and `mul` will perform the appropriate manipulations. From the
perspective of the procedure `linear/combination`, it is irrelevant what
`a`, `b`, `x`, and `y` are and even more irrelevant how they might
happen to be represented in terms of more primitive data. This same
example shows why it is important that our programming language provide
the ability to manipulate compound objects directly: Without this, there
is no way for a procedure such as `linear/combination` to pass its
arguments along to `add` and `mul` without having to know their detailed
structure.[^67]
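As a preview of our own (a sketch only, borrowing the rational-number procedures `add/rat`, `mul/rat`, `make/rat`, and `print/rat` defined later in [Section 2.1.1](#Section 2.1.1)), one way `linear/combination` could be exercised is to bind `add` and `mul` to the rational-number operations. A genuinely generic `add` and `mul` must dispatch on the type of their arguments, which is taken up later in this chapter.
::: scheme
;; Sketch: suppose add and mul are the rational-number operations.
(define add add-rat)
(define mul mul-rat)

;; (1/2)(1/3) + (1/4)(1/5) = 1/6 + 1/20 = 13/60
(print-rat
 (linear-combination (make-rat 1 2) (make-rat 1 4)
                     (make-rat 1 3) (make-rat 1 5)))
;; prints 13/60 with the gcd-reducing make-rat (26/120 with the unreduced one)
:::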
We begin this chapter by implementing the rational-number arithmetic
system mentioned above. This will form the background for our discussion
of compound data and data abstraction. As with compound procedures, the
main issue to be addressed is that of abstraction as a technique for
coping with complexity, and we will see how data abstraction enables us
to erect suitable *abstraction barriers* between different parts of a
program.
We will see that the key to forming compound data is that a programming
language should provide some kind of "glue" so that data objects can be
combined to form more complex data objects. There are many possible
kinds of glue. Indeed, we will discover how to form compound data using
no special "data" operations at all, only procedures. This will further
blur the distinction between "procedure" and "data," which was already
becoming tenuous toward the end of chapter 1. We will also explore some
conventional techniques for representing sequences and trees. One key
idea in dealing with compound data is the notion of *closure*---that the
glue we use for combining data objects should allow us to combine not
only primitive data objects, but compound data objects as well. Another
key idea is that compound data objects can serve as *conventional
interfaces* for combining program modules in mix-and-match ways. We
illustrate some of these ideas by presenting a simple graphics language
that exploits closure.
We will then augment the representational power of our language by
introducing *symbolic expressions*---data whose elementary parts can be
arbitrary symbols rather than only numbers. We explore various
alternatives for representing sets of objects. We will find that, just
as a given numerical function can be computed by many different
computational processes, there are many ways in which a given data
structure can be represented in terms of simpler objects, and the choice
of representation can have significant impact on the time and space
requirements of processes that manipulate the data. We will investigate
these ideas in the context of symbolic differentiation, the
representation of sets, and the encoding of information.
Next we will take up the problem of working with data that may be
represented differently by different parts of a program. This leads to
the need to implement *generic operations*, which must handle many
different types of data. Maintaining modularity in the presence of
generic operations requires more powerful abstraction barriers than can
be erected with simple data abstraction alone. In particular, we
introduce *data-directed programming* as a technique that allows
individual data representations to be designed in isolation and then
combined *additively* (i.e., without modification). To illustrate the
power of this approach to system design, we close the chapter by
applying what we have learned to the implementation of a package for
performing symbolic arithmetic on polynomials, in which the coefficients
of the polynomials can be integers, rational numbers, complex numbers,
and even other polynomials.
## Introduction to Data Abstraction {#Section 2.1}
In [Section 1.1.8](#Section 1.1.8), we noted that a procedure used as an
element in creating a more complex procedure could be regarded not only
as a collection of particular operations but also as a procedural
abstraction. That is, the details of how the procedure was implemented
could be suppressed, and the particular procedure itself could be
replaced by any other procedure with the same overall behavior. In other
words, we could make an abstraction that would separate the way the
procedure would be used from the details of how the procedure would be
implemented in terms of more primitive procedures. The analogous notion
for compound data is called *data abstraction*. Data abstraction is a
methodology that enables us to isolate how a compound data object is
used from the details of how it is constructed from more primitive data
objects.
The basic idea of data abstraction is to structure the programs that are
to use compound data objects so that they operate on "abstract data."
That is, our programs should use data in such a way as to make no
assumptions about the data that are not strictly necessary for
performing the task at hand. At the same time, a "concrete" data
representation is defined independent of the programs that use the data.
The interface between these two parts of our system will be a set of
procedures, called *selectors* and *constructors*, that implement the
abstract data in terms of the concrete representation. To illustrate
this technique, we will consider how to design a set of procedures for
manipulating rational numbers.
### Example: Arithmetic Operations for Rational Numbers {#Section 2.1.1}
Suppose we want to do arithmetic with rational numbers. We want to be
able to add, subtract, multiply, and divide them and to test whether two
rational numbers are equal.
Let us begin by assuming that we already have a way of constructing a
rational number from a numerator and a denominator. We also assume that,
given a rational number, we have a way of extracting (or selecting) its
numerator and its denominator. Let us further assume that the
constructor and selectors are available as procedures:
- $\hbox{\tt(make-rat}\;\langle{n}\rangle\;\langle{d}\kern0.06em\rangle\hbox{\tt)}$
returns the rational number whose numerator is the integer
$\langle{n}\rangle$ and whose denominator is the integer
$\langle{d}\kern0.06em\rangle$.
- $\hbox{\tt(numer}\;\;\langle{x}\rangle\hbox{\tt)}$ returns the
numerator of the rational number $\langle{x}\rangle$.
- $\hbox{\tt(denom}\;\;\langle{x}\rangle\hbox{\tt)}$ returns the
denominator of the rational number $\langle{x}\rangle$.
We are using here a powerful strategy of synthesis: *wishful thinking*.
We haven't yet said how a rational number is represented, or how the
procedures `numer`, `denom`, and `make/rat` should be implemented. Even
so, if we did have these three procedures, we could then add, subtract,
multiply, divide, and test equality by using the following relations:
$$\begin{aligned}
{n_1 \over d_1} + {n_2 \over d_2} &= {n_1 d_2 + n_2 d_1 \over d_1 d_2}, \\
{n_1 \over d_1} - {n_2 \over d_2} &= {n_1 d_2 - n_2 d_1 \over d_1 d_2}, \\
{n_1 \over d_1} \cdot {n_2 \over d_2} &= {n_1 n_2 \over d_1 d_2}, \\
{{n_1 / d_1} \over {n_2 / d_2}} &= {n_1 d_2 \over d_1 n_2}, \\
{n_1 \over d_1} &= {n_2 \over d_2} \quad
{\rm\ if\ and\ only\ if\quad}
n_1 d_2 = n_2 d_1.
\end{aligned}$$ We can express these rules as procedures:
::: scheme
(define (add-rat x y)
  (make-rat (+ (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (sub-rat x y)
  (make-rat (- (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (mul-rat x y)
  (make-rat (* (numer x) (numer y))
            (* (denom x) (denom y))))

(define (div-rat x y)
  (make-rat (* (numer x) (denom y))
            (* (denom x) (numer y))))

(define (equal-rat? x y)
  (= (* (numer x) (denom y))
     (* (numer y) (denom x))))
:::
Now we have the operations on rational numbers defined in terms of the
selector and constructor procedures `numer`, `denom`, and `make/rat`.
But we haven't yet defined these. What we need is some way to glue
together a numerator and a denominator to form a rational number.
#### Pairs {#pairs .unnumbered}
To enable us to implement the concrete level of our data abstraction,
our language provides a compound structure called a *pair*, which can be
constructed with the primitive procedure `cons`. This procedure takes
two arguments and returns a compound data object that contains the two
arguments as parts. Given a pair, we can extract the parts using the
primitive procedures `car` and `cdr`.[^68] Thus, we can use `cons`,
`car`, and `cdr` as follows:
::: scheme
(define x (cons 1 2))

(car x)
*1*

(cdr x)
*2*
:::
Notice that a pair is a data object that can be given a name and
manipulated, just like a primitive data object. Moreover, `cons` can be
used to form pairs whose elements are pairs, and so on:
::: scheme
(define x (cons 1 2))
(define y (cons 3 4))
(define z (cons x y))

(car (car z))
*1*

(car (cdr z))
*3*
:::
In [Section 2.2](#Section 2.2) we will see how this ability to combine
pairs means that pairs can be used as general-purpose building blocks to
create all sorts of complex data structures. The single compound-data
primitive *pair*, implemented by the procedures `cons`, `car`, and
`cdr`, is the only glue we need. Data objects constructed from pairs are
called *list-structured* data.
#### Representing rational numbers {#representing-rational-numbers .unnumbered}
Pairs offer a natural way to complete the rational-number system. Simply
represent a rational number as a pair of two integers: a numerator and a
denominator. Then `make/rat`, `numer`, and `denom` are readily
implemented as follows:[^69]
::: scheme
(define (make-rat n d) (cons n d))
(define (numer x) (car x))
(define (denom x) (cdr x))
:::
Also, in order to display the results of our computations, we can print
rational numbers by printing the numerator, a slash, and the
denominator:[^70]
::: scheme
(define (print-rat x)
  (newline)
  (display (numer x))
  (display "/")
  (display (denom x)))
:::
Now we can try our rational-number procedures:
::: scheme
(define one-half (make-rat 1 2))

(print-rat one-half)
*1/2*

(define one-third (make-rat 1 3))

(print-rat (add-rat one-half one-third))
*5/6*

(print-rat (mul-rat one-half one-third))
*1/6*

(print-rat (add-rat one-third one-third))
*6/9*
:::
As the final example shows, our rational-number implementation does not
reduce rational numbers to lowest terms. We can remedy this by changing
`make/rat`. If we have a `gcd` procedure like the one in [Section
1.2.5](#Section 1.2.5) that produces the greatest common divisor of two
integers, we can use `gcd` to reduce the numerator and the denominator
to lowest terms before constructing the pair:
::: scheme
(define (make-rat n d) (let ((g (gcd n d))) (cons (/ n g) (/ d g))))
:::
Now we have
::: scheme
(print-rat (add-rat one-third one-third)) *2/3*
:::
as desired. This modification was accomplished by changing the
constructor `make/rat` without changing any of the procedures (such as
`add/rat` and `mul/rat`) that implement the actual operations.
> **[]{#Exercise 2.1 label="Exercise 2.1"}Exercise 2.1:** Define a
> better version of `make/rat` that handles both positive and negative
> arguments. `make/rat` should normalize the sign so that if the
> rational number is positive, both the numerator and denominator are
> positive, and if the rational number is negative, only the numerator
> is negative.
### Abstraction Barriers {#Section 2.1.2}
Before continuing with more examples of compound data and data
abstraction, let us consider some of the issues raised by the
rational-number example. We defined the rational-number operations in
terms of a constructor `make/rat` and selectors `numer` and `denom`. In
general, the underlying idea of data abstraction is to identify for each
type of data object a basic set of operations in terms of which all
manipulations of data objects of that type will be expressed, and then
to use only those operations in manipulating the data.
[]{#Figure 2.1 label="Figure 2.1"}
![image](fig/chap2/Fig2.1c.pdf){width="91mm"}
> **Figure 2.1:** Data-abstraction barriers in the rational-number
> package.
We can envision the structure of the rational-number system as shown in
[Figure 2.1](#Figure 2.1). The horizontal lines represent *abstraction
barriers* that isolate different "levels" of the system. At each level,
the barrier separates the programs (above) that use the data abstraction
from the programs (below) that implement the data abstraction. Programs
that use rational numbers manipulate them solely in terms of the
procedures supplied "for public use" by the rational-number package:
`add/rat`, `sub/rat`, `mul/rat`, `div/rat`, and `equal/rat?`. These, in
turn, are implemented solely in terms of the constructor and selectors
`make/rat`, `numer`, and `denom`, which themselves are implemented in
terms of pairs. The details of how pairs are implemented are irrelevant
to the rest of the rational-number package so long as pairs can be
manipulated by the use of `cons`, `car`, and `cdr`. In effect,
procedures at each level are the interfaces that define the abstraction
barriers and connect the different levels.
This simple idea has many advantages. One advantage is that it makes
programs much easier to maintain and to modify. Any complex data
structure can be represented in a variety of ways with the primitive
data structures provided by a programming language. Of course, the
choice of representation influences the programs that operate on it;
thus, if the representation were to be changed at some later time, all
such programs might have to be modified accordingly. This task could be
time-consuming and expensive in the case of large programs unless the
dependence on the representation were to be confined by design to a very
few program modules.
For example, an alternate way to address the problem of reducing
rational numbers to lowest terms is to perform the reduction whenever we
access the parts of a rational number, rather than when we construct it.
This leads to different constructor and selector procedures:
::: scheme
(define (make-rat n d) (cons n d))

(define (numer x)
  (let ((g (gcd (car x) (cdr x))))
    (/ (car x) g)))

(define (denom x)
  (let ((g (gcd (car x) (cdr x))))
    (/ (cdr x) g)))
:::
The difference between this implementation and the previous one lies in
when we compute the `gcd`. If in our typical use of rational numbers we
access the numerators and denominators of the same rational numbers many
times, it would be preferable to compute the `gcd` when the rational
numbers are constructed. If not, we may be better off waiting until
access time to compute the `gcd`. In any case, when we change from one
representation to the other, the procedures `add/rat`, `sub/rat`, and so
on do not have to be modified at all.
Constraining the dependence on the representation to a few interface
procedures helps us design programs as well as modify them, because it
allows us to maintain the flexibility to consider alternate
implementations. To continue with our simple example, suppose we are
designing a rational-number package and we can't decide initially
whether to perform the `gcd` at construction time or at selection time.
The data-abstraction methodology gives us a way to defer that decision
without losing the ability to make progress on the rest of the system.
> **[]{#Exercise 2.2 label="Exercise 2.2"}Exercise 2.2:** Consider the
> problem of representing line segments in a plane. Each segment is
> represented as a pair of points: a starting point and an ending point.
> Define a constructor `make/segment` and selectors `start/segment` and
> `end/segment` that define the representation of segments in terms of
> points. Furthermore, a point can be represented as a pair of numbers:
> the $x$ coordinate and the $y$ coordinate. Accordingly, specify a
> constructor `make/point` and selectors `x/point` and `y/point` that
> define this representation. Finally, using your selectors and
> constructors, define a procedure `midpoint/segment` that takes a line
> segment as argument and returns its midpoint (the point whose
> coordinates are the average of the coordinates of the endpoints). To
> try your procedures, you'll need a way to print points:
>
> ::: scheme
> (define (print-point p)
>   (newline)
>   (display "(")
>   (display (x-point p))
>   (display ",")
>   (display (y-point p))
>   (display ")"))
> :::
> **[]{#Exercise 2.3 label="Exercise 2.3"}Exercise 2.3:** Implement a
> representation for rectangles in a plane. (Hint: You may want to make
> use of [Exercise 2.2](#Exercise 2.2).) In terms of your constructors
> and selectors, create procedures that compute the perimeter and the
> area of a given rectangle. Now implement a different representation
> for rectangles. Can you design your system with suitable abstraction
> barriers, so that the same perimeter and area procedures will work
> using either representation?
### What Is Meant by Data? {#Section 2.1.3}
We began the rational-number implementation in [Section
2.1.1](#Section 2.1.1) by implementing the rational-number operations
`add/rat`, `sub/rat`, and so on in terms of three unspecified
procedures: `make/rat`, `numer`, and `denom`. At that point, we could
think of the operations as being defined in terms of data
objects---numerators, denominators, and rational numbers---whose
behavior was specified by the latter three procedures.
But exactly what is meant by *data*? It is not enough to say "whatever
is implemented by the given selectors and constructors." Clearly, not
every arbitrary set of three procedures can serve as an appropriate
basis for the rational-number implementation. We need to guarantee that,
if we construct a rational number `x` from a pair of integers `n` and
`d`, then extracting the `numer` and the `denom` of `x` and dividing
them should yield the same result as dividing `n` by `d`. In other
words, `make/rat`, `numer`, and `denom` must satisfy the condition that,
for any integer `n` and any non-zero integer `d`, if `x` is
`(make/rat n d)`, then
$${\hbox{\tt(numer x)} \over \hbox{\tt(denom x)}} = {{\tt n} \over {\tt d}}\,.$$
In fact, this is the only condition `make/rat`, `numer`, and `denom`
must fulfill in order to form a suitable basis for a rational-number
representation. In general, we can think of data as defined by some
collection of selectors and constructors, together with specified
conditions that these procedures must fulfill in order to be a valid
representation.[^71]
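For instance, with the gcd-reducing `make/rat` of [Section 2.1.1](#Section 2.1.1), the condition can be checked on a particular pair of integers (an illustrative evaluation of our own):
::: scheme
(define x (make-rat 6 9)) ; the gcd 3 is divided out, giving the pair (2 . 3)

(numer x)
*2*

(denom x)
*3*

;; 2/3 = 6/9, so (numer x) / (denom x) = n / d, as required.
:::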
This point of view can serve to define not only "high-level" data
objects, such as rational numbers, but lower-level objects as well.
Consider the notion of a pair, which we used in order to define our
rational numbers. We never actually said what a pair was, only that the
language supplied procedures `cons`, `car`, and `cdr` for operating on
pairs. But the only thing we need to know about these three operations
is that if we glue two objects together using `cons` we can retrieve the
objects using `car` and `cdr`. That is, the operations satisfy the
condition that, for any objects `x` and `y`, if `z` is `(cons x y)` then
`(car z)` is `x` and `(cdr z)` is `y`. Indeed, we mentioned that these
three procedures are included as primitives in our language. However,
any triple of procedures that satisfies the above condition can be used
as the basis for implementing pairs. This point is illustrated
strikingly by the fact that we could implement `cons`, `car`, and `cdr`
without using any data structures at all but only using procedures. Here
are the definitions:
::: scheme
(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1: CONS" m))))
  dispatch)

(define (car z) (z 0))
(define (cdr z) (z 1))
:::
This use of procedures corresponds to nothing like our intuitive notion
of what data should be. Nevertheless, all we need to do to show that
this is a valid way to represent pairs is to verify that these
procedures satisfy the condition given above.
The subtle point to notice is that the value returned by `(cons x y)` is
a procedure---namely the internally defined procedure `dispatch`, which
takes one argument and returns either `x` or `y` depending on whether
the argument is 0 or 1. Correspondingly, `(car z)` is defined to apply
`z` to 0. Hence, if `z` is the procedure formed by `(cons x y)`, then
`z` applied to 0 will yield `x`. Thus, we have shown that
`(car (cons x y))` yields `x`, as desired. Similarly, `(cdr (cons x y))`
applies the procedure returned by `(cons x y)` to 1, which returns `y`.
Therefore, this procedural implementation of pairs is a valid
implementation, and if we access pairs using only `cons`, `car`, and
`cdr` we cannot distinguish this implementation from one that uses
"real" data structures.
The point of exhibiting the procedural representation of pairs is not
that our language works this way (Scheme, and Lisp systems in general,
implement pairs directly, for efficiency reasons) but that it could work
this way. The procedural representation, although obscure, is a
perfectly adequate way to represent pairs, since it fulfills the only
conditions that pairs need to fulfill. This example also demonstrates
that the ability to manipulate procedures as objects automatically
provides the ability to represent compound data. This may seem a
curiosity now, but procedural representations of data will play a
central role in our programming repertoire. This style of programming is
often called *message passing*, and we will be using it as a basic tool
in [Chapter 3](#Chapter 3) when we address the issues of modeling and
simulation.
> **[]{#Exercise 2.4 label="Exercise 2.4"}Exercise 2.4:** Here is an
> alternative procedural representation of pairs. For this
> representation, verify that `(car (cons x y))` yields `x` for any
> objects `x` and `y`.
>
> ::: scheme
> (define (cons x y)
>   (lambda (m) (m x y)))
>
> (define (car z)
>   (z (lambda (p q) p)))
> :::
>
> What is the corresponding definition of `cdr`? (Hint: To verify that
> this works, make use of the substitution model of [Section
> 1.1.5](#Section 1.1.5).)
> **[]{#Exercise 2.5 label="Exercise 2.5"}Exercise 2.5:** Show that we
> can represent pairs of nonnegative integers using only numbers and
> arithmetic operations if we represent the pair $a$ and $b$ as the
> integer that is the product $2^a 3^b$. Give the corresponding
> definitions of the procedures `cons`, `car`, and `cdr`.
> **[]{#Exercise 2.6 label="Exercise 2.6"}Exercise 2.6:** In case
> representing pairs as procedures wasn't mind-boggling enough, consider
> that, in a language that can manipulate procedures, we can get by
> without numbers (at least insofar as nonnegative integers are
> concerned) by implementing 0 and the operation of adding 1 as
>
> ::: scheme
> (define zero (lambda (f) (lambda (x) x)))
>
> (define (add-1 n)
>   (lambda (f) (lambda (x) (f ((n f) x)))))
> :::
>
> This representation is known as *Church numerals*, after its inventor,
> Alonzo Church, the logician who invented the λ-calculus.
>
> Define `one` and `two` directly (not in terms of `zero` and `add/1`).
> (Hint: Use substitution to evaluate `(add/1 zero)`). Give a direct
> definition of the addition procedure `+` (not in terms of repeated
> application of `add/1`).
### Extended Exercise: Interval Arithmetic {#Section 2.1.4}
Alyssa P. Hacker is designing a system to help people solve engineering
problems. One feature she wants to provide in her system is the ability
to manipulate inexact quantities (such as measured parameters of
physical devices) with known precision, so that when computations are
done with such approximate quantities the results will be numbers of
known precision.
Electrical engineers will be using Alyssa's system to compute electrical
quantities. It is sometimes necessary for them to compute the value of a
parallel equivalent resistance $R_p$ of two resistors $R_1$, $R_2$ using
the formula
$$R_p = {1 \over 1 / R_1 + 1 / R_2}.$$
Resistance values are usually known only up to some tolerance guaranteed
by the manufacturer of the resistor. For example, if you buy a resistor
labeled "6.8 ohms with 10% tolerance" you can only be sure that the
resistor has a resistance between $6.8 - 0.68 = 6.12$ and
$6.8 + 0.68 = 7.48$ ohms. Thus, if you have a 6.8-ohm 10% resistor in
parallel with a 4.7-ohm 5% resistor, the resistance of the combination
can range from about 2.58 ohms (if the two resistors are at the lower
bounds) to about 2.97 ohms (if the two resistors are at the upper
bounds).
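To make the arithmetic behind those bounds explicit, here is a small numeric check of our own (the helper `parallel` is purely illustrative and is not part of Alyssa's interval system):
::: scheme
;; Plain numeric check of the 2.58 - 2.97 ohm range quoted above.
(define (parallel r1 r2)
  (/ 1 (+ (/ 1 r1) (/ 1 r2))))

(parallel (* 6.8 0.9) (* 4.7 0.95)) ; both at lower bounds: about 2.58 ohms
(parallel (* 6.8 1.1) (* 4.7 1.05)) ; both at upper bounds: about 2.97 ohms
:::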
Alyssa's idea is to implement "interval arithmetic" as a set of
arithmetic operations for combining "intervals" (objects that represent
the range of possible values of an inexact quantity). The result of
adding, subtracting, multiplying, or dividing two intervals is itself an
interval, representing the range of the result.
Alyssa postulates the existence of an abstract object called an
"interval" that has two endpoints: a lower bound and an upper bound. She
also presumes that, given the endpoints of an interval, she can
construct the interval using the data constructor `make/interval`.
Alyssa first writes a procedure for adding two intervals. She reasons
that the minimum value the sum could be is the sum of the two lower
bounds and the maximum value it could be is the sum of the two upper
bounds:
::: scheme
(define (add-interval x y)
  (make-interval (+ (lower-bound x) (lower-bound y))
                 (+ (upper-bound x) (upper-bound y))))
:::
Alyssa also works out the product of two intervals by finding the
minimum and the maximum of the products of the bounds and using them as
the bounds of the resulting interval. (`min` and `max` are primitives
that find the minimum or maximum of any number of arguments.)
::: scheme
(define (mul-interval x y)
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))
:::
To divide two intervals, Alyssa multiplies the first by the reciprocal
of the second. Note that the bounds of the reciprocal interval are the
reciprocal of the upper bound and the reciprocal of the lower bound, in
that order.
::: scheme
(define (div-interval x y)
  (mul-interval x
                (make-interval (/ 1.0 (upper-bound y))
                               (/ 1.0 (lower-bound y)))))
:::
> **[]{#Exercise 2.7 label="Exercise 2.7"}Exercise 2.7:** Alyssa's
> program is incomplete because she has not specified the implementation
> of the interval abstraction. Here is a definition of the interval
> constructor:
>
> ::: scheme
> (define (make-interval a b) (cons a b))
> :::
>
> Define selectors `upper/bound` and `lower/bound` to complete the
> implementation.
> **[]{#Exercise 2.8 label="Exercise 2.8"}Exercise 2.8:** Using
> reasoning analogous to Alyssa's, describe how the difference of two
> intervals may be computed. Define a corresponding subtraction
> procedure, called `sub/interval`.
> **[]{#Exercise 2.9 label="Exercise 2.9"}Exercise 2.9:** The *width* of
> an interval is half of the difference between its upper and lower
> bounds. The width is a measure of the uncertainty of the number
> specified by the interval. For some arithmetic operations the width of
> the result of combining two intervals is a function only of the widths
> of the argument intervals, whereas for others the width of the
> combination is not a function of the widths of the argument intervals.
> Show that the width of the sum (or difference) of two intervals is a
> function only of the widths of the intervals being added (or
> subtracted). Give examples to show that this is not true for
> multiplication or division.
> **[]{#Exercise 2.10 label="Exercise 2.10"}Exercise 2.10:** Ben
> Bitdiddle, an expert systems programmer, looks over Alyssa's shoulder
> and comments that it is not clear what it means to divide by an
> interval that spans zero. Modify Alyssa's code to check for this
> condition and to signal an error if it occurs.
> **[]{#Exercise 2.11 label="Exercise 2.11"}Exercise 2.11:** In passing,
> Ben also cryptically comments: "By testing the signs of the endpoints
> of the intervals, it is possible to break `mul/interval` into nine
> cases, only one of which requires more than two multiplications."
> Rewrite this procedure using Ben's suggestion.
>
> After debugging her program, Alyssa shows it to a potential user, who
> complains that her program solves the wrong problem. He wants a
> program that can deal with numbers represented as a center value and
> an additive tolerance; for example, he wants to work with intervals
> such as $3.5 \pm 0.15$ rather than \[3.35, 3.65\]. Alyssa returns to
> her desk and fixes this problem by supplying an alternate constructor
> and alternate selectors:
>
> ::: scheme
> (define (make-center-width c w)
>   (make-interval (- c w) (+ c w)))
>
> (define (center i)
>   (/ (+ (lower-bound i) (upper-bound i)) 2))
>
> (define (width i)
>   (/ (- (upper-bound i) (lower-bound i)) 2))
> :::
>
> Unfortunately, most of Alyssa's users are engineers. Real engineering
> situations usually involve measurements with only a small uncertainty,
> measured as the ratio of the width of the interval to the midpoint of
> the interval. Engineers usually specify percentage tolerances on the
> parameters of devices, as in the resistor specifications given
> earlier.
> **[]{#Exercise 2.12 label="Exercise 2.12"}Exercise 2.12:** Define a
> constructor `make/center/percent` that takes a center and a percentage
> tolerance and produces the desired interval. You must also define a
> selector `percent` that produces the percentage tolerance for a given
> interval. The `center` selector is the same as the one shown above.
> **[]{#Exercise 2.13 label="Exercise 2.13"}Exercise 2.13:** Show that
> under the assumption of small percentage tolerances there is a simple
> formula for the approximate percentage tolerance of the product of two
> intervals in terms of the tolerances of the factors. You may simplify
> the problem by assuming that all numbers are positive.
>
> After considerable work, Alyssa P. Hacker delivers her finished
> system. Several years later, after she has forgotten all about it, she
> gets a frenzied call from an irate user, Lem E. Tweakit. It seems that
> Lem has noticed that the formula for parallel resistors can be written
> in two algebraically equivalent ways:
>
> $$R_1 R_2 \over R_1 + R_2$$
>
> and
>
> $${1 \over 1 / R_1 + 1 / R_2}.$$
>
> He has written the following two programs, each of which computes the
> parallel-resistors formula differently:
>
> ::: scheme
> (define (par1 r1 r2)
>   (div-interval (mul-interval r1 r2)
>                 (add-interval r1 r2)))
> :::
>
> ::: scheme
> (define (par2 r1 r2)
>   (let ((one (make-interval 1 1)))
>     (div-interval
>      one (add-interval (div-interval one r1)
>                        (div-interval one r2)))))
> :::
>
> Lem complains that Alyssa's program gives different answers for the
> two ways of computing. This is a serious complaint.
> **[]{#Exercise 2.14 label="Exercise 2.14"}Exercise 2.14:** Demonstrate
> that Lem is right. Investigate the behavior of the system on a variety
> of arithmetic expressions. Make some intervals $A$ and $B$, and use
> them in computing the expressions $A / A$ and $A / B$. You will get
> the most insight by using intervals whose width is a small percentage
> of the center value. Examine the results of the computation in
> center-percent form (see [Exercise 2.12](#Exercise 2.12)).
> **[]{#Exercise 2.15 label="Exercise 2.15"}Exercise 2.15:** Eva Lu
> Ator, another user, has also noticed the different intervals computed
> by different but algebraically equivalent expressions. She says that a
> formula to compute with intervals using Alyssa's system will produce
> tighter error bounds if it can be written in such a form that no
> variable that represents an uncertain number is repeated. Thus, she
> says, `par2` is a "better" program for parallel resistances than
> `par1`. Is she right? Why?
> **[]{#Exercise 2.16 label="Exercise 2.16"}Exercise 2.16:** Explain, in
> general, why equivalent algebraic expressions may lead to different
> answers. Can you devise an interval-arithmetic package that does not
> have this shortcoming, or is this task impossible? (Warning: This
> problem is very difficult.)
## Hierarchical Data and the Closure Property {#Section 2.2}
As we have seen, pairs provide a primitive "glue" that we can use to
construct compound data objects. [Figure 2.2](#Figure 2.2) shows a
standard way to visualize a pair---in this case, the pair formed by
`(cons 1 2)`. In this representation, which is called *box-and-pointer
notation*, each object is shown as a *pointer* to a box. The box for a
primitive object contains a representation of the object. For example,
the box for a number contains a numeral. The box for a pair is actually
a double box, the left part containing (a pointer to) the `car` of the
pair and the right part containing the `cdr`.
We have already seen that `cons` can be used to combine not only numbers
but pairs as well. (You made use of this fact, or should have, in doing
[Exercise 2.2](#Exercise 2.2) and [Exercise 2.3](#Exercise 2.3).) As a
consequence, pairs provide a universal building block from which we can
construct all sorts of data structures. [Figure 2.3](#Figure 2.3) shows
two ways to use pairs to combine the numbers 1, 2, 3, and 4.
[]{#Figure 2.2 label="Figure 2.2"}
![image](fig/chap2/Fig2.2c.pdf){width="34mm"}
> **Figure 2.2:** Box-and-pointer representation of `(cons 1 2)`.
[]{#Figure 2.3 label="Figure 2.3"}
![image](fig/chap2/Fig2.3c.pdf){width="96mm"}
> **Figure 2.3:** Two ways to combine 1, 2, 3, and 4 using pairs.
The ability to create pairs whose elements are pairs is the essence of
list structure's importance as a representational tool. We refer to this
ability as the *closure property* of `cons`. In general, an operation
for combining data objects satisfies the closure property if the results
of combining things with that operation can themselves be combined using
the same operation.[^72] Closure is the key to power in any means of
combination because it permits us to create *hierarchical*
structures---structures made up of parts, which themselves are made up
of parts, and so on.
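For instance, because the result of a `cons` can itself be fed back into `cons`, pairs nest to any depth. A small illustration (the name `pair-of-pairs` is chosen here only for this example):
::: scheme
(define pair-of-pairs (cons (cons 1 2) (cons 3 4)))
(car (car pair-of-pairs)) *1*
(cdr (cdr pair-of-pairs)) *4*
:::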
From the outset of [Chapter 1](#Chapter 1), we've made essential use of
closure in dealing with procedures, because all but the very simplest
programs rely on the fact that the elements of a combination can
themselves be combinations. In this section, we take up the consequences
of closure for compound data. We describe some conventional techniques
for using pairs to represent sequences and trees, and we exhibit a
graphics language that illustrates closure in a vivid way.[^73]
### Representing Sequences {#Section 2.2.1}
One of the useful structures we can build with pairs is a
*sequence*---an ordered collection of data objects. There are, of
course, many ways to represent sequences in terms of pairs. One
particularly straightforward representation is illustrated in [Figure
2.4](#Figure 2.4), where the sequence 1, 2, 3, 4 is represented as a
chain of pairs. The `car` of each pair is the corresponding item in the
chain, and the `cdr` of the pair is the next pair in the chain. The
`cdr` of the final pair signals the end of the sequence by pointing to a
distinguished value that is not a pair, represented in box-and-pointer
diagrams as a diagonal line and in programs as the value of the variable
`nil`. The entire sequence is constructed by nested `cons` operations:
::: scheme
(cons 1 (cons 2 (cons 3 (cons 4 nil))))
:::
[]{#Figure 2.4 label="Figure 2.4"}
![image](fig/chap2/Fig2.4c.pdf){width="76mm"}
> **Figure 2.4:** The sequence 1, 2, 3, 4 represented as a chain of
> pairs.
Such a sequence of pairs, formed by nested `cons`es, is called a *list*,
and Scheme provides a primitive called `list` to help in constructing
lists.[^74] The above sequence could be produced by `(list 1 2 3 4)`. In
general,
::: scheme
(list
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
:::
is equivalent to
::: scheme
(cons
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(cons
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
(cons $\dots$ (cons
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
nil) $\dots$ )))
:::
Lisp systems conventionally print lists by printing the sequence of
elements, enclosed in parentheses. Thus, the data object in [Figure
2.4](#Figure 2.4) is printed as `(1 2 3 4)`:
::: scheme
(define one-through-four (list 1 2 3 4))
one-through-four *(1 2 3 4)*
:::
Be careful not to confuse the expression `(list 1 2 3 4)` with the list
`(1 2 3 4)`, which is the result obtained when the expression is
evaluated. Attempting to evaluate the expression `(1 2 3 4)` will signal
an error when the interpreter tries to apply the procedure `1` to
arguments `2`, `3`, and `4`.
We can think of `car` as selecting the first item in the list, and of
`cdr` as selecting the sublist consisting of all but the first item.
Nested applications of `car` and `cdr` can be used to extract the
second, third, and subsequent items in the list.[^75] The constructor
`cons` makes a list like the original one, but with an additional item
at the beginning.
::: scheme
(car one-through-four) *1*
(cdr one-through-four) *(2 3 4)*
(car (cdr one-through-four)) *2*
(cons 10 one-through-four) *(10 1 2 3 4)*
(cons 5 one-through-four) *(5 1 2 3 4)*
:::
The value of `nil`, used to terminate the chain of pairs, can be thought
of as a sequence of no elements, the *empty list*. The word *nil* is a
contraction of the Latin word *nihil*, which means "nothing."[^76]
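For example, `cdr`ing all the way down `one-through-four` ends at this empty list, which most interpreters print as `()`:
::: scheme
(cdr (cdr (cdr (cdr one-through-four)))) *()*
:::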
#### List operations {#list-operations .unnumbered}
The use of pairs to represent sequences of elements as lists is
accompanied by conventional programming techniques for manipulating
lists by successively "`cdr`ing down" the lists. For example, the
procedure `list/ref` takes as arguments a list and a number $n$ and
returns the $n^{\mathrm{th}}$ item of the list. It is customary to
number the elements of the list beginning with 0. The method for
computing `list/ref` is the following:
- For $n = 0$, `list/ref` should return the `car` of the list.
- Otherwise, `list/ref` should return the $(n - 1)$-st item of the
`cdr` of the list.
::: scheme
(define (list-ref items n)
  (if (= n 0)
      (car items)
      (list-ref (cdr items) (- n 1))))
(define squares (list 1 4 9 16 25))
(list-ref squares 3) *16*
:::
Often we `cdr` down the whole list. To aid in this, Scheme includes a
primitive predicate `null?`, which tests whether its argument is the
empty list. The procedure `length`, which returns the number of items in
a list, illustrates this typical pattern of use:
::: scheme
(define (length items)
  (if (null? items) 0 (+ 1 (length (cdr items)))))
(define odds (list 1 3 5 7))
(length odds) *4*
:::
The `length` procedure implements a simple recursive plan. The reduction
step is:
- The `length` of any list is 1 plus the `length` of the `cdr` of the
list.
This is applied successively until we reach the base case:
- The `length` of the empty list is 0.
We could also compute `length` in an iterative style:
::: scheme
(define (length items)
  (define (length-iter a count)
    (if (null? a) count (length-iter (cdr a) (+ 1 count))))
  (length-iter items 0))
:::
Another conventional programming technique is to "`cons` up" an answer
list while `cdr`ing down a list, as in the procedure `append`, which
takes two lists as arguments and combines their elements to make a new
list:
::: scheme
(append squares odds) *(1 4 9 16 25 1 3 5 7)*
(append odds squares) *(1 3 5 7 1 4 9 16 25)*
:::
`append` is also implemented using a recursive plan. To `append` lists
`list1` and `list2`, do the following:
- If `list1` is the empty list, then the result is just `list2`.
- Otherwise, `append` the `cdr` of `list1` and `list2`, and `cons` the
`car` of `list1` onto the result:
::: scheme
(define (append list1 list2)
  (if (null? list1)
      list2
      (cons (car list1) (append (cdr list1) list2))))
:::
> **[]{#Exercise 2.17 label="Exercise 2.17"}Exercise 2.17:** Define a
> procedure `last/pair` that returns the list that contains only the
> last element of a given (nonempty) list:
>
> ::: scheme
> (last-pair (list 23 72 149 34)) *(34)*
> :::
> **[]{#Exercise 2.18 label="Exercise 2.18"}Exercise 2.18:** Define a
> procedure `reverse` that takes a list as argument and returns a list
> of the same elements in reverse order:
>
> ::: scheme
> (reverse (list 1 4 9 16 25)) *(25 16 9 4 1)*
> :::
> **[]{#Exercise 2.19 label="Exercise 2.19"}Exercise 2.19:** Consider
> the change-counting program of [Section 1.2.2](#Section 1.2.2). It
> would be nice to be able to easily change the currency used by the
> program, so that we could compute the number of ways to change a
> British pound, for example. As the program is written, the knowledge
> of the currency is distributed partly into the procedure
> `first/denomination` and partly into the procedure `count/change`
> (which knows that there are five kinds of U.S. coins). It would be
> nicer to be able to supply a list of coins to be used for making
> change.
>
> We want to rewrite the procedure `cc` so that its second argument is a
> list of the values of the coins to use rather than an integer
> specifying which coins to use. We could then have lists that defined
> each kind of currency:
>
> ::: scheme
> (define us-coins (list 50 25 10 5 1))
> (define uk-coins (list 100 50 20 10 5 2 1 0.5))
> :::
>
> We could then call `cc` as follows:
>
> ::: scheme
> (cc 100 us-coins) *292*
> :::
>
> To do this will require changing the program `cc` somewhat. It will
> still have the same form, but it will access its second argument
> differently, as follows:
>
> ::: scheme
> (define (cc amount coin-values)
>   (cond ((= amount 0) 1)
>         ((or (\< amount 0) (no-more? coin-values)) 0)
>         (else (+ (cc amount (except-first-denomination coin-values))
>                  (cc (- amount (first-denomination coin-values))
>                      coin-values)))))
> :::
>
> Define the procedures `first/denomination`,
> `except/first/denomination`, and `no/more?` in terms of primitive
> operations on list structures. Does the order of the list
> `coin/values` affect the answer produced by `cc`? Why or why not?
> **[]{#Exercise 2.20 label="Exercise 2.20"}Exercise 2.20:** The
> procedures `+`, `*`, and `list` take arbitrary numbers of arguments.
> One way to define such procedures is to use `define` with *dotted-tail
> notation*. In a procedure definition, a parameter list that has a dot
> before the last parameter name indicates that, when the procedure is
> called, the initial parameters (if any) will have as values the
> initial arguments, as usual, but the final parameter's value will be a
> *list* of any remaining arguments. For instance, given the definition
>
> ::: scheme
> (define (f x y . z)
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> the procedure `f` can be called with two or more arguments. If we
> evaluate
>
> ::: scheme
> (f 1 2 3 4 5 6)
> :::
>
> then in the body of `f`, `x` will be 1, `y` will be 2, and `z` will be
> the list `(3 4 5 6)`. Given the definition
>
> ::: scheme
> (define (g . w)
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> the procedure `g` can be called with zero or more arguments. If we
> evaluate
>
> ::: scheme
> (g 1 2 3 4 5 6)
> :::
>
> then in the body of `g`, `w` will be the list `(1 2 3 4 5 6)`.[^77]
>
> Use this notation to write a procedure `same/parity` that takes one or
> more integers and returns a list of all the arguments that have the
> same even-odd parity as the first argument. For example,
>
> ::: scheme
> (same-parity 1 2 3 4 5 6 7) *(1 3 5 7)*
> (same-parity 2 3 4 5 6 7) *(2 4 6)*
> :::
#### Mapping over lists {#mapping-over-lists .unnumbered}
One extremely useful operation is to apply some transformation to each
element in a list and generate the list of results. For instance, the
following procedure scales each number in a list by a given factor:
::: scheme
(define (scale-list items factor)
  (if (null? items)
      nil
      (cons (\* (car items) factor)
            (scale-list (cdr items) factor))))
(scale-list (list 1 2 3 4 5) 10) *(10 20 30 40 50)*
:::
We can abstract this general idea and capture it as a common pattern
expressed as a higher-order procedure, just as in [Section
1.3](#Section 1.3). The higher-order procedure here is called `map`.
`map` takes as arguments a procedure of one argument and a list, and
returns a list of the results produced by applying the procedure to each
element in the list:[^78]
::: scheme
(define (map proc items)
  (if (null? items)
      nil
      (cons (proc (car items)) (map proc (cdr items)))))
(map abs (list -10 2.5 -11.6 17)) *(10 2.5 11.6 17)*
(map (lambda (x) (\* x x)) (list 1 2 3 4)) *(1 4 9 16)*
:::
Now we can give a new definition of `scale/list` in terms of `map`:
::: scheme
(define (scale-list items factor)
  (map (lambda (x) (\* x factor)) items))
:::
`map` is an important construct, not only because it captures a common
pattern, but because it establishes a higher level of abstraction in
dealing with lists. In the original definition of `scale/list`, the
recursive structure of the program draws attention to the
element-by-element processing of the list. Defining `scale/list` in
terms of `map` suppresses that level of detail and emphasizes that
scaling transforms a list of elements to a list of results. The
difference between the two definitions is not that the computer is
performing a different process (it isn't) but that we think about the
process differently. In effect, `map` helps establish an abstraction
barrier that isolates the implementation of procedures that transform
lists from the details of how the elements of the list are extracted and
combined. Like the barriers shown in [Figure 2.1](#Figure 2.1), this
abstraction gives us the flexibility to change the low-level details of
how sequences are implemented, while preserving the conceptual framework
of operations that transform sequences to sequences. [Section
2.2.3](#Section 2.2.3) expands on this use of sequences as a framework
for organizing programs.
> **[]{#Exercise 2.21 label="Exercise 2.21"}Exercise 2.21:** The
> procedure `square/list` takes a list of numbers as argument and
> returns a list of the squares of those numbers.
>
> ::: scheme
> (square-list (list 1 2 3 4)) *(1 4 9 16)*
> :::
>
> Here are two different definitions of `square/list`. Complete both of
> them by filling in the missing expressions:
>
> ::: scheme
> (define (square-list items) (if (null? items) nil (cons
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> (define (square-list items) (map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))
> :::
> **[]{#Exercise 2.22 label="Exercise 2.22"}Exercise 2.22:** Louis
> Reasoner tries to rewrite the first `square/list` procedure of
> [Exercise 2.21](#Exercise 2.21) so that it evolves an iterative
> process:
>
> ::: scheme
> (define (square-list items)
>   (define (iter things answer)
>     (if (null? things)
>         answer
>         (iter (cdr things) (cons (square (car things)) answer))))
>   (iter items nil))
> :::
>
> Unfortunately, defining `square/list` this way produces the answer
> list in the reverse order of the one desired. Why?
>
> Louis then tries to fix his bug by interchanging the arguments to
> `cons`:
>
> ::: scheme
> (define (square-list items)
>   (define (iter things answer)
>     (if (null? things)
>         answer
>         (iter (cdr things) (cons answer (square (car things))))))
>   (iter items nil))
> :::
>
> This doesn't work either. Explain.
> **[]{#Exercise 2.23 label="Exercise 2.23"}Exercise 2.23:** The
> procedure `for/each` is similar to `map`. It takes as arguments a
> procedure and a list of elements. However, rather than forming a list
> of the results, `for/each` just applies the procedure to each of the
> elements in turn, from left to right. The values returned by applying
> the procedure to the elements are not used at all---`for/each` is used
> with procedures that perform an action, such as printing. For example,
>
> ::: scheme
> (for-each (lambda (x) (newline) (display x))
>           (list 57 321 88))
> *57* *321* *88*
> :::
>
> The value returned by the call to `for/each` (not illustrated above)
> can be something arbitrary, such as true. Give an implementation of
> `for/each`.
### Hierarchical Structures {#Section 2.2.2}
The representation of sequences in terms of lists generalizes naturally
to represent sequences whose elements may themselves be sequences. For
example, we can regard the object `((1 2) 3 4)` constructed by
::: scheme
(cons (list 1 2) (list 3 4))
:::
as a list of three items, the first of which is itself a list, `(1 2)`.
Indeed, this is suggested by the form in which the result is printed by
the interpreter. [Figure 2.5](#Figure 2.5) shows the representation of
this structure in terms of pairs.
[]{#Figure 2.5 label="Figure 2.5"}
![image](fig/chap2/Fig2.5c.pdf){width="91mm"}
> **Figure 2.5:** Structure formed by `(cons (list 1 2) (list 3 4))`.
Another way to think of sequences whose elements are sequences is as
*trees*. The elements of the sequence are the branches of the tree, and
elements that are themselves sequences are subtrees. [Figure
2.6](#Figure 2.6) shows the structure in [Figure 2.5](#Figure 2.5)
viewed as a tree.
[]{#Figure 2.6 label="Figure 2.6"}
![image](fig/chap2/Fig2.6a.pdf){width="22mm"}
> **Figure 2.6:** The list structure in [Figure 2.5](#Figure 2.5) viewed
> as a tree.
Recursion is a natural tool for dealing with tree structures, since we
can often reduce operations on trees to operations on their branches,
which reduce in turn to operations on the branches of the branches, and
so on, until we reach the leaves of the tree. As an example, compare the
`length` procedure of [Section 2.2.1](#Section 2.2.1) with the
`count/leaves` procedure, which returns the total number of leaves of a
tree:
::: scheme
(define x (cons (list 1 2) (list 3 4)))
(length x) *3*
(count-leaves x) *4*
(list x x) *(((1 2) 3 4) ((1 2) 3 4))*
(length (list x x)) *2*
(count-leaves (list x x)) *8*
:::
To implement `count/leaves`, recall the recursive plan for computing
`length`:
- `length` of a list `x` is 1 plus `length` of the `cdr` of `x`.
- `length` of the empty list is 0.
`count/leaves` is similar. The value for the empty list is the same:
- `count/leaves` of the empty list is 0.
But in the reduction step, where we strip off the `car` of the list, we
must take into account that the `car` may itself be a tree whose leaves
we need to count. Thus, the appropriate reduction step is
- `count/leaves` of a tree `x` is `count/leaves` of the `car` of `x`
plus `count/leaves` of the `cdr` of `x`.
Finally, by taking `car`s we reach actual leaves, so we need another
base case:
- `count/leaves` of a leaf is 1.
To aid in writing recursive procedures on trees, Scheme provides the
primitive predicate `pair?`, which tests whether its argument is a pair.
Here is the complete procedure:[^79]
::: scheme
(define (count-leaves x)
  (cond ((null? x) 0)
        ((not (pair? x)) 1)
        (else (+ (count-leaves (car x)) (count-leaves (cdr x))))))
:::
> **[]{#Exercise 2.24 label="Exercise 2.24"}Exercise 2.24:** Suppose we
> evaluate the expression `(list 1 (list 2 (list 3 4)))`. Give the
> result printed by the interpreter, the corresponding box-and-pointer
> structure, and the interpretation of this as a tree (as in [Figure
> 2.6](#Figure 2.6)).
> **[]{#Exercise 2.25 label="Exercise 2.25"}Exercise 2.25:** Give
> combinations of `car`s and `cdr`s that will pick 7 from each of the
> following lists:
>
> ::: scheme
> (1 3 (5 7) 9)
> ((7))
> (1 (2 (3 (4 (5 (6 7))))))
> :::
> **[]{#Exercise 2.26 label="Exercise 2.26"}Exercise 2.26:** Suppose we
> define `x` and `y` to be two lists:
>
> ::: scheme
> (define x (list 1 2 3))
> (define y (list 4 5 6))
> :::
>
> What result is printed by the interpreter in response to evaluating
> each of the following expressions:
>
> ::: scheme
> (append x y)
> (cons x y)
> (list x y)
> :::
> **[]{#Exercise 2.27 label="Exercise 2.27"}Exercise 2.27:** Modify your
> `reverse` procedure of [Exercise 2.18](#Exercise 2.18) to produce a
> `deep/reverse` procedure that takes a list as argument and returns as
> its value the list with its elements reversed and with all sublists
> deep-reversed as well. For example,
>
> ::: scheme
> (define x (list (list 1 2) (list 3 4)))
> x *((1 2) (3 4))*
> (reverse x) *((3 4) (1 2))*
> (deep-reverse x) *((4 3) (2 1))*
> :::
> **[]{#Exercise 2.28 label="Exercise 2.28"}Exercise 2.28:** Write a
> procedure `fringe` that takes as argument a tree (represented as a
> list) and returns a list whose elements are all the leaves of the tree
> arranged in left-to-right order. For example,
>
> ::: scheme
> (define x (list (list 1 2) (list 3 4)))
> (fringe x) *(1 2 3 4)*
> (fringe (list x x)) *(1 2 3 4 1 2 3 4)*
> :::
> **[]{#Exercise 2.29 label="Exercise 2.29"}Exercise 2.29:** A binary
> mobile consists of two branches, a left branch and a right branch.
> Each branch is a rod of a certain length, from which hangs either a
> weight or another binary mobile. We can represent a binary mobile
> using compound data by constructing it from two branches (for example,
> using `list`):
>
> ::: scheme
> (define (make-mobile left right) (list left right))
> :::
>
> A branch is constructed from a `length` (which must be a number)
> together with a `structure`, which may be either a number
> (representing a simple weight) or another mobile:
>
> ::: scheme
> (define (make-branch length structure) (list length structure))
> :::
>
> a. Write the corresponding selectors `left/branch` and
> `right/branch`, which return the branches of a mobile, and
> `branch/length` and `branch/structure`, which return the
> components of a branch.
>
> b. Using your selectors, define a procedure `total/weight` that
> returns the total weight of a mobile.
>
> c. A mobile is said to be *balanced* if the torque applied by its
> top-left branch is equal to that applied by its top-right branch
> (that is, if the length of the left rod multiplied by the weight
> hanging from that rod is equal to the corresponding product for
> the right side) and if each of the submobiles hanging off its
> branches is balanced. Design a predicate that tests whether a
> binary mobile is balanced.
>
> d. Suppose we change the representation of mobiles so that the
> constructors are
>
> ::: scheme
> (define (make-mobile left right) (cons left right))
> (define (make-branch length structure) (cons length structure))
> :::
>
> How much do you need to change your programs to convert to the new
> representation?
#### Mapping over trees {#mapping-over-trees .unnumbered}
Just as `map` is a powerful abstraction for dealing with sequences,
`map` together with recursion is a powerful abstraction for dealing with
trees. For instance, the `scale/tree` procedure, analogous to
`scale/list` of [Section 2.2.1](#Section 2.2.1), takes as arguments a
numeric factor and a tree whose leaves are numbers. It returns a tree of
the same shape, where each number is multiplied by the factor. The
recursive plan for `scale/tree` is similar to the one for
`count/leaves`:
::: scheme
(define (scale-tree tree factor)
  (cond ((null? tree) nil)
        ((not (pair? tree)) (\* tree factor))
        (else (cons (scale-tree (car tree) factor)
                    (scale-tree (cdr tree) factor)))))
(scale-tree (list 1 (list 2 (list 3 4) 5) (list 6 7)) 10)
*(10 (20 (30 40) 50) (60 70))*
:::
Another way to implement `scale/tree` is to regard the tree as a
sequence of sub-trees and use `map`. We map over the sequence, scaling
each sub-tree in turn, and return the list of results. In the base case,
where the tree is a leaf, we simply multiply by the factor:
::: scheme
(define (scale-tree tree factor)
  (map (lambda (sub-tree)
         (if (pair? sub-tree) (scale-tree sub-tree factor) (\* sub-tree factor)))
       tree))
:::
Many tree operations can be implemented by similar combinations of
sequence operations and recursion.
> **[]{#Exercise 2.30 label="Exercise 2.30"}Exercise 2.30:** Define a
> procedure `square/tree` analogous to the `square/list` procedure of
> [Exercise 2.21](#Exercise 2.21). That is, `square/tree` should behave
> as follows:
>
> ::: scheme
> (square-tree (list 1 (list 2 (list 3 4) 5) (list 6 7)))
> *(1 (4 (9 16) 25) (36 49))*
> :::
>
> Define `square/tree` both directly (i.e., without using any
> higher-order procedures) and also by using `map` and recursion.
> **[]{#Exercise 2.31 label="Exercise 2.31"}Exercise 2.31:** Abstract
> your answer to [Exercise 2.30](#Exercise 2.30) to produce a procedure
> `tree/map` with the property that `square/tree` could be defined as
>
> ::: scheme
> (define (square-tree tree) (tree-map square tree))
> :::
> **[]{#Exercise 2.32 label="Exercise 2.32"}Exercise 2.32:** We can
> represent a set as a list of distinct elements, and we can represent
> the set of all subsets of the set as a list of lists. For example, if
> the set is `(1 2 3)`, then the set of all subsets is
> `(() (3) (2) (2 3) (1) (1 3) (1 2) (1 2 3))`. Complete the following
> definition of a procedure that generates the set of subsets of a set
> and give a clear explanation of why it works:
>
> ::: scheme
> (define (subsets s) (if (null? s) (list nil) (let ((rest (subsets (cdr
> s)))) (append rest (map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ rest)))))
> :::
### Sequences as Conventional Interfaces {#Section 2.2.3}
In working with compound data, we've stressed how data abstraction
permits us to design programs without becoming enmeshed in the details
of data representations, and how abstraction preserves for us the
flexibility to experiment with alternative representations. In this
section, we introduce another powerful design principle for working with
data structures---the use of *conventional interfaces*.
In [Section 1.3](#Section 1.3) we saw how program abstractions,
implemented as higher-order procedures, can capture common patterns in
programs that deal with numerical data. Our ability to formulate
analogous operations for working with compound data depends crucially on
the style in which we manipulate our data structures. Consider, for
example, the following procedure, analogous to the `count/leaves`
procedure of [Section 2.2.2](#Section 2.2.2), which takes a tree as
argument and computes the sum of the squares of the leaves that are odd:
::: scheme
(define (sum-odd-squares tree)
  (cond ((null? tree) 0)
        ((not (pair? tree)) (if (odd? tree) (square tree) 0))
        (else (+ (sum-odd-squares (car tree))
                 (sum-odd-squares (cdr tree))))))
:::
On the surface, this procedure is very different from the following one,
which constructs a list of all the even Fibonacci numbers
${\rm Fib}(k)$, where $k$ is less than or equal to a given integer $n$:
::: scheme
(define (even-fibs n)
  (define (next k)
    (if (\> k n) nil
        (let ((f (fib k)))
          (if (even? f) (cons f (next (+ k 1))) (next (+ k 1))))))
  (next 0))
:::
Despite the fact that these two procedures are structurally very
different, a more abstract description of the two computations reveals a
great deal of similarity. The first program
- enumerates the leaves of a tree;
- filters them, selecting the odd ones;
- squares each of the selected ones; and
- accumulates the results using `+`, starting with 0.
The second program
- enumerates the integers from 0 to $n$;
- computes the Fibonacci number for each integer;
- filters them, selecting the even ones; and
- accumulates the results using `cons`, starting with the empty list.
[]{#Figure 2.7 label="Figure 2.7"}
![image](fig/chap2/Fig2.7d.pdf){width="111mm"}
> **Figure 2.7:** The signal-flow plans for the procedures
> `sum/odd/squares` (top) and `even/fibs` (bottom) reveal the
> commonality between the two programs.
A signal-processing engineer would find it natural to conceptualize
these processes in terms of signals flowing through a cascade of stages,
each of which implements part of the program plan, as shown in [Figure
2.7](#Figure 2.7). In `sum/odd/squares`, we begin with an *enumerator*,
which generates a "signal" consisting of the leaves of a given tree.
This signal is passed through a *filter*, which eliminates all but the
odd elements. The resulting signal is in turn passed through a *map*,
which is a "transducer" that applies the `square` procedure to each
element. The output of the map is then fed to an *accumulator*, which
combines the elements using `+`, starting from an initial 0. The plan
for `even/fibs` is analogous.
Unfortunately, the two procedure definitions above fail to exhibit this
signal-flow structure. For instance, if we examine the `sum/odd/squares`
procedure, we find that the enumeration is implemented partly by the
`null?` and `pair?` tests and partly by the tree-recursive structure of
the procedure. Similarly, the accumulation is found partly in the tests
and partly in the addition used in the recursion. In general, there are
no distinct parts of either procedure that correspond to the elements in
the signal-flow description. Our two procedures decompose the
computations in a different way, spreading the enumeration over the
program and mingling it with the map, the filter, and the accumulation.
If we could organize our programs to make the signal-flow structure
manifest in the procedures we write, this would increase the conceptual
clarity of the resulting code.
#### Sequence Operations {#sequence-operations .unnumbered}
The key to organizing programs so as to more clearly reflect the
signal-flow structure is to concentrate on the "signals" that flow from
one stage in the process to the next. If we represent these signals as
lists, then we can use list operations to implement the processing at
each of the stages. For instance, we can implement the mapping stages of
the signal-flow diagrams using the `map` procedure from [Section
2.2.1](#Section 2.2.1):
::: scheme
(map square (list 1 2 3 4 5)) *(1 4 9 16 25)*
:::
Filtering a sequence to select only those elements that satisfy a given
predicate is accomplished by
::: scheme
(define (filter predicate sequence)
  (cond ((null? sequence) nil)
        ((predicate (car sequence))
         (cons (car sequence) (filter predicate (cdr sequence))))
        (else (filter predicate (cdr sequence)))))
:::
For example,
::: scheme
(filter odd? (list 1 2 3 4 5)) *(1 3 5)*
:::
Accumulations can be implemented by
::: scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence) (accumulate op initial (cdr sequence)))))
(accumulate + 0 (list 1 2 3 4 5)) *15*
(accumulate \* 1 (list 1 2 3 4 5)) *120*
(accumulate cons nil (list 1 2 3 4 5)) *(1 2 3 4 5)*
:::
All that remains to implement signal-flow diagrams is to enumerate the
sequence of elements to be processed. For `even/fibs`, we need to
generate the sequence of integers in a given range, which we can do as
follows:
::: scheme
(define (enumerate-interval low high)
  (if (\> low high)
      nil
      (cons low (enumerate-interval (+ low 1) high))))
(enumerate-interval 2 7) *(2 3 4 5 6 7)*
:::
To enumerate the leaves of a tree, we can use[^80]
::: scheme
(define (enumerate-tree tree)
  (cond ((null? tree) nil)
        ((not (pair? tree)) (list tree))
        (else (append (enumerate-tree (car tree))
                      (enumerate-tree (cdr tree))))))
(enumerate-tree (list 1 (list 2 (list 3 4)) 5)) *(1 2 3 4 5)*
:::
Now we can reformulate `sum/odd/squares` and `even/fibs` as in the
signal-flow diagrams. For `sum/odd/squares`, we enumerate the sequence
of leaves of the tree, filter this to keep only the odd numbers in the
sequence, square each element, and sum the results:
::: scheme
(define (sum-odd-squares tree)
  (accumulate + 0 (map square (filter odd? (enumerate-tree tree)))))
:::
For `even/fibs`, we enumerate the integers from 0 to $n$, generate the
Fibonacci number for each of these integers, filter the resulting
sequence to keep only the even elements, and accumulate the results into
a list:
::: scheme
(define (even-fibs n)
  (accumulate cons nil (filter even? (map fib (enumerate-interval 0 n)))))
:::
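As a quick check (assuming the `fib` procedure of Chapter 1 together with the definitions above), the reformulated procedures agree with the originals:
::: scheme
(sum-odd-squares (list 1 (list 2 (list 3 4)) 5)) *35*
(even-fibs 10) *(0 2 8 34)*
:::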
The value of expressing programs as sequence operations is that this
helps us make program designs that are modular, that is, designs that
are constructed by combining relatively independent pieces. We can
encourage modular design by providing a library of standard components
together with a conventional interface for connecting the components in
flexible ways.
Modular construction is a powerful strategy for controlling complexity
in engineering design. In real signal-processing applications, for
example, designers regularly build systems by cascading elements
selected from standardized families of filters and transducers.
Similarly, sequence operations provide a library of standard program
elements that we can mix and match. For instance, we can reuse pieces
from the `sum/odd/squares` and `even/fibs` procedures in a program that
constructs a list of the squares of the first $n + 1$ Fibonacci numbers:
::: scheme
(define (list-fib-squares n)
  (accumulate cons nil (map square (map fib (enumerate-interval 0 n)))))
(list-fib-squares 10)
*(0 1 1 4 9 25 64 169 441 1156 3025)*
:::
We can rearrange the pieces and use them in computing the product of the
squares of the odd integers in a sequence:
::: scheme
(define (product-of-squares-of-odd-elements sequence)
  (accumulate \* 1 (map square (filter odd? sequence))))
(product-of-squares-of-odd-elements (list 1 2 3 4 5)) *225*
:::
We can also formulate conventional data-processing applications in terms
of sequence operations. Suppose we have a sequence of personnel records
and we want to find the salary of the highest-paid programmer. Assume
that we have a selector `salary` that returns the salary of a record,
and a predicate `programmer?` that tests if a record is for a
programmer. Then we can write
::: scheme
(define (salary-of-highest-paid-programmer records)
  (accumulate max 0 (map salary (filter programmer? records))))
:::
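To make this concrete, here is a minimal sketch in which each record is assumed to be a three-element list of name, role, and salary; the selectors below are illustrative stand-ins, not a prescribed representation:
::: scheme
(define (salary record) (car (cdr (cdr record))))
(define (programmer? record) (eq? (car (cdr record)) 'programmer))
(salary-of-highest-paid-programmer
 (list (list 'alice 'programmer 95000)
       (list 'ben 'manager 120000)
       (list 'cy 'programmer 105000)))
*105000*
:::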
These examples give just a hint of the vast range of operations that can
be expressed as sequence operations.[^81]
Sequences, implemented here as lists, serve as a conventional interface
that permits us to combine processing modules. Additionally, when we
uniformly represent structures as sequences, we have localized the
data-structure dependencies in our programs to a small number of
sequence operations. By changing these, we can experiment with
alternative representations of sequences, while leaving the overall
design of our programs intact. We will exploit this capability in
[Section 3.5](#Section 3.5), when we generalize the sequence-processing
paradigm to admit infinite sequences.
> **[]{#Exercise 2.33 label="Exercise 2.33"}Exercise 2.33:** Fill in the
> missing expressions to complete the following definitions of some
> basic list-manipulation operations as accumulations:
>
> ::: scheme
> (define (map p sequence) (accumulate (lambda (x y)
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ) nil
> sequence)) (define (append seq1 seq2) (accumulate cons
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )) (define
> (length sequence) (accumulate
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ 0
> sequence))
> :::
> **[]{#Exercise 2.34 label="Exercise 2.34"}Exercise 2.34:** Evaluating
> a polynomial in $x$ at a given value of $x$ can be formulated as an
> accumulation. We evaluate the polynomial
>
> $$a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$$
>
> using a well-known algorithm called *Horner's rule*, which structures
> the computation as
>
> $$(\dots (a_n x + a_{n-1}) x + \dots + a_1) x + a_0.$$
>
> In other words, we start with $a_n$, multiply by $x$, add $a_{n-1}$,
> multiply by $x$, and so on, until we reach $a_0$.[^82]
>
> Fill in the following template to produce a procedure that evaluates a
> polynomial using Horner's rule. Assume that the coefficients of the
> polynomial are arranged in a sequence, from $a_0$ through $a_n$.
>
> ::: scheme
> (define (horner-eval x coefficient-sequence) (accumulate (lambda
> (this-coeff higher-terms)
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ) 0
> coefficient-sequence))
> :::
>
> For example, to compute $1 + 3x + 5x^3 + x^5$ at $x = 2$ you would
> evaluate
>
> ::: scheme
> (horner-eval 2 (list 1 3 0 5 0 1))
> :::
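>
> Assuming ordinary arithmetic, this call should return
> $1 + 3 \cdot 2 + 5 \cdot 2^3 + 2^5 = 79$, which makes a handy check on a
> completed `horner/eval`.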
> **[]{#Exercise 2.35 label="Exercise 2.35"}Exercise 2.35:** Redefine
> `count/leaves` from [Section 2.2.2](#Section 2.2.2) as an
> accumulation:
>
> ::: scheme
> (define (count-leaves t) (accumulate
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ (map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> :::
> **[]{#Exercise 2.36 label="Exercise 2.36"}Exercise 2.36:** The
> procedure `accumulate/n` is similar to `accumulate` except that it
> takes as its third argument a sequence of sequences, which are all
> assumed to have the same number of elements. It applies the designated
> accumulation procedure to combine all the first elements of the
> sequences, all the second elements of the sequences, and so on, and
> returns a sequence of the results. For instance, if `s` is a sequence
> containing four sequences, `((1 2 3) (4 5 6) (7 8 9) (10 11 12))`,
> then the value of `(accumulate/n + 0 s)` should be the sequence
> `(22 26 30)`. Fill in the missing expressions in the following
> definition of `accumulate/n`:
>
> ::: scheme
> (define (accumulate-n op init seqs) (if (null? (car seqs)) nil (cons
> (accumulate op init
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )
> (accumulate-n op init
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))))
> :::
> **[]{#Exercise 2.37 label="Exercise 2.37"}Exercise 2.37:** Suppose we
> represent vectors $\hbox{\bf v} = (v_i)$ as sequences of numbers, and
> matrices $\hbox{\bf m} = (m_{i\!j})$ as sequences of vectors (the rows
> of the matrix). For example, the matrix
>
> $$\left(
> \begin{array}{cccc}
> 1 & 2 & 3 & 4 \\
> 4 & 5 & 6 & 6 \\
> 6 & 7 & 8 & 9
> \end{array}
> \right)$$
>
> is represented as the sequence `((1 2 3 4) (4 5 6 6) (6 7 8 9))`. With
> this representation, we can use sequence operations to concisely
> express the basic matrix and vector operations. These operations
> (which are described in any book on matrix algebra) are the following:
>
> $$\begin{array}{rl}
> \hbox{\tt (dot-product v w)} & {\rm returns\;the\;sum\;} \Sigma_i v_i w_i; \\
> \hbox{\tt (matrix-*-vector m v)} & {\rm returns\;the\;vector\;} \hbox{\bf t}, \\
> & {\rm where\;} t_i = \Sigma_{\kern-0.1em j} m_{i\!j} v_{\kern-0.1em j}; \\
> \hbox{\tt (matrix-*-matrix m n)} & {\rm returns\;the\;matrix\;} \hbox{\bf p}, \\
> & {\rm where\;} p_{i\!j} = \Sigma_k m_{ik} n_{k\!j}; \\
> \hbox{\tt (transpose m)} & {\rm returns\;the\;matrix\;} \hbox{\bf n}, \\
> & {\rm where\;} n_{i\!j} = m_{\kern-0.1em ji}.
> \end{array}$$
>
> We can define the dot product as[^83]
>
> ::: scheme
> (define (dot-product v w) (accumulate + 0 (map \* v w)))
> :::
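>
> For instance, with this definition `(dot-product (list 1 2 3) (list 4 5 6))`
> should evaluate to $4 + 10 + 18 = 32$.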
>
> Fill in the missing expressions in the following procedures for
> computing the other matrix operations. (The procedure `accumulate/n`
> is defined in [Exercise 2.36](#Exercise 2.36).)
>
> ::: scheme
> (define (matrix-\*-vector m v) (map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ m))
> (define (transpose mat) (accumulate-n
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ mat))
> (define (matrix-\*-matrix m n) (let ((cols (transpose n))) (map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ m)))
> :::
> **[]{#Exercise 2.38 label="Exercise 2.38"}Exercise 2.38:** The
> `accumulate` procedure is also known as `fold/right`, because it
> combines the first element of the sequence with the result of
> combining all the elements to the right. There is also a `fold/left`,
> which is similar to `fold/right`, except that it combines elements
> working in the opposite direction:
>
> ::: scheme
> (define (fold-left op initial sequence)
>   (define (iter result rest)
>     (if (null? rest)
>         result
>         (iter (op result (car rest)) (cdr rest))))
>   (iter initial sequence))
> :::
>
> What are the values of
>
> ::: scheme
> (fold-right / 1 (list 1 2 3))
> (fold-left / 1 (list 1 2 3))
> (fold-right list nil (list 1 2 3))
> (fold-left list nil (list 1 2 3))
> :::
>
> Give a property that `op` should satisfy to guarantee that
> `fold/right` and `fold/left` will produce the same values for any
> sequence.
> **[]{#Exercise 2.39 label="Exercise 2.39"}Exercise 2.39:** Complete
> the following definitions of `reverse` ([Exercise
> 2.18](#Exercise 2.18)) in terms of `fold/right` and `fold/left` from
> [Exercise 2.38](#Exercise 2.38):
>
> ::: scheme
> (define (reverse sequence) (fold-right (lambda (x y)
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ) nil
> sequence)) (define (reverse sequence) (fold-left (lambda (x y)
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ) nil
> sequence))
> :::
#### Nested Mappings {#nested-mappings .unnumbered}
We can extend the sequence paradigm to include many computations that
are commonly expressed using nested loops.[^84] Consider this problem:
Given a positive integer $n$, find all ordered pairs of distinct
positive integers $i$ and $j$, where $1 \le j < i \le n$, such that
$i + j$ is prime. For example, if $n$ is 6, then the pairs are the
following:
$$\begin{array}{c|ccccccc}
i     & 2 & 3 & 4 & 4 & 5 & 6 & 6  \\
j     & 1 & 2 & 1 & 3 & 2 & 1 & 5  \\
\hline
i + j & 3 & 5 & 5 & 7 & 7 & 7 & 11
\end{array}$$
A natural way to organize this computation is to generate the sequence
of all ordered pairs of positive integers less than or equal to $n$,
filter to select those pairs whose sum is prime, and then, for each pair
$(i, j)$ that passes through the filter, produce the triple
$(i, j, i + j)$.
Here is a way to generate the sequence of pairs: For each integer
$i \le n$, enumerate the integers $j < i$, and for each such $i$ and $j$
generate the pair $(i, j)$. In terms of sequence operations, we map
along the sequence `(enumerate/interval 1 n)`. For each $i$ in this
sequence, we map along the sequence `(enumerate/interval 1 (- i 1))`.
For each $j$ in this latter sequence, we generate the pair `(list i j)`.
This gives us a sequence of pairs for each $i$. Combining all the
sequences for all the $i$ (by accumulating with `append`) produces the
required sequence of pairs:[^85]
::: scheme
(accumulate append nil
            (map (lambda (i)
                   (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1))))
                 (enumerate-interval 1 n)))
:::
The combination of mapping and accumulating with `append` is so common
in this sort of program that we will isolate it as a separate procedure:
::: scheme
(define (flatmap proc seq) (accumulate append nil (map proc seq)))
:::
Now filter this sequence of pairs to find those whose sum is prime. The
filter predicate is called for each element of the sequence; its
argument is a pair and it must extract the integers from the pair. Thus,
the predicate to apply to each element in the sequence is
::: scheme
(define (prime-sum? pair) (prime? (+ (car pair) (cadr pair))))
:::
Finally, generate the sequence of results by mapping over the filtered
pairs using the following procedure, which constructs a triple
consisting of the two elements of the pair along with their sum:
::: scheme
(define (make-pair-sum pair)
  (list (car pair) (cadr pair) (+ (car pair) (cadr pair))))
:::
Combining all these steps yields the complete procedure:
::: smallscheme
(define (prime-sum-pairs n)
  (map make-pair-sum
       (filter prime-sum?
               (flatmap (lambda (i)
                          (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1))))
                        (enumerate-interval 1 n)))))
:::
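For example, assuming the `prime?` predicate of Chapter 1, the case $n = 6$ reproduces the table shown above:
::: scheme
(prime-sum-pairs 6)
*((2 1 3) (3 2 5) (4 1 5) (4 3 7) (5 2 7) (6 1 7) (6 5 11))*
:::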
Nested mappings are also useful for sequences other than those that
enumerate intervals. Suppose we wish to generate all the permutations of
a set $S$; that is, all the ways of ordering the items in the set. For
instance, the permutations of $\{1, 2, 3\}$ are $\{1, 2, 3\}$,
$\{1, 3, 2\}$, $\{2, 1, 3\}$, $\{2, 3, 1\}$, $\{3, 1, 2\}$, and
$\{3, 2, 1\}$. Here is a plan for generating the permutations of $S$:
For each item $x$ in $S$, recursively generate the sequence of
permutations of $S - x$,[^86] and adjoin $x$ to the front of each one.
This yields, for each $x$ in $S$, the sequence of permutations of $S$
that begin with $x$. Combining these sequences for all $x$ gives all the
permutations of $S$:[^87]
::: scheme
(define (permutations s)
  (if (null? s) [; empty set?]{.roman}
      (list nil) [; sequence containing empty set]{.roman}
      (flatmap (lambda (x)
                 (map (lambda (p) (cons x p)) (permutations (remove x s))))
               s)))
:::
Notice how this strategy reduces the problem of generating permutations
of $S$ to the problem of generating the permutations of sets with fewer
elements than $S$. In the terminal case, we work our way down to the
empty list, which represents a set of no elements. For this, we generate
`(list nil)`, which is a sequence with one item, namely the set with no
elements. The `remove` procedure used in `permutations` returns all the
items in a given sequence except for a given item. This can be expressed
as a simple filter:
::: scheme
(define (remove item sequence)
  (filter (lambda (x) (not (= x item))) sequence))
:::
> **[]{#Exercise 2.40 label="Exercise 2.40"}Exercise 2.40:** Define a
> procedure `unique/pairs` that, given an integer $n$, generates the
> sequence of pairs $(i, j)$ with $1 \le j < i \le n$. Use
> `unique/pairs` to simplify the definition of `prime/sum/pairs` given
> above.
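>
> One possible sketch of `unique/pairs` simply packages the `flatmap`
> expression shown earlier in this section:
>
> ::: scheme
> (define (unique-pairs n)
>   (flatmap (lambda (i)
>              (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1))))
>            (enumerate-interval 1 n)))
> :::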
> **[]{#Exercise 2.41 label="Exercise 2.41"}Exercise 2.41:** Write a
> procedure to find all ordered triples of distinct positive integers
> $i$, $j$, and $k$ less than or equal to a given integer $n$ that sum
> to a given integer $s$.
> **[]{#Exercise 2.42 label="Exercise 2.42"}Exercise 2.42:** The
> "eight-queens puzzle" asks how to place eight queens on a chessboard
> so that no queen is in check from any other (i.e., no two queens are
> in the same row, column, or diagonal). One possible solution is shown
> in [Figure 2.8](#Figure 2.8). One way to solve the puzzle is to work
> across the board, placing a queen in each column. Once we have placed
> $k - 1$ queens, we must place the $k^{\mathrm{th}}$ queen in a
> position where it does not check any of the queens already on the
> board. We can formulate this approach recursively: Assume that we have
> already generated the sequence of all possible ways to place $k - 1$
> queens in the first $k - 1$ columns of the board. For each of these
> ways, generate an extended set of positions by placing a queen in each
> row of the $k^{\mathrm{th}}$ column. Now filter these, keeping only
> the positions for which the queen in the $k^{\mathrm{th}}$ column is
> safe with respect to the other queens. This produces the sequence of
> all ways to place $k$ queens in the first $k$ columns. By continuing
> this process, we will produce not only one solution, but all solutions
> to the puzzle.
>
> []{#Figure 2.8 label="Figure 2.8"}
>
> ![image](fig/chap2/Fig2.8c.pdf){width="48mm"}
>
> **Figure 2.8:** A solution to the eight-queens puzzle.
>
> We implement this solution as a procedure `queens`, which returns a
> sequence of all solutions to the problem of placing $n$ queens on an
> $n \times n$ chessboard. `queens` has an internal procedure
> `queen/cols` that returns the sequence of all ways to place queens in
> the first $k$ columns of the board.
>
> ::: scheme
> (define (queens board-size)
>   (define (queen-cols k)
>     (if (= k 0)
>         (list empty-board)
>         (filter (lambda (positions) (safe? k positions))
>                 (flatmap
>                  (lambda (rest-of-queens)
>                    (map (lambda (new-row) (adjoin-position new-row k rest-of-queens))
>                         (enumerate-interval 1 board-size)))
>                  (queen-cols (- k 1))))))
>   (queen-cols board-size))
> :::
>
> In this procedure `rest/of/queens` is a way to place $k - 1$ queens in
> the first $k - 1$ columns, and `new/row` is a proposed row in which to
> place the queen for the $k^{\mathrm{th}}$ column. Complete the program
> by implementing the representation for sets of board positions,
> including the procedure `adjoin/position`, which adjoins a new
> row-column position to a set of positions, and `empty/board`, which
> represents an empty set of positions. You must also write the
> procedure `safe?`, which determines, for a set of positions, whether
> the queen in the $k^{\mathrm{th}}$ column is safe with respect to the
> others. (Note that we need only check whether the new queen is
> safe---the other queens are already guaranteed safe with respect to
> each other.)
> **[]{#Exercise 2.43 label="Exercise 2.43"}Exercise 2.43:** Louis
> Reasoner is having a terrible time doing [Exercise
> 2.42](#Exercise 2.42). His `queens` procedure seems to work, but it
> runs extremely slowly. (Louis never does manage to wait long enough
> for it to solve even the $6\times6$ case.) When Louis asks Eva Lu Ator
> for help, she points out that he has interchanged the order of the
> nested mappings in the `flatmap`, writing it as
>
> ::: scheme
> (flatmap
>  (lambda (new-row)
>    (map (lambda (rest-of-queens) (adjoin-position new-row k rest-of-queens))
>         (queen-cols (- k 1))))
>  (enumerate-interval 1 board-size))
> :::
>
> Explain why this interchange makes the program run slowly. Estimate
> how long it will take Louis's program to solve the eight-queens
> puzzle, assuming that the program in [Exercise 2.42](#Exercise 2.42)
> solves the puzzle in time $T$.
### Example: A Picture Language {#Section 2.2.4}
This section presents a simple language for drawing pictures that
illustrates the power of data abstraction and closure, and also exploits
higher-order procedures in an essential way. The language is designed to
make it easy to experiment with patterns such as the ones in [Figure
2.9](#Figure 2.9), which are composed of repeated elements that are
shifted and scaled.[^88] In this language, the data objects being
combined are represented as procedures rather than as list structure.
Just as `cons`, which satisfies the closure property, allowed us to
easily build arbitrarily complicated list structure, the operations in
this language, which also satisfy the closure property, allow us to
easily build arbitrarily complicated patterns.
#### The picture language {#the-picture-language .unnumbered}
When we began our study of programming in [Section 1.1](#Section 1.1),
we emphasized the importance of describing a language by focusing on the
language's primitives, its means of combination, and its means of
abstraction. We'll follow that framework here.
[]{#Figure 2.9 label="Figure 2.9"}
![image](fig/chap2/Fig2.9-bigger.png){width="111mm"}
**Figure 2.9:** Designs generated with the picture language.
[]{#Figure 2.10 label="Figure 2.10"}
![image](fig/chap2/Fig2.10.pdf){width="50mm"}
> **Figure 2.10:** Images produced by the `wave` painter, with respect
> to four different frames. The frames, shown with dotted lines, are not
> part of the images.
Part of the elegance of this picture language is that there is only one
kind of element, called a *painter*. A painter draws an image that is
shifted and scaled to fit within a designated parallelogram-shaped
frame. For example, there's a primitive painter we'll call `wave` that
makes a crude line drawing, as shown in [Figure 2.10](#Figure 2.10). The
actual shape of the drawing depends on the frame---all four images in
figure 2.10 are produced by the same `wave` painter, but with respect to
four different frames. Painters can be more elaborate than this: The
primitive painter called `rogers` paints a picture of
MIT's founder, William Barton Rogers, as shown in [Figure
2.11](#Figure 2.11).[^89] The four images in figure 2.11 are drawn with
respect to the same four frames as the `wave` images in figure 2.10.
[]{#Figure 2.11 label="Figure 2.11"}
![image](fig/chap2/Fig2.11.pdf){width="48mm"}
> **Figure 2.11:** Images of William Barton Rogers, founder and first
> president of MIT, painted with respect to the same four
> frames as in [Figure 2.10](#Figure 2.10) (original image from
> Wikimedia Commons).
To combine images, we use various operations that construct new painters
from given painters. For example, the `beside` operation takes two
painters and produces a new, compound painter that draws the first
painter's image in the left half of the frame and the second painter's
image in the right half of the frame. Similarly, `below` takes two
painters and produces a compound painter that draws the first painter's
image below the second painter's image. Some operations transform a
single painter to produce a new painter. For example, `flip/vert` takes
a painter and produces a painter that draws its image upside-down, and
`flip/horiz` produces a painter that draws the original painter's image
left-to-right reversed.
[Figure 2.12](#Figure 2.12) shows the drawing of a painter called
`wave4` that is built up in two stages starting from `wave`:
::: scheme
(define wave2 (beside wave (flip-vert wave)))
(define wave4 (below wave2 wave2))
:::
[]{#Figure 2.12 label="Figure 2.12"}
![image](fig/chap2/Fig2.12.pdf){width="50mm"}
> **Figure 2.12:** Creating a complex figure, starting from the `wave`
> painter of [Figure 2.10](#Figure 2.10).
In building up a complex image in this manner we are exploiting the fact
that painters are closed under the language's means of combination. The
`beside` or `below` of two painters is itself a painter; therefore, we
can use it as an element in making more complex painters. As with
building up list structure using `cons`, the closure of our data under
the means of combination is crucial to the ability to create complex
structures while using only a few operations.
Once we can combine painters, we would like to be able to abstract
typical patterns of combining painters. We will implement the painter
operations as Scheme procedures. This means that we don't need a special
abstraction mechanism in the picture language: Since the means of
combination are ordinary Scheme procedures, we automatically have the
capability to do anything with painter operations that we can do with
procedures. For example, we can abstract the pattern in `wave4` as
::: scheme
(define (flipped-pairs painter)
  (let ((painter2 (beside painter (flip-vert painter))))
    (below painter2 painter2)))
:::
and define `wave4` as an instance of this pattern:
::: scheme
(define wave4 (flipped-pairs wave))
:::
[]{#Figure 2.13 label="Figure 2.13"}
![image](fig/chap2/Fig2.13a.pdf){width="111mm"}
**Figure 2.13:** Recursive plans for `right/split` and `corner/split`.
We can also define recursive operations. Here's one that makes painters
split and branch towards the right as shown in [Figure
2.13](#Figure 2.13) and [Figure 2.14](#Figure 2.14):
::: scheme
(define (right-split painter n)
  (if (= n 0)
      painter
      (let ((smaller (right-split painter (- n 1))))
        (beside painter (below smaller smaller)))))
:::
We can produce balanced patterns by branching upwards as well as towards
the right (see exercise [Exercise 2.44](#Exercise 2.44) and figures
[Figure 2.13](#Figure 2.13) and [Figure 2.14](#Figure 2.14)):
::: scheme
(define (corner-split painter n)
  (if (= n 0)
      painter
      (let ((up (up-split painter (- n 1)))
            (right (right-split painter (- n 1))))
        (let ((top-left (beside up up))
              (bottom-right (below right right))
              (corner (corner-split painter (- n 1))))
          (beside (below painter top-left)
                  (below bottom-right corner))))))
:::
By placing four copies of a `corner/split` appropriately, we obtain a
pattern called `square/limit`, whose application to `wave` and `rogers`
is shown in [Figure 2.9](#Figure 2.9):
::: scheme
(define (square-limit painter n)
  (let ((quarter (corner-split painter n)))
    (let ((half (beside (flip-horiz quarter) quarter)))
      (below (flip-vert half) half))))
:::
> **[]{#Exercise 2.44 label="Exercise 2.44"}Exercise 2.44:** Define the
> procedure `up/split` used by `corner/split`. It is similar to
> `right/split`, except that it switches the roles of `below` and
> `beside`.
[]{#Figure 2.14 label="Figure 2.14"}
![image](fig/chap2/Fig2.14b.pdf){width="91mm"}
> **Figure 2.14:** The recursive operations `right/split` and
> `corner/split` applied to the painters `wave` and `rogers`. Combining
> four `corner/split` figures produces symmetric `square/limit` designs
> as shown in [Figure 2.9](#Figure 2.9).
#### Higher-order operations {#higher-order-operations .unnumbered}
In addition to abstracting patterns of combining painters, we can work
at a higher level, abstracting patterns of combining painter operations.
That is, we can view the painter operations as elements to manipulate
and can write means of combination for these elements---procedures that
take painter operations as arguments and create new painter operations.
For example, `flipped/pairs` and `square/limit` each arrange four copies
of a painter's image in a square pattern; they differ only in how they
orient the copies. One way to abstract this pattern of painter
combination is with the following procedure, which takes four
one-argument painter operations and produces a painter operation that
transforms a given painter with those four operations and arranges the
results in a square. `tl`, `tr`, `bl`, and `br` are the transformations
to apply to the top left copy, the top right copy, the bottom left copy,
and the bottom right copy, respectively.
::: scheme
(define (square-of-four tl tr bl br) (lambda (painter) (let ((top
(beside (tl painter) (tr painter))) (bottom (beside (bl painter) (br
painter)))) (below bottom top))))
:::
Then `flipped/pairs` can be defined in terms of `square/of/four` as
follows:[^90]
::: scheme
(define (flipped-pairs painter) (let ((combine4 (square-of-four identity
flip-vert identity flip-vert))) (combine4 painter)))
:::
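This assumes an `identity` procedure that simply returns its argument; if your Scheme does not already supply one, a one-line definition suffices:
::: scheme
(define (identity painter) painter)
:::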
and `square/limit` can be expressed as[^91]
::: scheme
(define (square-limit painter n) (let ((combine4 (square-of-four
flip-horiz identity rotate180 flip-vert))) (combine4 (corner-split
painter n))))
:::
> **[]{#Exercise 2.45 label="Exercise 2.45"}Exercise 2.45:**
> `right/split` and `up/split` can be expressed as instances of a
> general splitting operation. Define a procedure `split` with the
> property that evaluating
>
> ::: scheme
> (define right-split (split beside below)) (define up-split (split
> below beside))
> :::
>
> produces procedures `right/split` and `up/split` with the same
> behaviors as the ones already defined.
#### Frames {#frames .unnumbered}
Before we can show how to implement painters and their means of
combination, we must first consider frames. A frame can be described by
three vectors---an origin vector and two edge vectors. The origin vector
specifies the offset of the frame's origin from some absolute origin in
the plane, and the edge vectors specify the offsets of the frame's
corners from its origin. If the edges are perpendicular, the frame will
be rectangular. Otherwise the frame will be a more general
parallelogram.
[Figure 2.15](#Figure 2.15) shows a frame and its associated vectors. In
accordance with data abstraction, we need not be specific yet about how
frames are represented, other than to say that there is a constructor
`make/frame`, which takes three vectors and produces a frame, and three
corresponding selectors `origin/frame`, `edge1/frame`, and `edge2/frame`
(see [Exercise 2.47](#Exercise 2.47)).
[]{#Figure 2.15 label="Figure 2.15"}
![image](fig/chap2/Fig2.15a.pdf){width="51mm"}
> **Figure 2.15:** A frame is described by three vectors --- an origin
> and two edges.
We will use coordinates in the unit square $(0 \le x, y \le 1)$ to
specify images. With each frame, we associate a *frame coordinate map*,
which will be used to shift and scale images to fit the frame. The map
transforms the unit square into the frame by mapping the vector
$\hbox{\bf v} = (x, y)$ to the vector sum
$${\rm Origin(Frame)} + x \cdot {\rm Edge_1(Frame)} + y \cdot {\rm Edge_2(Frame)}.$$
For example, (0, 0) is mapped to the origin of the frame, (1, 1) to the
vertex diagonally opposite the origin, and (0.5, 0.5) to the center of
the frame. We can create a frame's coordinate map with the following
procedure:[^92]
::: scheme
(define (frame-coord-map frame) (lambda (v) (add-vect (origin-frame
frame) (add-vect (scale-vect (xcor-vect v) (edge1-frame frame))
(scale-vect (ycor-vect v) (edge2-frame frame))))))
:::
Observe that applying `frame/coord/map` to a frame returns a procedure
that, given a vector, returns a vector. If the argument vector is in the
unit square, the result vector will be in the frame. For example,
::: scheme
((frame-coord-map a-frame) (make-vect 0 0))
:::
returns the same vector as
::: scheme
(origin-frame a-frame)
:::
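As a more concrete (hypothetical) illustration, assume the vector and frame constructors of [Exercise 2.46](#Exercise 2.46) and [Exercise 2.47](#Exercise 2.47) and a frame whose origin is (2, 1) and whose edges are (4, 0) and (0, 3); the coordinate map then sends the center of the unit square to the center of that frame:
::: scheme
(define a-frame
  (make-frame (make-vect 2 1)     ; origin
              (make-vect 4 0)     ; edge1
              (make-vect 0 3)))   ; edge2

((frame-coord-map a-frame) (make-vect 0.5 0.5))
; => the vector (4, 2.5), i.e. origin + (1/2) edge1 + (1/2) edge2
:::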
> **[]{#Exercise 2.46 label="Exercise 2.46"}Exercise 2.46:** A
> two-dimensional vector $\hbox{\bf v}$ running from the origin to a
> point can be represented as a pair consisting of an $x$-coordinate and
> a $y$-coordinate. Implement a data abstraction for vectors by giving a
> constructor `make/vect` and corresponding selectors `xcor/vect` and
> `ycor/vect`. In terms of your selectors and constructor, implement
> procedures `add/vect`, `sub/vect`, and `scale/vect` that perform the
> operations vector addition, vector subtraction, and multiplying a
> vector by a scalar:
>
> $$\begin{array}{r@{{}={}}l}
> (x_1, y_1) + (x_2, y_2) & (x_1 + x_2, y_1 + y_2), \\
> (x_1, y_1) - (x_2, y_2) & (x_1 - x_2, y_1 - y_2), \\
> s \cdot (x, y) & (sx, sy).
> \end{array}$$
> **[]{#Exercise 2.47 label="Exercise 2.47"}Exercise 2.47:** Here are
> two possible constructors for frames:
>
> ::: scheme
> (define (make-frame origin edge1 edge2) (list origin edge1 edge2))
> (define (make-frame origin edge1 edge2) (cons origin (cons edge1
> edge2)))
> :::
>
> For each constructor supply the appropriate selectors to produce an
> implementation for frames.
#### Painters {#painters .unnumbered}
A painter is represented as a procedure that, given a frame as argument,
draws a particular image shifted and scaled to fit the frame. That is to
say, if `p` is a painter and `f` is a frame, then we produce `p`'s image
in `f` by calling `p` with `f` as argument.
The details of how primitive painters are implemented depend on the
particular characteristics of the graphics system and the type of image
to be drawn. For instance, suppose we have a procedure `draw/line` that
draws a line on the screen between two specified points. Then we can
create painters for line drawings, such as the `wave` painter in [Figure
2.10](#Figure 2.10), from lists of line segments as follows:[^93]
::: scheme
(define (segments-\>painter segment-list) (lambda (frame) (for-each
(lambda (segment) (draw-line ((frame-coord-map frame) (start-segment
segment)) ((frame-coord-map frame) (end-segment segment))))
segment-list)))
:::
The segments are given using coordinates with respect to the unit
square. For each segment in the list, the painter transforms the segment
endpoints with the frame coordinate map and draws a line between the
transformed points.
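For example (a hypothetical primitive painter, again assuming the vector and segment constructors of [Exercise 2.46](#Exercise 2.46) and [Exercise 2.48](#Exercise 2.48)), a painter that draws a single diagonal across whatever frame it is given is simply:
::: scheme
(define diagonal
  (segments->painter
   (list (make-segment (make-vect 0.0 0.0)
                       (make-vect 1.0 1.0)))))
:::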
Representing painters as procedures erects a powerful abstraction
barrier in the picture language. We can create and intermix all sorts of
primitive painters, based on a variety of graphics capabilities. The
details of their implementation do not matter. Any procedure can serve
as a painter, provided that it takes a frame as argument and draws
something scaled to fit the frame.[^94]
> **[]{#Exercise 2.48 label="Exercise 2.48"}Exercise 2.48:** A directed
> line segment in the plane can be represented as a pair of
> vectors---the vector running from the origin to the start-point of the
> segment, and the vector running from the origin to the end-point of
> the segment. Use your vector representation from [Exercise
> 2.46](#Exercise 2.46) to define a representation for segments with a
> constructor `make/segment` and selectors `start/segment` and
> `end/segment`.
> **[]{#Exercise 2.49 label="Exercise 2.49"}Exercise 2.49:** Use
> `segments/>painter` to define the following primitive painters:
>
> a. The painter that draws the outline of the designated frame.
>
> b. The painter that draws an "X" by connecting opposite corners of
> the frame.
>
> c. The painter that draws a diamond shape by connecting the midpoints
> of the sides of the frame.
>
> d. The `wave` painter.
#### Transforming and combining painters {#transforming-and-combining-painters .unnumbered}
An operation on painters (such as `flip/vert` or `beside`) works by
creating a painter that invokes the original painters with respect to
frames derived from the argument frame. Thus, for example, `flip/vert`
doesn't have to know how a painter works in order to flip it---it just
has to know how to turn a frame upside down: The flipped painter just
uses the original painter, but in the inverted frame.
Painter operations are based on the procedure `transform/painter`, which
takes as arguments a painter and information on how to transform a frame
and produces a new painter. The transformed painter, when called on a
frame, transforms the frame and calls the original painter on the
transformed frame. The arguments to `transform/painter` are points
(represented as vectors) that specify the corners of the new frame: When
mapped into the frame, the first point specifies the new frame's origin
and the other two specify the ends of its edge vectors. Thus, arguments
within the unit square specify a frame contained within the original
frame.
::: scheme
(define (transform-painter painter origin corner1 corner2) (lambda
(frame) (let ((m (frame-coord-map frame))) (let ((new-origin (m
origin))) (painter (make-frame new-origin (sub-vect (m corner1)
new-origin) (sub-vect (m corner2) new-origin)))))))
:::
Here's how to flip painter images vertically:
::: scheme
(define (flip-vert painter)
  (transform-painter painter
                     (make-vect 0.0 1.0)   ; new origin
                     (make-vect 1.0 1.0)   ; new end of edge1
                     (make-vect 0.0 0.0))) ; new end of edge2
:::
Using `transform/painter`, we can easily define new transformations. For
example, we can define a painter that shrinks its image to the
upper-right quarter of the frame it is given:
::: scheme
(define (shrink-to-upper-right painter) (transform-painter painter
(make-vect 0.5 0.5) (make-vect 1.0 0.5) (make-vect 0.5 1.0)))
:::
Other transformations rotate images counterclockwise by 90 degrees[^95]
::: scheme
(define (rotate90 painter) (transform-painter painter (make-vect 1.0
0.0) (make-vect 1.0 1.0) (make-vect 0.0 0.0)))
:::
or squash images towards the center of the frame:[^96]
::: scheme
(define (squash-inwards painter) (transform-painter painter (make-vect
0.0 0.0) (make-vect 0.65 0.35) (make-vect 0.35 0.65)))
:::
Frame transformation is also the key to defining means of combining two
or more painters. The `beside` procedure, for example, takes two
painters, transforms them to paint in the left and right halves of an
argument frame respectively, and produces a new, compound painter. When
the compound painter is given a frame, it calls the first transformed
painter to paint in the left half of the frame and calls the second
transformed painter to paint in the right half of the frame:
::: scheme
(define (beside painter1 painter2) (let ((split-point (make-vect 0.5
0.0))) (let ((paint-left (transform-painter painter1 (make-vect 0.0 0.0)
split-point (make-vect 0.0 1.0))) (paint-right (transform-painter
painter2 split-point (make-vect 1.0 0.0) (make-vect 0.5 1.0)))) (lambda
(frame) (paint-left frame) (paint-right frame)))))
:::
Observe how the painter data abstraction, and in particular the
representation of painters as procedures, makes `beside` easy to
implement. The `beside` procedure need not know anything about the
details of the component painters other than that each painter will draw
something in its designated frame.
> **[]{#Exercise 2.50 label="Exercise 2.50"}Exercise 2.50:** Define the
> transformation `flip/horiz`, which flips painters horizontally, and
> transformations that rotate painters counterclockwise by 180 degrees
> and 270 degrees.
> **[]{#Exercise 2.51 label="Exercise 2.51"}Exercise 2.51:** Define the
> `below` operation for painters. `below` takes two painters as
> arguments. The resulting painter, given a frame, draws with the first
> painter in the bottom of the frame and with the second painter in the
> top. Define `below` in two different ways---first by writing a
> procedure that is analogous to the `beside` procedure given above, and
> again in terms of `beside` and suitable rotation operations (from
> [Exercise 2.50](#Exercise 2.50)).
#### Levels of language for robust design {#levels-of-language-for-robust-design .unnumbered}
The picture language exercises some of the critical ideas we've
introduced about abstraction with procedures and data. The fundamental
data abstractions, painters, are implemented using procedural
representations, which enables the language to handle different basic
drawing capabilities in a uniform way. The means of combination satisfy
the closure property, which permits us to easily build up complex
designs. Finally, all the tools for abstracting procedures are available
to us for abstracting means of combination for painters.
We have also obtained a glimpse of another crucial idea about languages
and program design. This is the approach of *stratified design*, the
notion that a complex system should be structured as a sequence of
levels that are described using a sequence of languages. Each level is
constructed by combining parts that are regarded as primitive at that
level, and the parts constructed at each level are used as primitives at
the next level. The language used at each level of a stratified design
has primitives, means of combination, and means of abstraction
appropriate to that level of detail.
Stratified design pervades the engineering of complex systems. For
example, in computer engineering, resistors and transistors are combined
(and described using a language of analog circuits) to produce parts
such as and-gates and or-gates, which form the primitives of a language
for digital-circuit design.[^97] These parts are combined to build
processors, bus structures, and memory systems, which are in turn
combined to form computers, using languages appropriate to computer
architecture. Computers are combined to form distributed systems, using
languages appropriate for describing network interconnections, and so
on.
As a tiny example of stratification, our picture language uses primitive
elements (primitive painters) that are created using a language that
specifies points and lines to provide the lists of line segments for
`segments/>painter`, or the shading details for a painter like `rogers`.
The bulk of our description of the picture language focused on combining
these primitives, using geometric combiners such as `beside` and
`below`. We also worked at a higher level, regarding `beside` and
`below` as primitives to be manipulated in a language whose operations,
such as `square/of/four`, capture common patterns of combining geometric
combiners.
Stratified design helps make programs *robust*, that is, it makes it
likely that small changes in a specification will require
correspondingly small changes in the program. For instance, suppose we
wanted to change the image based on `wave` shown in [Figure
2.9](#Figure 2.9). We could work at the lowest level to change the
detailed appearance of the `wave` element; we could work at the middle
level to change the way `corner/split` replicates the `wave`; we could
work at the highest level to change how `square/limit` arranges the four
copies of the corner. In general, each level of a stratified design
provides a different vocabulary for expressing the characteristics of
the system, and a different kind of ability to change it.
> **[]{#Exercise 2.52 label="Exercise 2.52"}Exercise 2.52:** Make
> changes to the square limit of `wave` shown in [Figure
> 2.9](#Figure 2.9) by working at each of the levels described above. In
> particular:
>
> a. Add some segments to the primitive `wave` painter of [Exercise
> 2.49](#Exercise 2.49) (to add a smile, for example).
>
> b. Change the pattern constructed by `corner/split` (for example, by
> using only one copy of the `up/split` and `right/split` images
> instead of two).
>
> c. Modify the version of `square/limit` that uses `square/of/four` so
> as to assemble the corners in a different pattern. (For example,
> you might make the big Mr. Rogers look outward from each corner of
> the square.)
## Symbolic Data {#Section 2.3}
All the compound data objects we have used so far were constructed
ultimately from numbers. In this section we extend the representational
capability of our language by introducing the ability to work with
arbitrary symbols as data.
### Quotation {#Section 2.3.1}
If we can form compound data using symbols, we can have lists such as
::: scheme
(a b c d) (23 45 17) ((Norah 12) (Molly 9) (Anna 7) (Lauren 6)
(Charlotte 4))
:::
Lists containing symbols can look just like the expressions of our
language:
::: scheme
(\* (+ 23 45) (+ x 9)) (define (fact n) (if (= n 1) 1 (\* n (fact (- n
1)))))
:::
In order to manipulate symbols we need a new element in our language:
the ability to *quote* a data object. Suppose we want to construct the
list `(a b)`. We can't accomplish this with `(list a b)`, because this
expression constructs a list of the *values* of `a` and `b` rather than
the symbols themselves. This issue is well known in the context of
natural languages, where words and sentences may be regarded either as
semantic entities or as character strings (syntactic entities). The
common practice in natural languages is to use quotation marks to
indicate that a word or a sentence is to be treated literally as a
string of characters. For instance, the first letter of "John" is
clearly "J." If we tell somebody "say your name aloud," we expect to
hear that person's name. However, if we tell somebody "say 'your name'
aloud," we expect to hear the words "your name." Note that we are forced
to nest quotation marks to describe what somebody else might say.[^98]
We can follow this same practice to identify lists and symbols that are
to be treated as data objects rather than as expressions to be
evaluated. However, our format for quoting differs from that of natural
languages in that we place a quotation mark (traditionally, the single
quote symbol `'`) only at the beginning of the object to be quoted. We
can get away with this in Scheme syntax because we rely on blanks and
parentheses to delimit objects. Thus, the meaning of the single quote
character is to quote the next object.[^99]
Now we can distinguish between symbols and their values:
::: scheme
(define a 1) (define b 2) (list a b) *(1 2)* (list 'a 'b) *(a b)*
(list 'a b) *(a 2)*
:::
Quotation also allows us to type in compound objects, using the
conventional printed representation for lists:[^100]
::: scheme
(car '(a b c)) *a* (cdr '(a b c)) *(b c)*
:::
In keeping with this, we can obtain the empty list by evaluating `'()`,
and thus dispense with the variable `nil`.
One additional primitive used in manipulating symbols is `eq?`, which
takes two symbols as arguments and tests whether they are the
same.[^101] Using `eq?`, we can implement a useful procedure called
`memq`. This takes two arguments, a symbol and a list. If the symbol is
not contained in the list (i.e., is not `eq?` to any item in the list),
then `memq` returns false. Otherwise, it returns the sublist of the list
beginning with the first occurrence of the symbol:
::: scheme
(define (memq item x) (cond ((null? x) false) ((eq? item (car x)) x)
(else (memq item (cdr x)))))
:::
For example, the value of
::: scheme
(memq 'apple '(pear banana prune))
:::
is false, whereas the value of
::: scheme
(memq 'apple '(x (apple sauce) y apple pear))
:::
is `(apple pear)`.
> **[]{#Exercise 2.53 label="Exercise 2.53"}Exercise 2.53:** What would
> the interpreter print in response to evaluating each of the following
> expressions?
>
> ::: scheme
> (list 'a 'b 'c) (list (list 'george)) (cdr '((x1 x2) (y1 y2))) (cadr
> '((x1 x2) (y1 y2))) (pair? (car '(a short list))) (memq 'red '((red
> shoes) (blue socks))) (memq 'red '(red shoes blue socks))
> :::
> **[]{#Exercise 2.54 label="Exercise 2.54"}Exercise 2.54:** Two lists
> are said to be `equal?` if they contain equal elements arranged in the
> same order. For example,
>
> ::: scheme
> (equal? '(this is a list) '(this is a list))
> :::
>
> is true, but
>
> ::: scheme
> (equal? '(this is a list) '(this (is a) list))
> :::
>
> is false. To be more precise, we can define `equal?` recursively in
> terms of the basic `eq?` equality of symbols by saying that `a` and
> `b` are `equal?` if they are both symbols and the symbols are `eq?`,
> or if they are both lists such that `(car a)` is `equal?` to `(car b)`
> and `(cdr a)` is `equal?` to `(cdr b)`. Using this idea, implement
> `equal?` as a procedure.[^102]
> **[]{#Exercise 2.55 label="Exercise 2.55"}Exercise 2.55:** Eva Lu Ator
> types to the interpreter the expression
>
> ::: scheme
> (car "abracadabra)
> :::
>
> To her surprise, the interpreter prints back `quote`. Explain.
### Example: Symbolic Differentiation {#Section 2.3.2}
As an illustration of symbol manipulation and a further illustration of
data abstraction, consider the design of a procedure that performs
symbolic differentiation of algebraic expressions. We would like the
procedure to take as arguments an algebraic expression and a variable
and to return the derivative of the expression with respect to the
variable. For example, if the arguments to the procedure are
$ax^2 + bx + c$ and $x$, the procedure should return $2ax + b$. Symbolic
differentiation is of special historical significance in Lisp. It was
one of the motivating examples behind the development of a computer
language for symbol manipulation. Furthermore, it marked the beginning
of the line of research that led to the development of powerful systems
for symbolic mathematical work, which are currently being used by a
growing number of applied mathematicians and physicists.
In developing the symbolic-differentiation program, we will follow the
same strategy of data abstraction that we followed in developing the
rational-number system of [Section 2.1.1](#Section 2.1.1). That is, we
will first define a differentiation algorithm that operates on abstract
objects such as "sums," "products," and "variables" without worrying
about how these are to be represented. Only afterward will we address
the representation problem.
#### The differentiation program with abstract data {#the-differentiation-program-with-abstract-data .unnumbered}
In order to keep things simple, we will consider a very simple
symbolic-differentiation program that handles expressions that are built
up using only the operations of addition and multiplication with two
arguments. Differentiation of any such expression can be carried out by
applying the following reduction rules:
$${{\it dc} \over {\it dx}} = 0,
\quad {\rm for\ } c\ {\rm a\ constant\ or\ a\ variable\ different\ from\ } x,$$
$${{\it dx} \over {\it dx}} = 1,$$
$${{\it d\,(u + v\,)} \over {\it dx}} = {{\it du} \over {\it dx}} + {{\it dv} \over {\it dx}},$$
$${{\it d\,(uv\,)} \over {\it dx}} = u {{\it dv} \over {\it dx}} + v {{\it du} \over {\it dx}}.$$
Observe that the latter two rules are recursive in nature. That is, to
obtain the derivative of a sum we first find the derivatives of the
terms and add them. Each of the terms may in turn be an expression that
needs to be decomposed. Decomposing into smaller and smaller pieces will
eventually produce pieces that are either constants or variables, whose
derivatives will be either 0 or 1.
To embody these rules in a procedure we indulge in a little wishful
thinking, as we did in designing the rational-number implementation. If
we had a means for representing algebraic expressions, we should be able
to tell whether an expression is a sum, a product, a constant, or a
variable. We should be able to extract the parts of an expression. For a
sum, for example, we want to be able to extract the addend (first term)
and the augend (second term). We should also be able to construct
expressions from parts. Let us assume that we already have procedures to
implement the following selectors, constructors, and predicates:
::: scheme
(variable? e)            ; Is e a variable?
(same-variable? v1 v2)   ; Are v1 and v2 the same variable?
(sum? e)                 ; Is e a sum?
(addend e)               ; Addend of the sum e.
(augend e)               ; Augend of the sum e.
(make-sum a1 a2)         ; Construct the sum of a1 and a2.
(product? e)             ; Is e a product?
(multiplier e)           ; Multiplier of the product e.
(multiplicand e)         ; Multiplicand of the product e.
(make-product m1 m2)     ; Construct the product of m1 and m2.
:::
Using these, and the primitive predicate `number?`, which identifies
numbers, we can express the differentiation rules as the following
procedure:
::: scheme
(define (deriv exp var) (cond ((number? exp) 0) ((variable? exp) (if
(same-variable? exp var) 1 0)) ((sum? exp) (make-sum (deriv (addend exp)
var) (deriv (augend exp) var))) ((product? exp) (make-sum (make-product
(multiplier exp) (deriv (multiplicand exp) var)) (make-product (deriv
(multiplier exp) var) (multiplicand exp)))) (else (error \"unknown
expression type: DERIV\" exp))))
:::
This `deriv` procedure incorporates the complete differentiation
algorithm. Since it is expressed in terms of abstract data, it will work
no matter how we choose to represent algebraic expressions, as long as
we design a proper set of selectors and constructors. This is the issue
we must address next.
#### Representing algebraic expressions {#representing-algebraic-expressions .unnumbered}
We can imagine many ways to use list structure to represent algebraic
expressions. For example, we could use lists of symbols that mirror the
usual algebraic notation, representing $ax + b$ as the list
`(a * x + b)`. However, one especially straightforward choice is to use
the same parenthesized prefix notation that Lisp uses for combinations;
that is, to represent $ax + b$ as `(+ (* a x) b)`. Then our data
representation for the differentiation problem is as follows:
- The variables are symbols. They are identified by the primitive
predicate `symbol?`:
::: scheme
(define (variable? x) (symbol? x))
:::
- Two variables are the same if the symbols representing them are
`eq?`:
::: scheme
(define (same-variable? v1 v2) (and (variable? v1) (variable? v2)
(eq? v1 v2)))
:::
- Sums and products are constructed as lists:
::: scheme
(define (make-sum a1 a2) (list '+ a1 a2)) (define (make-product m1
m2) (list '\* m1 m2))
:::
- A sum is a list whose first element is the symbol `+`:
::: scheme
(define (sum? x) (and (pair? x) (eq? (car x) '+)))
:::
- The addend is the second item of the sum list:
::: scheme
(define (addend s) (cadr s))
:::
- The augend is the third item of the sum list:
::: scheme
(define (augend s) (caddr s))
:::
- A product is a list whose first element is the symbol `*`:
::: scheme
(define (product? x) (and (pair? x) (eq? (car x) '\*)))
:::
- The multiplier is the second item of the product list:
::: scheme
(define (multiplier p) (cadr p))
:::
- The multiplicand is the third item of the product list:
::: scheme
(define (multiplicand p) (caddr p))
:::
Thus, we need only combine these with the algorithm as embodied by
`deriv` in order to have a working symbolic-differentiation program. Let
us look at some examples of its behavior:
::: scheme
(deriv '(+ x 3) 'x) *(+ 1 0)* (deriv '(\* x y) 'x) *(+ (\* x 0) (\* 1
y))* (deriv '(\* (\* x y) (+ x 3)) 'x) *(+ (\* (\* x y) (+ 1 0))*
*(\* (+ (\* x 0) (\* 1 y))* *(+ x 3)))*
:::
The program produces answers that are correct; however, they are
unsimplified. It is true that
$${{\it d\,(xy\,)} \over {\it dx}} = x \cdot 0 + 1 \cdot y,$$
but we would like the program to know that $x \cdot 0 = 0$,
$1 \cdot y = y$, and $0 + y = y$. The answer for the second example
should have been simply `y`. As the third example shows, this becomes a
serious issue when the expressions are complex.
Our difficulty is much like the one we encountered with the
rational-number implementation: we haven't reduced answers to simplest
form. To accomplish the rational-number reduction, we needed to change
only the constructors and the selectors of the implementation. We can
adopt a similar strategy here. We won't change `deriv` at all. Instead,
we will change `make/sum` so that if both summands are numbers,
`make/sum` will add them and return their sum. Also, if one of the
summands is 0, then `make/sum` will return the other summand.
::: scheme
(define (make-sum a1 a2) (cond ((=number? a1 0) a2) ((=number? a2 0) a1)
((and (number? a1) (number? a2)) (+ a1 a2)) (else (list '+ a1 a2))))
:::
This uses the procedure `=number?`, which checks whether an expression
is equal to a given number:
::: scheme
(define (=number? exp num) (and (number? exp) (= exp num)))
:::
Similarly, we will change `make/product` to build in the rules that 0
times anything is 0 and 1 times anything is the thing itself:
::: scheme
(define (make-product m1 m2) (cond ((or (=number? m1 0) (=number? m2 0))
0) ((=number? m1 1) m2) ((=number? m2 1) m1) ((and (number? m1) (number?
m2)) (\* m1 m2)) (else (list '\* m1 m2))))
:::
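A few direct calls (purely illustrative) make the simplifications built into the new constructors concrete:
::: scheme
(define a 1) ; not needed below; the arguments are quoted symbols
(make-sum 'x 0)        ; x
(make-sum 2 3)         ; 5
(make-sum 'x 'y)       ; (+ x y)
(make-product 'x 0)    ; 0
(make-product 1 'y)    ; y
(make-product 'a 'x)   ; (* a x)
:::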
Here is how this version works on our three examples:
::: scheme
(deriv '(+ x 3) 'x) *1* (deriv '(\* x y) 'x) *y* (deriv '(\* (\* x
y) (+ x 3)) 'x) *(+ (\* x y) (\* y (+ x 3)))*
:::
Although this is quite an improvement, the third example shows that
there is still a long way to go before we get a program that puts
expressions into a form that we might agree is "simplest." The problem
of algebraic simplification is complex because, among other reasons, a
form that may be simplest for one purpose may not be for another.
> **[]{#Exercise 2.56 label="Exercise 2.56"}Exercise 2.56:** Show how to
> extend the basic differentiator to handle more kinds of expressions.
> For instance, implement the differentiation rule
>
> $${{\it d\,(u^n\,)} \over {\it dx}} = nu^{n-1} {{\it du} \over {\it dx}}$$
>
> by adding a new clause to the `deriv` program and defining appropriate
> procedures `exponentiation?`, `base`, `exponent`, and
> `make/exponentiation`. (You may use the symbol `**` to denote
> exponentiation.) Build in the rules that anything raised to the power
> 0 is 1 and anything raised to the power 1 is the thing itself.
> **[]{#Exercise 2.57 label="Exercise 2.57"}Exercise 2.57:** Extend the
> differentiation program to handle sums and products of arbitrary
> numbers of (two or more) terms. Then the last example above could be
> expressed as
>
> ::: scheme
> (deriv '(\* x y (+ x 3)) 'x)
> :::
>
> Try to do this by changing only the representation for sums and
> products, without changing the `deriv` procedure at all. For example,
> the `addend` of a sum would be the first term, and the `augend` would
> be the sum of the rest of the terms.
> **[]{#Exercise 2.58 label="Exercise 2.58"}Exercise 2.58:** Suppose we
> want to modify the differentiation program so that it works with
> ordinary mathematical notation, in which `+` and `*` are infix rather
> than prefix operators. Since the differentiation program is defined in
> terms of abstract data, we can modify it to work with different
> representations of expressions solely by changing the predicates,
> selectors, and constructors that define the representation of the
> algebraic expressions on which the differentiator is to operate.
>
> a. Show how to do this in order to differentiate algebraic
> expressions presented in infix form, such as
> `(x + (3 * (x + (y + 2))))`. To simplify the task, assume that `+`
> and `*` always take two arguments and that expressions are fully
> parenthesized.
>
> b. The problem becomes substantially harder if we allow standard
> algebraic notation, such as `(x + 3 * (x + y + 2))`, which drops
> unnecessary parentheses and assumes that multiplication is done
> before addition. Can you design appropriate predicates, selectors,
> and constructors for this notation such that our derivative
> program still works?
### Example: Representing Sets {#Section 2.3.3}
In the previous examples we built representations for two kinds of
compound data objects: rational numbers and algebraic expressions. In
one of these examples we had the choice of simplifying (reducing) the
expressions at either construction time or selection time, but other
than that the choice of a representation for these structures in terms
of lists was straightforward. When we turn to the representation of
sets, the choice of a representation is not so obvious. Indeed, there
are a number of possible representations, and they differ significantly
from one another in several ways.
Informally, a set is simply a collection of distinct objects. To give a
more precise definition we can employ the method of data abstraction.
That is, we define "set" by specifying the operations that are to be
used on sets. These are `union/set`, `intersection/set`,
`element/of/set?`, and `adjoin/set`. `element/of/set?` is a predicate
that determines whether a given element is a member of a set.
`adjoin/set` takes an object and a set as arguments and returns a set
that contains the elements of the original set and also the adjoined
element. `union/set` computes the union of two sets, which is the set
containing each element that appears in either argument.
`intersection/set` computes the intersection of two sets, which is the
set containing only elements that appear in both arguments. From the
viewpoint of data abstraction, we are free to design any representation
that implements these operations in a way consistent with the
interpretations given above.[^103]
#### Sets as unordered lists {#sets-as-unordered-lists .unnumbered}
One way to represent a set is as a list of its elements in which no
element appears more than once. The empty set is represented by the
empty list. In this representation, `element/of/set?` is similar to the
procedure `memq` of [Section 2.3.1](#Section 2.3.1). It uses `equal?`
instead of `eq?` so that the set elements need not be symbols:
::: scheme
(define (element-of-set? x set) (cond ((null? set) false) ((equal? x
(car set)) true) (else (element-of-set? x (cdr set)))))
:::
Using this, we can write `adjoin/set`. If the object to be adjoined is
already in the set, we just return the set. Otherwise, we use `cons` to
add the object to the list that represents the set:
::: scheme
(define (adjoin-set x set) (if (element-of-set? x set) set (cons x
set)))
:::
For `intersection/set` we can use a recursive strategy. If we know how
to form the intersection of `set2` and the `cdr` of `set1`, we only need
to decide whether to include the `car` of `set1` in this. But this
depends on whether `(car set1)` is also in `set2`. Here is the resulting
procedure:
::: scheme
(define (intersection-set set1 set2) (cond ((or (null? set1) (null?
set2)) '()) ((element-of-set? (car set1) set2) (cons (car set1)
(intersection-set (cdr set1) set2))) (else (intersection-set (cdr set1)
set2))))
:::
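For instance (illustrative calls with symbols as elements):
::: scheme
(adjoin-set 'd '(a b c))               ; (d a b c)
(adjoin-set 'b '(a b c))               ; (a b c)
(intersection-set '(a b c) '(b c d))   ; (b c)
:::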
In designing a representation, one of the issues we should be concerned
with is efficiency. Consider the number of steps required by our set
operations. Since they all use `element/of/set?`, the speed of this
operation has a major impact on the efficiency of the set implementation
as a whole. Now, in order to check whether an object is a member of a
set, `element/of/set?` may have to scan the entire set. (In the worst
case, the object turns out not to be in the set.) Hence, if the set has
$n$ elements, `element/of/set?` might take up to $n$ steps. Thus, the
number of steps required grows as $\Theta(n)$. The number of steps
required by `adjoin/set`, which uses this operation, also grows as
$\Theta(n)$. For `intersection/set`, which does an `element/of/set?`
check for each element of `set1`, the number of steps required grows as
the product of the sizes of the sets involved, or $\Theta(n^2)$ for two
sets of size $n$. The same will be true of `union/set`.
> **[]{#Exercise 2.59 label="Exercise 2.59"}Exercise 2.59:** Implement
> the `union/set` operation for the unordered-list representation of
> sets.
> **[]{#Exercise 2.60 label="Exercise 2.60"}Exercise 2.60:** We
> specified that a set would be represented as a list with no
> duplicates. Now suppose we allow duplicates. For instance, the set
> $\{1, 2, 3\}$ could be represented as the list `(2 3 2 1 3 2 2)`.
> Design procedures `element/of/set?`, `adjoin/set`, `union/set`, and
> `intersection/set` that operate on this representation. How does the
> efficiency of each compare with the corresponding procedure for the
> non-duplicate representation? Are there applications for which you
> would use this representation in preference to the non-duplicate one?
#### Sets as ordered lists {#sets-as-ordered-lists .unnumbered}
One way to speed up our set operations is to change the representation
so that the set elements are listed in increasing order. To do this, we
need some way to compare two objects so that we can say which is bigger.
For example, we could compare symbols lexicographically, or we could
agree on some method for assigning a unique number to an object and then
compare the elements by comparing the corresponding numbers. To keep our
discussion simple, we will consider only the case where the set elements
are numbers, so that we can compare elements using `>` and `<`. We will
represent a set of numbers by listing its elements in increasing order.
Whereas our first representation above allowed us to represent the set
$\{1, 3, 6, 10\}$ by listing the elements in any order, our new
representation allows only the list `(1 3 6 10)`.
One advantage of ordering shows up in `element/of/set?`: In checking for
the presence of an item, we no longer have to scan the entire set. If we
reach a set element that is larger than the item we are looking for,
then we know that the item is not in the set:
::: scheme
(define (element-of-set? x set) (cond ((null? set) false) ((= x (car
set)) true) ((\< x (car set)) false) (else (element-of-set? x (cdr
set)))))
:::
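For instance (an illustrative pair of lookups on the set `(1 3 6 10)`):
::: scheme
(element-of-set? 6 '(1 3 6 10))   ; true
(element-of-set? 4 '(1 3 6 10))   ; false -- the scan stops on reaching 6
:::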
How many steps does this save? In the worst case, the item we are
looking for may be the largest one in the set, so the number of steps is
the same as for the unordered representation. On the other hand, if we
search for items of many different sizes we can expect that sometimes we
will be able to stop searching at a point near the beginning of the list
and that other times we will still need to examine most of the list. On
the average we should expect to have to examine about half of the items
in the set. Thus, the average number of steps required will be about
$n / 2$. This is still $\Theta(n)$ growth, but it does save us, on the
average, a factor of 2 in number of steps over the previous
implementation.
We obtain a more impressive speedup with `intersection/set`. In the
unordered representation this operation required $\Theta(n^2)$ steps,
because we performed a complete scan of `set2` for each element of
`set1`. But with the ordered representation, we can use a more clever
method. Begin by comparing the initial elements, `x1` and `x2`, of the
two sets. If `x1` equals `x2`, then that gives an element of the
intersection, and the rest of the intersection is the intersection of
the `cdr`-s of the two sets. Suppose, however, that `x1` is less than
`x2`. Since `x2` is the smallest element in `set2`, we can immediately
conclude that `x1` cannot appear anywhere in `set2` and hence is not in
the intersection. Hence, the intersection is equal to the intersection
of `set2` with the `cdr` of `set1`. Similarly, if `x2` is less than
`x1`, then the intersection is given by the intersection of `set1` with
the `cdr` of `set2`. Here is the procedure:
::: scheme
(define (intersection-set set1 set2) (if (or (null? set1) (null? set2))
'() (let ((x1 (car set1)) (x2 (car set2))) (cond ((= x1 x2) (cons x1
(intersection-set (cdr set1) (cdr set2)))) ((\< x1 x2) (intersection-set
(cdr set1) set2)) ((\< x2 x1) (intersection-set set1 (cdr set2)))))))
:::
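For instance (an illustrative call), the elements 1, 10, and 11 are each discarded after a single comparison:
::: scheme
(intersection-set '(1 3 6 10) '(3 6 11))   ; (3 6)
:::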
To estimate the number of steps required by this process, observe that
at each step we reduce the intersection problem to computing
intersections of smaller sets---removing the first element from `set1`
or `set2` or both. Thus, the number of steps required is at most the sum
of the sizes of `set1` and `set2`, rather than the product of the sizes
as with the unordered representation. This is $\Theta(n)$ growth rather
than $\Theta(n^2)$---a considerable speedup, even for sets of moderate
size.
> **[]{#Exercise 2.61 label="Exercise 2.61"}Exercise 2.61:** Give an
> implementation of `adjoin/set` using the ordered representation. By
> analogy with `element/of/set?` show how to take advantage of the
> ordering to produce a procedure that requires on the average about
> half as many steps as with the unordered representation.
> **[]{#Exercise 2.62 label="Exercise 2.62"}Exercise 2.62:** Give a
> $\Theta(n)$ implementation of `union/set` for sets represented as
> ordered lists.
#### Sets as binary trees {#sets-as-binary-trees .unnumbered}
We can do better than the ordered-list representation by arranging the
set elements in the form of a tree. Each node of the tree holds one
element of the set, called the "entry" at that node, and a link to each
of two other (possibly empty) nodes. The "left" link points to elements
smaller than the one at the node, and the "right" link to elements
greater than the one at the node. [Figure 2.16](#Figure 2.16) shows some
trees that represent the set $\{1, 3, 5, 7, 9, 11\}$. The same set may
be represented by a tree in a number of different ways. The only thing
we require for a valid representation is that all elements in the left
subtree be smaller than the node entry and that all elements in the
right subtree be larger.
[]{#Figure 2.16 label="Figure 2.16"}
![image](fig/chap2/Fig2.16b.pdf){width="70mm"}
> **Figure 2.16:** Various binary trees that represent the set
> $\{1, 3, 5, 7, 9, 11\}$.
The advantage of the tree representation is this: Suppose we want to
check whether a number $x$ is contained in a set. We begin by comparing
$x$ with the entry in the top node. If $x$ is less than this, we know
that we need only search the left subtree; if $x$ is greater, we need
only search the right subtree. Now, if the tree is "balanced," each of
these subtrees will be about half the size of the original. Thus, in one
step we have reduced the problem of searching a tree of size $n$ to
searching a tree of size $n / 2$. Since the size of the tree is halved
at each step, we should expect that the number of steps needed to search
a tree of size $n$ grows as $\Theta(\log n)$.[^104] For large sets, this
will be a significant speedup over the previous representations.
We can represent trees by using lists. Each node will be a list of three
items: the entry at the node, the left subtree, and the right subtree. A
left or a right subtree of the empty list will indicate that there is no
subtree connected there. We can describe this representation by the
following procedures:[^105]
::: scheme
(define (entry tree) (car tree)) (define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree)) (define (make-tree entry left
right) (list entry left right))
:::
Now we can write the `element/of/set?` procedure using the strategy
described above:
::: scheme
(define (element-of-set? x set) (cond ((null? set) false) ((= x (entry
set)) true) ((\< x (entry set)) (element-of-set? x (left-branch set)))
((\> x (entry set)) (element-of-set? x (right-branch set)))))
:::
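As an illustrative check, here is one tree for the set $\{1, 3, 5, 7, 9, 11\}$ of [Figure 2.16](#Figure 2.16) (the name `t` is ours, not the text's), together with two lookups:
::: scheme
(define t
  (make-tree 7
             (make-tree 3
                        (make-tree 1 '() '())
                        (make-tree 5 '() '()))
             (make-tree 9
                        '()
                        (make-tree 11 '() '()))))

(element-of-set? 5 t)   ; true
(element-of-set? 4 t)   ; false
:::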
Adjoining an item to a set is implemented similarly and also requires
$\Theta(\log n)$ steps. To adjoin an item `x`, we compare `x` with the
node entry to determine whether `x` should be added to the right or to
the left branch, and having adjoined `x` to the appropriate branch we
piece this newly constructed branch together with the original entry and
the other branch. If `x` is equal to the entry, we just return the node.
If we are asked to adjoin `x` to an empty tree, we generate a tree that
has `x` as the entry and empty right and left branches. Here is the
procedure:
::: scheme
(define (adjoin-set x set) (cond ((null? set) (make-tree x '() '())) ((=
x (entry set)) set) ((\< x (entry set)) (make-tree (entry set)
(adjoin-set x (left-branch set)) (right-branch set))) ((\> x (entry
set)) (make-tree (entry set) (left-branch set) (adjoin-set x
(right-branch set))))))
:::
The above claim that searching the tree can be performed in a
logarithmic number of steps rests on the assumption that the tree is
"balanced," i.e., that the left and the right subtree of every tree have
approximately the same number of elements, so that each subtree contains
about half the elements of its parent. But how can we be certain that
the trees we construct will be balanced? Even if we start with a
balanced tree, adding elements with `adjoin/set` may produce an
unbalanced result. Since the position of a newly adjoined element
depends on how the element compares with the items already in the set,
we can expect that if we add elements "randomly" the tree will tend to
be balanced on the average. But this is not a guarantee. For example, if
we start with an empty set and adjoin the numbers 1 through 7 in
sequence we end up with the highly unbalanced tree shown in [Figure
2.17](#Figure 2.17). In this tree all the left subtrees are empty, so it
has no advantage over a simple ordered list. One way to solve this
problem is to define an operation that transforms an arbitrary tree into
a balanced tree with the same elements. Then we can perform this
transformation after every few `adjoin/set` operations to keep our set
in balance. There are also other ways to solve this problem, most of
which involve designing new data structures for which searching and
insertion both can be done in $\Theta(\log n)$ steps.[^106]
[]{#Figure 2.17 label="Figure 2.17"}
![image](fig/chap2/Fig2.17a.pdf){width="40mm"}
> **Figure 2.17:** Unbalanced tree produced by adjoining 1 through 7 in
> sequence.
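A small illustrative check shows the degeneracy directly: every adjoined element is larger than everything already present, so it always goes to the right.
::: scheme
(adjoin-set 3 (adjoin-set 2 (adjoin-set 1 '())))
; (1 () (2 () (3 () ()))) -- every left branch is empty
:::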
> **[]{#Exercise 2.63 label="Exercise 2.63"}Exercise 2.63:** Each of the
> following two procedures converts a binary tree to a list.
>
> ::: scheme
> (define (tree-\>list-1 tree) (if (null? tree) '() (append
> (tree-\>list-1 (left-branch tree)) (cons (entry tree) (tree-\>list-1
> (right-branch tree)))))) (define (tree-\>list-2 tree) (define
> (copy-to-list tree result-list) (if (null? tree) result-list
> (copy-to-list (left-branch tree) (cons (entry tree) (copy-to-list
> (right-branch tree) result-list))))) (copy-to-list tree '()))
> :::
>
> a. Do the two procedures produce the same result for every tree? If
> not, how do the results differ? What lists do the two procedures
> produce for the trees in [Figure 2.16](#Figure 2.16)?
>
> b. Do the two procedures have the same order of growth in the number
> of steps required to convert a balanced tree with $n$ elements to
> a list? If not, which one grows more slowly?
> **[]{#Exercise 2.64 label="Exercise 2.64"}Exercise 2.64:** The
> following procedure `list/>tree` converts an ordered list to a
> balanced binary tree. The helper procedure `partial/tree` takes as
> arguments an integer $n$ and a list of at least $n$ elements and
> constructs a balanced tree containing the first $n$ elements of the
> list. The result returned by `partial/tree` is a pair (formed with
> `cons`) whose `car` is the constructed tree and whose `cdr` is the
> list of elements not included in the tree.
>
> ::: scheme
> (define (list-\>tree elements) (car (partial-tree elements (length
> elements)))) (define (partial-tree elts n) (if (= n 0) (cons '() elts)
> (let ((left-size (quotient (- n 1) 2))) (let ((left-result
> (partial-tree elts left-size))) (let ((left-tree (car left-result))
> (non-left-elts (cdr left-result)) (right-size (- n (+ left-size 1))))
> (let ((this-entry (car non-left-elts)) (right-result (partial-tree
> (cdr non-left-elts) right-size))) (let ((right-tree (car
> right-result)) (remaining-elts (cdr right-result))) (cons (make-tree
> this-entry left-tree right-tree) remaining-elts))))))))
> :::
>
> a. Write a short paragraph explaining as clearly as you can how
> `partial/tree` works. Draw the tree produced by `list/>tree` for
> the list `(1 3 5 7 9 11)`.
>
> b. What is the order of growth in the number of steps required by
> `list/>tree` to convert a list of $n$ elements?
> **[]{#Exercise 2.65 label="Exercise 2.65"}Exercise 2.65:** Use the
> results of [Exercise 2.63](#Exercise 2.63) and [Exercise
> 2.64](#Exercise 2.64) to give $\Theta(n)$ implementations of
> `union/set` and `intersection/set` for sets implemented as (balanced)
> binary trees.[^107]
#### Sets and information retrieval {#sets-and-information-retrieval .unnumbered}
We have examined options for using lists to represent sets and have seen
how the choice of representation for a data object can have a large
impact on the performance of the programs that use the data. Another
reason for concentrating on sets is that the techniques discussed here
appear again and again in applications involving information retrieval.
Consider a data base containing a large number of individual records,
such as the personnel files for a company or the transactions in an
accounting system. A typical data-management system spends a large
amount of time accessing or modifying the data in the records and
therefore requires an efficient method for accessing records. This is
done by identifying a part of each record to serve as an identifying
*key*. A key can be anything that uniquely identifies the record. For a
personnel file, it might be an employee's ID number. For
an accounting system, it might be a transaction number. Whatever the key
is, when we define the record as a data structure we should include a
`key` selector procedure that retrieves the key associated with a given
record.
Now we represent the data base as a set of records. To locate the record
with a given key we use a procedure `lookup`, which takes as arguments a
key and a data base and which returns the record that has that key, or
false if there is no such record. `lookup` is implemented in almost the
same way as `element/of/set?`. For example, if the set of records is
implemented as an unordered list, we could use
::: scheme
(define (lookup given-key set-of-records) (cond ((null? set-of-records)
false) ((equal? given-key (key (car set-of-records))) (car
set-of-records)) (else (lookup given-key (cdr set-of-records)))))
:::
Of course, there are better ways to represent large sets than as
unordered lists. Information-retrieval systems in which records have to
be "randomly accessed" are typically implemented by a tree-based method,
such as the binary-tree representation discussed previously. In
designing such a system the methodology of data abstraction can be a
great help. The designer can create an initial implementation using a
simple, straightforward representation such as unordered lists. This
will be unsuitable for the eventual system, but it can be useful in
providing a "quick and dirty" data base with which to test the rest of
the system. Later on, the data representation can be modified to be more
sophisticated. If the data base is accessed in terms of abstract
selectors and constructors, this change in representation will not
require any changes to the rest of the system.
> **[]{#Exercise 2.66 label="Exercise 2.66"}Exercise 2.66:** Implement
> the `lookup` procedure for the case where the set of records is
> structured as a binary tree, ordered by the numerical values of the
> keys.
### Example: Huffman Encoding Trees {#Section 2.3.4}
This section provides practice in the use of list structure and data
abstraction to manipulate sets and trees. The application is to methods
for representing data as sequences of ones and zeros (bits). For
example, the ASCII standard code used to represent text in
computers encodes each character as a sequence of seven bits. Using
seven bits allows us to distinguish $2^7$, or 128, possible different
characters. In general, if we want to distinguish $n$ different symbols,
we will need to use $\log_2\!n$ bits per symbol. If all our messages are
made up of the eight symbols A, B, C, D, E, F, G, and H, we can choose a
code with three bits per character, for example
A 000   C 010   E 100   G 110
B 001   D 011   F 101   H 111
With this code, the message
BACADAEAFABBAAAGAH
is encoded as the string of 54 bits
001000010000011000100000101000001001000000000110000111
Codes such as ASCII and the A-through-H code above are
known as *fixed-length* codes, because they represent each symbol in the
message with the same number of bits. It is sometimes advantageous to
use *variable-length* codes, in which different symbols may be
represented by different numbers of bits. For example, Morse code does
not use the same number of dots and dashes for each letter of the
alphabet. In particular, E, the most frequent letter, is represented by
a single dot. In general, if our messages are such that some symbols
appear very frequently and some very rarely, we can encode data more
efficiently (i.e., using fewer bits per message) if we assign shorter
codes to the frequent symbols. Consider the following alternative code
for the letters A through H:
A 0     C 1010   E 1100   G 1110
B 100   D 1011   F 1101   H 1111
With this code, the same message as above is encoded as the string
100010100101101100011010100100000111001111
This string contains 42 bits, so it saves more than 20% in space in
comparison with the fixed-length code shown above.
One of the difficulties of using a variable-length code is knowing when
you have reached the end of a symbol in reading a sequence of zeros and
ones. Morse code solves this problem by using a special *separator code*
(in this case, a pause) after the sequence of dots and dashes for each
letter. Another solution is to design the code in such a way that no
complete code for any symbol is the beginning (or *prefix*) of the code
for another symbol. Such a code is called a *prefix code*. In the
example above, A is encoded by 0 and B is encoded by 100, so no other
symbol can have a code that begins with 0 or with 100.
In general, we can attain significant savings if we use variable-length
prefix codes that take advantage of the relative frequencies of the
symbols in the messages to be encoded. One particular scheme for doing
this is called the Huffman encoding method, after its discoverer, David
Huffman. A Huffman code can be represented as a binary tree whose leaves
are the symbols that are encoded. At each non-leaf node of the tree
there is a set containing all the symbols in the leaves that lie below
the node. In addition, each symbol at a leaf is assigned a weight (which
is its relative frequency), and each non-leaf node contains a weight
that is the sum of all the weights of the leaves lying below it. The
weights are not used in the encoding or the decoding process. We will
see below how they are used to help construct the tree.
[Figure 2.18](#Figure 2.18) shows the Huffman tree for the A-through-H
code given above. The weights at the leaves indicate that the tree was
designed for messages in which A appears with relative frequency 8, B
with relative frequency 3, and the other letters each with relative
frequency 1.
[]{#Figure 2.18 label="Figure 2.18"}
![image](fig/chap2/Fig2.18a.pdf){width="81mm"}
**Figure 2.18:** A Huffman encoding tree.
Given a Huffman tree, we can find the encoding of any symbol by starting
at the root and moving down until we reach the leaf that holds the
symbol. Each time we move down a left branch we add a 0 to the code, and
each time we move down a right branch we add a 1. (We decide which
branch to follow by testing to see which branch either is the leaf node
for the symbol or contains the symbol in its set.) For example, starting
from the root of the tree in [Figure 2.18](#Figure 2.18), we arrive at
the leaf for D by following a right branch, then a left branch, then a
right branch, then a right branch; hence, the code for D is 1011.
To decode a bit sequence using a Huffman tree, we begin at the root and
use the successive zeros and ones of the bit sequence to determine
whether to move down the left or the right branch. Each time we come to
a leaf, we have generated a new symbol in the message, at which point we
start over from the root of the tree to find the next symbol. For
example, suppose we are given the tree above and the sequence 10001010.
Starting at the root, we move down the right branch (since the first
bit of the string is 1), then down the left branch (since the second bit
is 0), then down the left branch (since the third bit is also 0). This
brings us to the leaf for B, so the first symbol of the decoded message
is B. Now we start again at the root, and we make a left move because
the next bit in the string is 0. This brings us to the leaf for A. Then
we start again at the root with the rest of the string 1010, so we move
right, left, right, left and reach C. Thus, the entire message is BAC.
#### Generating Huffman trees {#generating-huffman-trees .unnumbered}
Given an "alphabet" of symbols and their relative frequencies, how do we
construct the "best" code? (In other words, which tree will encode
messages with the fewest bits?) Huffman gave an algorithm for doing this
and showed that the resulting code is indeed the best variable-length
code for messages where the relative frequency of the symbols matches
the frequencies with which the code was constructed. We will not prove
this optimality of Huffman codes here, but we will show how Huffman
trees are constructed.[^108]
The algorithm for generating a Huffman tree is very simple. The idea is
to arrange the tree so that the symbols with the lowest frequency appear
farthest away from the root. Begin with the set of leaf nodes,
containing symbols and their frequencies, as determined by the initial
data from which the code is to be constructed. Now find two leaves with
the lowest weights and merge them to produce a node that has these two
nodes as its left and right branches. The weight of the new node is the
sum of the two weights. Remove the two leaves from the original set and
replace them by this new node. Now continue this process. At each step,
merge two nodes with the smallest weights, removing them from the set
and replacing them with a node that has these two as its left and right
branches. The process stops when there is only one node left, which is
the root of the entire tree. Here is how the Huffman tree of [Figure
2.18](#Figure 2.18) was generated:
::: scheme
Initial leaves   (A 8) (B 3) (C 1) (D 1) (E 1) (F 1) (G 1) (H 1)
Merge            (A 8) (B 3) (C D 2) (E 1) (F 1) (G 1) (H 1)
Merge            (A 8) (B 3) (C D 2) (E F 2) (G 1) (H 1)
Merge            (A 8) (B 3) (C D 2) (E F 2) (G H 2)
Merge            (A 8) (B 3) (C D 2) (E F G H 4)
Merge            (A 8) (B C D 5) (E F G H 4)
Merge            (A 8) (B C D E F G H 9)
Final merge      (A B C D E F G H 17)
:::
The algorithm does not always specify a unique tree, because there may
not be unique smallest-weight nodes at each step. Also, the choice of
the order in which the two nodes are merged (i.e., which will be the
right branch and which will be the left branch) is arbitrary.
#### Representing Huffman trees {#representing-huffman-trees .unnumbered}
In the exercises below we will work with a system that uses Huffman
trees to encode and decode messages and generates Huffman trees
according to the algorithm outlined above. We will begin by discussing
how trees are represented.
Leaves of the tree are represented by a list consisting of the symbol
`leaf`, the symbol at the leaf, and the weight:
::: scheme
(define (make-leaf symbol weight) (list 'leaf symbol weight))
(define (leaf? object) (eq? (car object) 'leaf))
(define (symbol-leaf x) (cadr x))
(define (weight-leaf x) (caddr x))
:::
A general tree will be a list of a left branch, a right branch, a set of
symbols, and a weight. The set of symbols will be simply a list of the
symbols, rather than some more sophisticated set representation. When we
make a tree by merging two nodes, we obtain the weight of the tree as
the sum of the weights of the nodes, and the set of symbols as the union
of the sets of symbols for the nodes. Since our symbol sets are
represented as lists, we can form the union by using the `append`
procedure we defined in [Section 2.2.1](#Section 2.2.1):
::: scheme
(define (make-code-tree left right)
  (list left
        right
        (append (symbols left) (symbols right))
        (+ (weight left) (weight right))))
:::
If we make a tree in this way, we have the following selectors:
::: scheme
(define (left-branch tree) (car tree))
(define (right-branch tree) (cadr tree))
(define (symbols tree)
  (if (leaf? tree)
      (list (symbol-leaf tree))
      (caddr tree)))
(define (weight tree)
  (if (leaf? tree)
      (weight-leaf tree)
      (cadddr tree)))
:::
The procedures `symbols` and `weight` must do something slightly
different depending on whether they are called with a leaf or a general
tree. These are simple examples of *generic procedures* (procedures that
can handle more than one kind of data), which we will have much more to
say about in [Section 2.4](#Section 2.4) and [Section
2.5](#Section 2.5).
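For instance, merging two leaves with `make-code-tree` produces a tree whose symbol set and weight combine those of its branches (the name `ab-tree` below is just an illustrative choice):
::: scheme
(define ab-tree (make-code-tree (make-leaf 'A 8) (make-leaf 'B 3)))
(symbols ab-tree)                 ; => (A B)
(weight ab-tree)                  ; => 11
(symbols (left-branch ab-tree))   ; => (A)
:::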
#### The decoding procedure {#the-decoding-procedure .unnumbered}
The following procedure implements the decoding algorithm. It takes as
arguments a list of zeros and ones, together with a Huffman tree.
::: scheme
(define (decode bits tree)
  (define (decode-1 bits current-branch)
    (if (null? bits)
        '()
        (let ((next-branch
               (choose-branch (car bits) current-branch)))
          (if (leaf? next-branch)
              (cons (symbol-leaf next-branch)
                    (decode-1 (cdr bits) tree))
              (decode-1 (cdr bits) next-branch)))))
  (decode-1 bits tree))

(define (choose-branch bit branch)
  (cond ((= bit 0) (left-branch branch))
        ((= bit 1) (right-branch branch))
        (else (error \"bad bit: CHOOSE-BRANCH\" bit))))
:::
The procedure `decode/1` takes two arguments: the list of remaining bits
and the current position in the tree. It keeps moving "down" the tree,
choosing a left or a right branch according to whether the next bit in
the list is a zero or a one. (This is done with the procedure
`choose/branch`.) When it reaches a leaf, it returns the symbol at that
leaf as the next symbol in the message by `cons`ing it onto the result
of decoding the rest of the message, starting at the root of the tree.
Note the error check in the final clause of `choose/branch`, which
complains if the procedure finds something other than a zero or a one in
the input data.
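As a quick check, here is `decode` applied to a small tree built by hand; the symbols and the name `demo-tree` are arbitrary, chosen only for illustration:
::: scheme
(define demo-tree
  (make-code-tree (make-leaf 'X 4)
                  (make-code-tree (make-leaf 'Y 2)
                                  (make-leaf 'Z 1))))

(decode '(0 1 0 1 1) demo-tree)   ; => (X Y Z)
:::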
#### Sets of weighted elements {#sets-of-weighted-elements .unnumbered}
In our representation of trees, each non-leaf node contains a set of
symbols, which we have represented as a simple list. However, the
tree-generating algorithm discussed above requires that we also work
with sets of leaves and trees, successively merging the two smallest
items. Since we will be required to repeatedly find the smallest item in
a set, it is convenient to use an ordered representation for this kind
of set.
We will represent a set of leaves and trees as a list of elements,
arranged in increasing order of weight. The following `adjoin/set`
procedure for constructing sets is similar to the one described in
[Exercise 2.61](#Exercise 2.61); however, items are compared by their
weights, and the element being added to the set is never already in it.
::: scheme
(define (adjoin-set x set)
  (cond ((null? set) (list x))
        ((\< (weight x) (weight (car set))) (cons x set))
        (else (cons (car set)
                    (adjoin-set x (cdr set))))))
:::
The following procedure takes a list of symbol-frequency pairs such as
`((A 4) (B 2) (C 1) (D 1))` and constructs an initial ordered set of
leaves, ready to be merged according to the Huffman algorithm:
::: scheme
(define (make-leaf-set pairs)
  (if (null? pairs)
      '()
      (let ((pair (car pairs)))
        (adjoin-set (make-leaf (car pair)    [; symbol]{.roman}
                               (cadr pair))  [; frequency]{.roman}
                    (make-leaf-set (cdr pairs))))))
:::
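For example, applying `make-leaf-set` to the list of pairs mentioned above yields the leaves arranged in increasing order of weight:
::: scheme
(make-leaf-set '((A 4) (B 2) (C 1) (D 1)))
; => ((leaf D 1) (leaf C 1) (leaf B 2) (leaf A 4))
:::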
> **[]{#Exercise 2.67 label="Exercise 2.67"}Exercise 2.67:** Define an
> encoding tree and a sample message:
>
> ::: scheme
> (define sample-tree
>   (make-code-tree (make-leaf 'A 4)
>                   (make-code-tree
>                    (make-leaf 'B 2)
>                    (make-code-tree
>                     (make-leaf 'D 1)
>                     (make-leaf 'C 1)))))
>
> (define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))
> :::
>
> Use the `decode` procedure to decode the message, and give the result.
> **[]{#Exercise 2.68 label="Exercise 2.68"}Exercise 2.68:** The
> `encode` procedure takes as arguments a message and a tree and
> produces the list of bits that gives the encoded message.
>
> ::: scheme
> (define (encode message tree)
>   (if (null? message)
>       '()
>       (append (encode-symbol (car message) tree)
>               (encode (cdr message) tree))))
> :::
>
> `encode/symbol` is a procedure, which you must write, that returns the
> list of bits that encodes a given symbol according to a given tree.
> You should design `encode/symbol` so that it signals an error if the
> symbol is not in the tree at all. Test your procedure by encoding the
> result you obtained in [Exercise 2.67](#Exercise 2.67) with the sample
> tree and seeing whether it is the same as the original sample message.
> **[]{#Exercise 2.69 label="Exercise 2.69"}Exercise 2.69:** The
> following procedure takes as its argument a list of symbol-frequency
> pairs (where no symbol appears in more than one pair) and generates a
> Huffman encoding tree according to the Huffman algorithm.
>
> ::: scheme
> (define (generate-huffman-tree pairs)
>   (successive-merge (make-leaf-set pairs)))
> :::
>
> `make/leaf/set` is the procedure given above that transforms the list
> of pairs into an ordered set of leaves. `successive/merge` is the
> procedure you must write, using `make/code/tree` to successively merge
> the smallest-weight elements of the set until there is only one
> element left, which is the desired Huffman tree. (This procedure is
> slightly tricky, but not really complicated. If you find yourself
> designing a complex procedure, then you are almost certainly doing
> something wrong. You can take significant advantage of the fact that
> we are using an ordered set representation.)
> **[]{#Exercise 2.70 label="Exercise 2.70"}Exercise 2.70:** The
> following eight-symbol alphabet with associated relative frequencies
> was designed to efficiently encode the lyrics of 1950s rock songs.
> (Note that the "symbols" of an "alphabet" need not be individual
> letters.)
>
> ::: scheme
> A     2    GET   2    SHA   3    WAH   1
> BOOM  1    JOB   2    NA   16    YIP   9
> :::
>
> Use `generate/huffman/tree` ([Exercise 2.69](#Exercise 2.69)) to
> generate a corresponding Huffman tree, and use `encode` ([Exercise
> 2.68](#Exercise 2.68)) to encode the following message:
>
> ::: scheme
> Get a job
> Sha na na na na na na na na
> Get a job
> Sha na na na na na na na na
> Wah yip yip yip yip yip yip yip yip yip
> Sha boom
> :::
>
> How many bits are required for the encoding? What is the smallest
> number of bits that would be needed to encode this song if we used a
> fixed-length code for the eight-symbol alphabet?
> **[]{#Exercise 2.71 label="Exercise 2.71"}Exercise 2.71:** Suppose we
> have a Huffman tree for an alphabet of $n$ symbols, and that the
> relative frequencies of the symbols are $1, 2, 4, \dots, 2^{n-1}$.
> Sketch the tree for $n=5$; for $n=10$. In such a tree (for general
> $n$) how many bits are required to encode the most frequent symbol?
> The least frequent symbol?
> **[]{#Exercise 2.72 label="Exercise 2.72"}Exercise 2.72:** Consider
> the encoding procedure that you designed in [Exercise
> 2.68](#Exercise 2.68). What is the order of growth in the number of
> steps needed to encode a symbol? Be sure to include the number of
> steps needed to search the symbol list at each node encountered. To
> answer this question in general is difficult. Consider the special
> case where the relative frequencies of the $n$ symbols are as
> described in [Exercise 2.71](#Exercise 2.71), and give the order of
> growth (as a function of $n$) of the number of steps needed to encode
> the most frequent and least frequent symbols in the alphabet.
## Multiple Representations for Abstract Data {#Section 2.4}
We have introduced data abstraction, a methodology for structuring
systems in such a way that much of a program can be specified
independent of the choices involved in implementing the data objects
that the program manipulates. For example, we saw in [Section
2.1.1](#Section 2.1.1) how to separate the task of designing a program
that uses rational numbers from the task of implementing rational
numbers in terms of the computer language's primitive mechanisms for
constructing compound data. The key idea was to erect an abstraction
barrier---in this case, the selectors and constructors for rational
numbers (`make/rat`, `numer`, `denom`)---that isolates the way rational
numbers are used from their underlying representation in terms of list
structure. A similar abstraction barrier isolates the details of the
procedures that perform rational arithmetic (`add/rat`, `sub/rat`,
`mul/rat`, and `div/rat`) from the "higher-level" procedures that use
rational numbers. The resulting program has the structure shown in
[Figure 2.1](#Figure 2.1).
These data-abstraction barriers are powerful tools for controlling
complexity. By isolating the underlying representations of data objects,
we can divide the task of designing a large program into smaller tasks
that can be performed separately. But this kind of data abstraction is
not yet powerful enough, because it may not always make sense to speak
of "the underlying representation" for a data object.
For one thing, there might be more than one useful representation for a
data object, and we might like to design systems that can deal with
multiple representations. To take a simple example, complex numbers may
be represented in two almost equivalent ways: in rectangular form (real
and imaginary parts) and in polar form (magnitude and angle). Sometimes
rectangular form is more appropriate and sometimes polar form is more
appropriate. Indeed, it is perfectly plausible to imagine a system in
which complex numbers are represented in both ways, and in which the
procedures for manipulating complex numbers work with either
representation.
More importantly, programming systems are often designed by many people
working over extended periods of time, subject to requirements that
change over time. In such an environment, it is simply not possible for
everyone to agree in advance on choices of data representation. So in
addition to the data-abstraction barriers that isolate representation
from use, we need abstraction barriers that isolate different design
choices from each other and permit different choices to coexist in a
single program. Furthermore, since large programs are often created by
combining pre-existing modules that were designed in isolation, we need
conventions that permit programmers to incorporate modules into larger
systems *additively*, that is, without having to redesign or reimplement
these modules.
In this section, we will learn how to cope with data that may be
represented in different ways by different parts of a program. This
requires constructing *generic procedures*---procedures that can operate
on data that may be represented in more than one way. Our main technique
for building generic procedures will be to work in terms of data objects
that have *type tags*, that is, data objects that include explicit
information about how they are to be processed. We will also discuss
*data-directed* programming, a powerful and convenient implementation
strategy for additively assembling systems with generic operations.
We begin with the simple complex-number example. We will see how type
tags and data-directed style enable us to design separate rectangular
and polar representations for complex numbers while maintaining the
notion of an abstract "complex-number" data object. We will accomplish
this by defining arithmetic procedures for complex numbers
(`add/complex`, `sub/complex`, `mul/complex`, and `div/complex`) in
terms of generic selectors that access parts of a complex number
independent of how the number is represented. The resulting
complex-number system, as shown in [Figure 2.19](#Figure 2.19), contains
two different kinds of abstraction barriers. The "horizontal"
abstraction barriers play the same role as the ones in [Figure
2.1](#Figure 2.1). They isolate "higher-level" operations from
"lower-level" representations. In addition, there is a "vertical"
barrier that gives us the ability to separately design and install
alternative representations.
[]{#Figure 2.19 label="Figure 2.19"}
![image](fig/chap2/Fig2.19a.pdf){width="108mm"}
> **Figure 2.19:** Data-abstraction barriers in the complex-number
> system.
In [Section 2.5](#Section 2.5) we will show how to use type tags and
data-directed style to develop a generic arithmetic package. This
provides procedures (`add`, `mul`, and so on) that can be used to
manipulate all sorts of "numbers" and can be easily extended when a new
kind of number is needed. In [Section 2.5.3](#Section 2.5.3), we'll show
how to use generic arithmetic in a system that performs symbolic
algebra.
### Representations for Complex Numbers {#Section 2.4.1}
We will develop a system that performs arithmetic operations on complex
numbers as a simple but unrealistic example of a program that uses
generic operations. We begin by discussing two plausible representations
for complex numbers as ordered pairs: rectangular form (real part and
imaginary part) and polar form (magnitude and angle).[^109] [Section
2.4.2](#Section 2.4.2) will show how both representations can be made to
coexist in a single system through the use of type tags and generic
operations.
Like rational numbers, complex numbers are naturally represented as
ordered pairs. The set of complex numbers can be thought of as a
two-dimensional space with two orthogonal axes, the "real" axis and the
"imaginary" axis. (See [Figure 2.20](#Figure 2.20).) From this point of
view, the complex number $z = x + iy$ (where $i^2 = -1$) can be thought
of as the point in the plane whose real coordinate is $x$ and whose
imaginary coordinate is $y$. Addition of complex numbers reduces in this
representation to addition of coordinates:
$$\begin{array}{r@{{}={}}l}
\hbox{Real-part} (z_1 + z_2)\; &
\hbox{ Real-part} (z_1)\; + \hbox{ Real-part} (z_2), \\
\hbox{Imaginary-part} (z_1 + z_2)\; &
\hbox{ Imaginary-part} (z_1)\; + \hbox{ Imaginary-part} (z_2).
\end{array}$$
[]{#Figure 2.20 label="Figure 2.20"}
![image](fig/chap2/Fig2.20.pdf){width="79mm"}
**Figure 2.20:** Complex numbers as points in the plane.
When multiplying complex numbers, it is more natural to think in terms
of representing a complex number in polar form, as a magnitude and an
angle ($r$ and $A$ in [Figure 2.20](#Figure 2.20)). The product of two
complex numbers is the vector obtained by stretching one complex number
by the length of the other and then rotating it through the angle of the
other:
$$\begin{array}{r@{{}={}}l}
\hbox{Magnitude} (z_1 \cdot z_2)\; &
\hbox{ Magnitude} (z_1)\; \cdot \hbox{ Magnitude} (z_2), \\
\hbox{Angle} (z_1 \cdot z_2)\; &
\hbox{ Angle} (z_1)\; + \hbox{ Angle} (z_2).
\end{array}$$
Thus, there are two different representations for complex numbers, which
are appropriate for different operations. Yet, from the viewpoint of
someone writing a program that uses complex numbers, the principle of
data abstraction suggests that all the operations for manipulating
complex numbers should be available regardless of which representation
is used by the computer. For example, it is often useful to be able to
find the magnitude of a complex number that is specified by rectangular
coordinates. Similarly, it is often useful to be able to determine the
real part of a complex number that is specified by polar coordinates.
To design such a system, we can follow the same data-abstraction
strategy we followed in designing the rational-number package in
[Section 2.1.1](#Section 2.1.1). Assume that the operations on complex
numbers are implemented in terms of four selectors: `real/part`,
`imag/part`, `magnitude` and `angle`. Also assume that we have two
procedures for constructing complex numbers: `make/from/real/imag`
returns a complex number with specified real and imaginary parts, and
`make/from/mag/ang` returns a complex number with specified magnitude
and angle. These procedures have the property that, for any complex
number `z`, both
::: scheme
(make-from-real-imag (real-part z) (imag-part z))
:::
and
::: scheme
(make-from-mag-ang (magnitude z) (angle z))
:::
produce complex numbers that are equal to `z`.
Using these constructors and selectors, we can implement arithmetic on
complex numbers using the "abstract data" specified by the constructors
and selectors, just as we did for rational numbers in [Section
2.1.1](#Section 2.1.1). As shown in the formulas above, we can add and
subtract complex numbers in terms of real and imaginary parts while
multiplying and dividing complex numbers in terms of magnitudes and
angles:
::: scheme
(define (add-complex z1 z2)
  (make-from-real-imag (+ (real-part z1) (real-part z2))
                       (+ (imag-part z1) (imag-part z2))))
(define (sub-complex z1 z2)
  (make-from-real-imag (- (real-part z1) (real-part z2))
                       (- (imag-part z1) (imag-part z2))))
(define (mul-complex z1 z2)
  (make-from-mag-ang (\* (magnitude z1) (magnitude z2))
                     (+ (angle z1) (angle z2))))
(define (div-complex z1 z2)
  (make-from-mag-ang (/ (magnitude z1) (magnitude z2))
                     (- (angle z1) (angle z2))))
:::
To complete the complex-number package, we must choose a representation
and we must implement the constructors and selectors in terms of
primitive numbers and primitive list structure. There are two obvious
ways to do this: We can represent a complex number in "rectangular form"
as a pair (real part, imaginary part) or in "polar form" as a pair
(magnitude, angle). Which shall we choose?
In order to make the different choices concrete, imagine that there are
two programmers, Ben Bitdiddle and Alyssa P. Hacker, who are
independently designing representations for the complex-number system.
Ben chooses to represent complex numbers in rectangular form. With this
choice, selecting the real and imaginary parts of a complex number is
straightforward, as is constructing a complex number with given real and
imaginary parts. To find the magnitude and the angle, or to construct a
complex number with a given magnitude and angle, he uses the
trigonometric relations
$$\begin{array}{r@{{}={}}lr@{{}={}}l}
x & r \cos A, \qquad & r & \sqrt{x^2 + y^2}, \\
y & r \sin A, \qquad & A & \arctan(y, x),
\end{array}$$
which relate the real and imaginary parts $(x, y)$ to the magnitude and
the angle $(r, A)$.[^110] Ben's representation is therefore given by the
following selectors and constructors:
::: scheme
(define (real-part z) (car z))
(define (imag-part z) (cdr z))
(define (magnitude z)
  (sqrt (+ (square (real-part z))
           (square (imag-part z)))))
(define (angle z)
  (atan (imag-part z) (real-part z)))
(define (make-from-real-imag x y) (cons x y))
(define (make-from-mag-ang r a)
  (cons (\* r (cos a)) (\* r (sin a))))
:::
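With Ben's representation, a complex number such as $3 + 4i$ is simply the pair `(3 . 4)`, and the selectors behave as one would expect (the binding `z` below is only for illustration):
::: scheme
(define z (make-from-real-imag 3 4))
(real-part z)   ; => 3
(imag-part z)   ; => 4
(magnitude z)   ; => 5
:::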
Alyssa, in contrast, chooses to represent complex numbers in polar form.
For her, selecting the magnitude and angle is straightforward, but she
has to use the trigonometric relations to obtain the real and imaginary
parts. Alyssa's representation is:
::: scheme
(define (real-part z)
  (\* (magnitude z) (cos (angle z))))
(define (imag-part z)
  (\* (magnitude z) (sin (angle z))))
(define (magnitude z) (car z))
(define (angle z) (cdr z))
(define (make-from-real-imag x y)
  (cons (sqrt (+ (square x) (square y)))
        (atan y x)))
(define (make-from-mag-ang r a) (cons r a))
:::
The discipline of data abstraction ensures that the same implementation
of `add/complex`, `sub/complex`, `mul/complex`, and `div/complex` will
work with either Ben's representation or Alyssa's representation.
### Tagged data {#Section 2.4.2}
One way to view data abstraction is as an application of the "principle
of least commitment." In implementing the complex-number system in
[Section 2.4.1](#Section 2.4.1), we can use either Ben's rectangular
representation or Alyssa's polar representation. The abstraction barrier
formed by the selectors and constructors permits us to defer to the last
possible moment the choice of a concrete representation for our data
objects and thus retain maximum flexibility in our system design.
The principle of least commitment can be carried to even further
extremes. If we desire, we can maintain the ambiguity of representation
even *after* we have designed the selectors and constructors, and elect
to use both Ben's representation *and* Alyssa's representation. If both
representations are included in a single system, however, we will need
some way to distinguish data in polar form from data in rectangular
form. Otherwise, if we were asked, for instance, to find the `magnitude`
of the pair (3, 4), we wouldn't know whether to answer 5 (interpreting
the number in rectangular form) or 3 (interpreting the number in polar
form). A straightforward way to accomplish this distinction is to
include a *type tag*---the symbol `rectangular` or `polar`---as part of
each complex number. Then when we need to manipulate a complex number we
can use the tag to decide which selector to apply.
In order to manipulate tagged data, we will assume that we have
procedures `type/tag` and `contents` that extract from a data object the
tag and the actual contents (the polar or rectangular coordinates, in
the case of a complex number). We will also postulate a procedure
`attach/tag` that takes a tag and contents and produces a tagged data
object. A straightforward way to implement this is to use ordinary list
structure:
::: scheme
(define (attach-tag type-tag contents)
  (cons type-tag contents))
(define (type-tag datum)
  (if (pair? datum)
      (car datum)
      (error \"Bad tagged datum: TYPE-TAG\" datum)))
(define (contents datum)
  (if (pair? datum)
      (cdr datum)
      (error \"Bad tagged datum: CONTENTS\" datum)))
:::
Using these procedures, we can define predicates `rectangular?` and
`polar?`, which recognize rectangular and polar numbers, respectively:
::: scheme
(define (rectangular? z) (eq? (type-tag z) 'rectangular))
(define (polar? z) (eq? (type-tag z) 'polar))
:::
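A short illustration of how these procedures behave on a tagged pair (the binding `z` is ours, chosen for illustration):
::: scheme
(define z (attach-tag 'rectangular (cons 3 4)))
(type-tag z)      ; => rectangular
(contents z)      ; => (3 . 4)
(rectangular? z)  ; => #t
(polar? z)        ; => #f
:::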
With type tags, Ben and Alyssa can now modify their code so that their
two different representations can coexist in the same system. Whenever
Ben constructs a complex number, he tags it as rectangular. Whenever
Alyssa constructs a complex number, she tags it as polar. In addition,
Ben and Alyssa must make sure that the names of their procedures do not
conflict. One way to do this is for Ben to append the suffix
`rectangular` to the name of each of his representation procedures and
for Alyssa to append `polar` to the names of hers. Here is Ben's revised
rectangular representation from [Section 2.4.1](#Section 2.4.1):
::: scheme
(define (real-part-rectangular z) (car z))
(define (imag-part-rectangular z) (cdr z))
(define (magnitude-rectangular z)
  (sqrt (+ (square (real-part-rectangular z))
           (square (imag-part-rectangular z)))))
(define (angle-rectangular z)
  (atan (imag-part-rectangular z)
        (real-part-rectangular z)))
(define (make-from-real-imag-rectangular x y)
  (attach-tag 'rectangular (cons x y)))
(define (make-from-mag-ang-rectangular r a)
  (attach-tag 'rectangular
              (cons (\* r (cos a)) (\* r (sin a)))))
:::
and here is Alyssa's revised polar representation:
::: scheme
(define (real-part-polar z)
  (\* (magnitude-polar z) (cos (angle-polar z))))
(define (imag-part-polar z)
  (\* (magnitude-polar z) (sin (angle-polar z))))
(define (magnitude-polar z) (car z))
(define (angle-polar z) (cdr z))
(define (make-from-real-imag-polar x y)
  (attach-tag 'polar
              (cons (sqrt (+ (square x) (square y)))
                    (atan y x))))
(define (make-from-mag-ang-polar r a)
  (attach-tag 'polar (cons r a)))
:::
Each generic selector is implemented as a procedure that checks the tag
of its argument and calls the appropriate procedure for handling data of
that type. For example, to obtain the real part of a complex number,
`real/part` examines the tag to determine whether to use Ben's
`real/part/rectangular` or Alyssa's `real/part/polar`. In either case,
we use `contents` to extract the bare, untagged datum and send this to
the rectangular or polar procedure as required:
::: scheme
(define (real-part z)
  (cond ((rectangular? z) (real-part-rectangular (contents z)))
        ((polar? z) (real-part-polar (contents z)))
        (else (error \"Unknown type: REAL-PART\" z))))
(define (imag-part z)
  (cond ((rectangular? z) (imag-part-rectangular (contents z)))
        ((polar? z) (imag-part-polar (contents z)))
        (else (error \"Unknown type: IMAG-PART\" z))))
(define (magnitude z)
  (cond ((rectangular? z) (magnitude-rectangular (contents z)))
        ((polar? z) (magnitude-polar (contents z)))
        (else (error \"Unknown type: MAGNITUDE\" z))))
(define (angle z)
  (cond ((rectangular? z) (angle-rectangular (contents z)))
        ((polar? z) (angle-polar (contents z)))
        (else (error \"Unknown type: ANGLE\" z))))
:::
To implement the complex-number arithmetic operations, we can use the
same procedures `add/complex`, `sub/complex`, `mul/complex`, and
`div/complex` from [Section 2.4.1](#Section 2.4.1), because the
selectors they call are generic, and so will work with either
representation. For example, the procedure `add/complex` is still
::: scheme
(define (add-complex z1 z2)
  (make-from-real-imag (+ (real-part z1) (real-part z2))
                       (+ (imag-part z1) (imag-part z2))))
:::
Finally, we must choose whether to construct complex numbers using Ben's
representation or Alyssa's representation. One reasonable choice is to
construct rectangular numbers whenever we have real and imaginary parts
and to construct polar numbers whenever we have magnitudes and angles:
::: scheme
(define (make-from-real-imag x y)
  (make-from-real-imag-rectangular x y))
(define (make-from-mag-ang r a)
  (make-from-mag-ang-polar r a))
:::
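The pieces now fit together: constructing a number tags it, and each generic selector strips the tag and dispatches to the matching representation (the bindings below are illustrative only):
::: scheme
(define z (make-from-real-imag 3 4))   ; tagged 'rectangular
(magnitude z)                          ; => 5, via magnitude-rectangular
(angle (make-from-mag-ang 2 0.5))      ; => 0.5, via angle-polar
:::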
The resulting complex-number system has the structure shown in [Figure
2.21](#Figure 2.21). The system has been decomposed into three
relatively independent parts: the complex-number-arithmetic operations,
Alyssa's polar implementation, and Ben's rectangular implementation. The
polar and rectangular implementations could have been written by Ben and
Alyssa working separately, and both of these can be used as underlying
representations by a third programmer implementing the
complex-arithmetic procedures in terms of the abstract
constructor/selector interface.
[]{#Figure 2.21 label="Figure 2.21"}
![image](fig/chap2/Fig2.21a.pdf){width="108mm"}
**Figure 2.21:** Structure of the generic complex-arithmetic system.
Since each data object is tagged with its type, the selectors operate on
the data in a generic manner. That is, each selector is defined to have
a behavior that depends upon the particular type of data it is applied
to. Notice the general mechanism for interfacing the separate
representations: Within a given representation implementation (say,
Alyssa's polar package) a complex number is an untyped pair (magnitude,
angle). When a generic selector operates on a number of `polar` type, it
strips off the tag and passes the contents on to Alyssa's code.
Conversely, when Alyssa constructs a number for general use, she tags it
with a type so that it can be appropriately recognized by the
higher-level procedures. This discipline of stripping off and attaching
tags as data objects are passed from level to level can be an important
organizational strategy, as we shall see in [Section 2.5](#Section 2.5).
### Data-Directed Programming and Additivity {#Section 2.4.3}
The general strategy of checking the type of a datum and calling an
appropriate procedure is called *dispatching on type*. This is a
powerful strategy for obtaining modularity in system design. On the
other hand, implementing the dispatch as in [Section
2.4.2](#Section 2.4.2) has two significant weaknesses. One weakness is
that the generic interface procedures (`real/part`, `imag/part`,
`magnitude`, and `angle`) must know about all the different
representations. For instance, suppose we wanted to incorporate a new
representation for complex numbers into our complex-number system. We
would need to identify this new representation with a type, and then add
a clause to each of the generic interface procedures to check for the
new type and apply the appropriate selector for that representation.
Another weakness of the technique is that even though the individual
representations can be designed separately, we must guarantee that no
two procedures in the entire system have the same name. This is why Ben
and Alyssa had to change the names of their original procedures from
[Section 2.4.1](#Section 2.4.1).
The issue underlying both of these weaknesses is that the technique for
implementing generic interfaces is not *additive*. The person
implementing the generic selector procedures must modify those
procedures each time a new representation is installed, and the people
interfacing the individual representations must modify their code to
avoid name conflicts. In each of these cases, the changes that must be
made to the code are straightforward, but they must be made nonetheless,
and this is a source of inconvenience and error. This is not much of a
problem for the complex-number system as it stands, but suppose there
were not two but hundreds of different representations for complex
numbers. And suppose that there were many generic selectors to be
maintained in the abstract-data interface. Suppose, in fact, that no one
programmer knew all the interface procedures or all the representations.
The problem is real and must be addressed in such programs as
large-scale data-base-management systems.
What we need is a means for modularizing the system design even further.
This is provided by the programming technique known as *data-directed
programming*. To understand how data-directed programming works, begin
with the observation that whenever we deal with a set of generic
operations that are common to a set of different types we are, in
effect, dealing with a two-dimensional table that contains the possible
operations on one axis and the possible types on the other axis. The
entries in the table are the procedures that implement each operation
for each type of argument presented. In the complex-number system
developed in the previous section, the correspondence between operation
name, data type, and actual procedure was spread out among the various
conditional clauses in the generic interface procedures. But the same
information could have been organized in a table, as shown in [Figure
2.22](#Figure 2.22).
Data-directed programming is the technique of designing programs to work
with such a table directly. Previously, we implemented the mechanism
that interfaces the complex-arithmetic code with the two representation
packages as a set of procedures that each perform an explicit dispatch
on type. Here we will implement the interface as a single procedure that
looks up the combination of the operation name and argument type in the
table to find the correct procedure to apply, and then applies it to the
contents of the argument. If we do this, then to add a new
representation package to the system we need not change any existing
procedures; we need only add new entries to the table.
[]{#Figure 2.22 label="Figure 2.22"}
![image](fig/chap2/Fig2.22.pdf){width="102mm"}
**Figure 2.22:** Table of operations for the complex-number system.
To implement this plan, assume that we have two procedures, `put` and
`get`, for manipulating the operation-and-type table:
- $\hbox{\tt(put}\;\langle$*op*$\kern0.1em\rangle\;\langle$*type*$\kern0.08em\rangle\;\langle$*item*$\kern0.08em\rangle\hbox{\tt)}$
installs the $\langle$*item*$\kern0.08em\rangle$ in the table,
indexed by the $\langle$*op*$\kern0.1em\rangle$ and the
$\langle$*type*$\kern0.08em\rangle$.
- $\hbox{\tt(get}\;\langle$*op*$\kern0.1em\rangle\;\langle$*type*$\kern0.08em\rangle\hbox{\tt)}$
looks up the $\langle$*op*$\kern0.08em\rangle$,
$\langle$*type*$\kern0.08em\rangle$ entry in the table and returns
the item found there. If no item is found, `get` returns false.
For now, we can assume that `put` and `get` are included in our
language. In [Chapter 3](#Chapter 3) ([Section 3.3.3](#Section 3.3.3))
we will see how to implement these and other operations for manipulating
tables.
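If you want to experiment with the packages below before reaching that point, a crude stand-in for `put` and `get` can be built on a flat list. This sketch is not the book's implementation (it uses assignment, which is not introduced until [Chapter 3](#Chapter 3)), but it is enough to try the examples that follow:
::: scheme
; Crude stand-in for the operation-and-type table; Section 3.3.3
; develops a proper table implementation.
(define op-table '())

(define (put op type item)
  (set! op-table (cons (list op type item) op-table)))

(define (get op type)
  (define (lookup entries)
    (cond ((null? entries) #f)
          ((and (equal? op (car (car entries)))
                (equal? type (cadr (car entries))))
           (caddr (car entries)))
          (else (lookup (cdr entries)))))
  (lookup op-table))
:::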
Here is how data-directed programming can be used in the complex-number
system. Ben, who developed the rectangular representation, implements
his code just as he did originally. He defines a collection of
procedures, or a *package*, and interfaces these to the rest of the
system by adding entries to the table that tell the system how to
operate on rectangular numbers. This is accomplished by calling the
following procedure:
::: scheme
(define (install-rectangular-package)
  [;; internal procedures]{.roman}
  (define (real-part z) (car z))
  (define (imag-part z) (cdr z))
  (define (make-from-real-imag x y) (cons x y))
  (define (magnitude z)
    (sqrt (+ (square (real-part z))
             (square (imag-part z)))))
  (define (angle z)
    (atan (imag-part z) (real-part z)))
  (define (make-from-mag-ang r a)
    (cons (\* r (cos a)) (\* r (sin a))))
  [;; interface to the rest of the system]{.roman}
  (define (tag x) (attach-tag 'rectangular x))
  (put 'real-part '(rectangular) real-part)
  (put 'imag-part '(rectangular) imag-part)
  (put 'magnitude '(rectangular) magnitude)
  (put 'angle '(rectangular) angle)
  (put 'make-from-real-imag 'rectangular
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'rectangular
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
:::
Notice that the internal procedures here are the same procedures from
[Section 2.4.1](#Section 2.4.1) that Ben wrote when he was working in
isolation. No changes are necessary in order to interface them to the
rest of the system. Moreover, since these procedure definitions are
internal to the installation procedure, Ben needn't worry about name
conflicts with other procedures outside the rectangular package. To
interface these to the rest of the system, Ben installs his `real/part`
procedure under the operation name `real/part` and the type
`(rectangular)`, and similarly for the other selectors.[^111] The
interface also defines the constructors to be used by the external
system.[^112] These are identical to Ben's internally defined
constructors, except that they attach the tag.
Alyssa's polar package is analogous:
::: scheme
(define (install-polar-package)
  [;; internal procedures]{.roman}
  (define (magnitude z) (car z))
  (define (angle z) (cdr z))
  (define (make-from-mag-ang r a) (cons r a))
  (define (real-part z)
    (\* (magnitude z) (cos (angle z))))
  (define (imag-part z)
    (\* (magnitude z) (sin (angle z))))
  (define (make-from-real-imag x y)
    (cons (sqrt (+ (square x) (square y)))
          (atan y x)))
  [;; interface to the rest of the system]{.roman}
  (define (tag x) (attach-tag 'polar x))
  (put 'real-part '(polar) real-part)
  (put 'imag-part '(polar) imag-part)
  (put 'magnitude '(polar) magnitude)
  (put 'angle '(polar) angle)
  (put 'make-from-real-imag 'polar
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'polar
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
:::
Even though Ben and Alyssa both still use their original procedures
defined with the same names as each other's (e.g., `real/part`), these
definitions are now internal to different procedures (see [Section
1.1.8](#Section 1.1.8)), so there is no name conflict.
The complex-arithmetic selectors access the table by means of a general
"operation" procedure called `apply/generic`, which applies a generic
operation to some arguments. `apply/generic` looks in the table under
the name of the operation and the types of the arguments and applies the
resulting procedure if one is present:[^113]
::: scheme
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (error
           \"No method for these types: APPLY-GENERIC\"
           (list op type-tags))))))
:::
Using `apply/generic`, we can define our generic selectors as follows:
::: scheme
(define (real-part z) (apply-generic 'real-part z))
(define (imag-part z) (apply-generic 'imag-part z))
(define (magnitude z) (apply-generic 'magnitude z))
(define (angle z) (apply-generic 'angle z))
:::
Observe that these do not change at all if a new representation is added
to the system.
We can also extract from the table the constructors to be used by the
programs external to the packages in making complex numbers from real
and imaginary parts and from magnitudes and angles. As in [Section
2.4.2](#Section 2.4.2), we construct rectangular numbers whenever we
have real and imaginary parts, and polar numbers whenever we have
magnitudes and angles:
::: scheme
(define (make-from-real-imag x y)
  ((get 'make-from-real-imag 'rectangular) x y))
(define (make-from-mag-ang r a)
  ((get 'make-from-mag-ang 'polar) r a))
:::
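Once the two packages have been installed (and some implementation of `put` and `get` is available), the table-driven dispatch behaves just like the explicit version:
::: scheme
(install-rectangular-package)
(install-polar-package)

(magnitude (make-from-real-imag 3 4))   ; => 5
(angle (make-from-mag-ang 2 0.5))       ; => 0.5
:::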
> **[]{#Exercise 2.73 label="Exercise 2.73"}Exercise 2.73:** [Section
> 2.3.2](#Section 2.3.2) described a program that performs symbolic
> differentiation:
>
> ::: scheme
> (define (deriv exp var)
>   (cond ((number? exp) 0)
>         ((variable? exp) (if (same-variable? exp var) 1 0))
>         ((sum? exp)
>          (make-sum (deriv (addend exp) var)
>                    (deriv (augend exp) var)))
>         ((product? exp)
>          (make-sum (make-product
>                     (multiplier exp)
>                     (deriv (multiplicand exp) var))
>                    (make-product
>                     (deriv (multiplier exp) var)
>                     (multiplicand exp))))
>         $\color{SchemeDark}\langle$ *more rules can be added here* $\color{SchemeDark}\rangle$
>         (else (error \"unknown expression type: DERIV\" exp))))
> :::
>
> We can regard this program as performing a dispatch on the type of the
> expression to be differentiated. In this situation the "type tag" of
> the datum is the algebraic operator symbol (such as `+`) and the
> operation being performed is `deriv`. We can transform this program
> into data-directed style by rewriting the basic derivative procedure
> as
>
> ::: scheme
> (define (deriv exp var)
>   (cond ((number? exp) 0)
>         ((variable? exp) (if (same-variable? exp var) 1 0))
>         (else ((get 'deriv (operator exp))
>                (operands exp)
>                var))))
>
> (define (operator exp) (car exp))
> (define (operands exp) (cdr exp))
> :::
>
> a. Explain what was done above. Why can't we assimilate the
> predicates `number?` and `variable?` into the data-directed
> dispatch?
>
> b. Write the procedures for derivatives of sums and products, and the
> auxiliary code required to install them in the table used by the
> program above.
>
> c. Choose any additional differentiation rule that you like, such as
> the one for exponents ([Exercise 2.56](#Exercise 2.56)), and
> install it in this data-directed system.
>
> d. In this simple algebraic manipulator the type of an expression is
> the algebraic operator that binds it together. Suppose, however,
> we indexed the procedures in the opposite way, so that the
> dispatch line in `deriv` looked like
>
> ::: scheme
> ((get (operator exp) 'deriv) (operands exp) var)
> :::
>
> What corresponding changes to the derivative system are required?
> **[]{#Exercise 2.74 label="Exercise 2.74"}Exercise 2.74:** Insatiable
> Enterprises, Inc., is a highly decentralized conglomerate company
> consisting of a large number of independent divisions located all over
> the world. The company's computer facilities have just been
> interconnected by means of a clever network-interfacing scheme that
> makes the entire network appear to any user to be a single computer.
> Insatiable's president, in her first attempt to exploit the ability of
> the network to extract administrative information from division files,
> is dismayed to discover that, although all the division files have
> been implemented as data structures in Scheme, the particular data
> structure used varies from division to division. A meeting of division
> managers is hastily called to search for a strategy to integrate the
> files that will satisfy headquarters' needs while preserving the
> existing autonomy of the divisions.
>
> Show how such a strategy can be implemented with data-directed
> programming. As an example, suppose that each division's personnel
> records consist of a single file, which contains a set of records
> keyed on employees' names. The structure of the set varies from
> division to division. Furthermore, each employee's record is itself a
> set (structured differently from division to division) that contains
> information keyed under identifiers such as `address` and `salary`. In
> particular:
>
> a. Implement for headquarters a `get/record` procedure that retrieves
> a specified employee's record from a specified personnel file. The
> procedure should be applicable to any division's file. Explain how
> the individual divisions' files should be structured. In
> particular, what type information must be supplied?
>
> b. Implement for headquarters a `get/salary` procedure that returns
> the salary information from a given employee's record from any
> division's personnel file. How should the record be structured in
> order to make this operation work?
>
> c. Implement for headquarters a `find/employee/record` procedure.
> This should search all the divisions' files for the record of a
> given employee and return the record. Assume that this procedure
> takes as arguments an employee's name and a list of all the
> divisions' files.
>
> d. When Insatiable takes over a new company, what changes must be
> made in order to incorporate the new personnel information into
> the central system?
#### Message passing {#message-passing .unnumbered}
The key idea of data-directed programming is to handle generic
operations in programs by dealing explicitly with operation-and-type
tables, such as the table in [Figure 2.22](#Figure 2.22). The style of
programming we used in [Section 2.4.2](#Section 2.4.2) organized the
required dispatching on type by having each operation take care of its
own dispatching. In effect, this decomposes the operation-and-type table
into rows, with each generic operation procedure representing a row of
the table.
An alternative implementation strategy is to decompose the table into
columns and, instead of using "intelligent operations" that dispatch on
data types, to work with "intelligent data objects" that dispatch on
operation names. We can do this by arranging things so that a data
object, such as a rectangular number, is represented as a procedure that
takes as input the required operation name and performs the operation
indicated. In such a discipline, `make/from/real/imag` could be written
as
::: scheme
(define (make-from-real-imag x y)
  (define (dispatch op)
    (cond ((eq? op 'real-part) x)
          ((eq? op 'imag-part) y)
          ((eq? op 'magnitude)
           (sqrt (+ (square x) (square y))))
          ((eq? op 'angle) (atan y x))
          (else
           (error \"Unknown op: MAKE-FROM-REAL-IMAG\" op))))
  dispatch)
:::
The corresponding `apply/generic` procedure, which applies a generic
operation to an argument, now simply feeds the operation's name to the
data object and lets the object do the work:[^114]
::: scheme
(define (apply-generic op arg) (arg op))
:::
Note that the value returned by `make/from/real/imag` is a
procedure---the internal `dispatch` procedure. This is the procedure
that is invoked when `apply/generic` requests an operation to be
performed.
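For example, with this constructor and the one-line `apply/generic` above, a complex number answers requests directly (the binding `z` below is illustrative):
::: scheme
(define z (make-from-real-imag 3 4))
(apply-generic 'real-part z)   ; => 3
(apply-generic 'magnitude z)   ; => 5
(z 'angle)                     ; => (atan 4 3), about 0.927
:::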
This style of programming is called *message passing*. The name comes
from the image that a data object is an entity that receives the
requested operation name as a "message." We have already seen an example
of message passing in [Section 2.1.3](#Section 2.1.3), where we saw how
`cons`, `car`, and `cdr` could be defined with no data objects but only
procedures. Here we see that message passing is not a mathematical trick
but a useful technique for organizing systems with generic operations.
In the remainder of this chapter we will continue to use data-directed
programming, rather than message passing, to discuss generic arithmetic
operations. In [Chapter 3](#Chapter 3) we will return to message
passing, and we will see that it can be a powerful tool for structuring
simulation programs.
> **[]{#Exercise 2.75 label="Exercise 2.75"}Exercise 2.75:** Implement
> the constructor `make/from/mag/ang` in message-passing style. This
> procedure should be analogous to the `make/from/real/imag` procedure
> given above.
> **[]{#Exercise 2.76 label="Exercise 2.76"}Exercise 2.76:** As a large
> system with generic operations evolves, new types of data objects or
> new operations may be needed. For each of the three
> strategies---generic operations with explicit dispatch, data-directed
> style, and message-passing style---describe the changes that must be
> made to a system in order to add new types or new operations. Which
> organization would be most appropriate for a system in which new types
> must often be added? Which would be most appropriate for a system in
> which new operations must often be added?
## Systems with Generic Operations {#Section 2.5}
In the previous section, we saw how to design systems in which data
objects can be represented in more than one way. The key idea is to link
the code that specifies the data operations to the several
representations by means of generic interface procedures. Now we will
see how to use this same idea not only to define operations that are
generic over different representations but also to define operations
that are generic over different kinds of arguments. We have already seen
several different packages of arithmetic operations: the primitive
arithmetic (`+`, `-`, `*`, `/`) built into our language, the
rational-number arithmetic (`add/rat`, `sub/rat`, `mul/rat`, `div/rat`)
of [Section 2.1.1](#Section 2.1.1), and the complex-number arithmetic
that we implemented in [Section 2.4.3](#Section 2.4.3). We will now use
data-directed techniques to construct a package of arithmetic operations
that incorporates all the arithmetic packages we have already
constructed.
[]{#Figure 2.23 label="Figure 2.23"}
![image](fig/chap2/Fig2.23a.pdf){width="111mm"}
**Figure 2.23:** Generic arithmetic system.
[Figure 2.23](#Figure 2.23) shows the structure of the system we shall
build. Notice the abstraction barriers. From the perspective of someone
using "numbers," there is a single procedure `add` that operates on
whatever numbers are supplied. `add` is part of a generic interface that
allows the separate ordinary-arithmetic, rational-arithmetic, and
complex-arithmetic packages to be accessed uniformly by programs that
use numbers. Any individual arithmetic package (such as the complex
package) may itself be accessed through generic procedures (such as
`add/complex`) that combine packages designed for different
representations (such as rectangular and polar). Moreover, the structure
of the system is additive, so that one can design the individual
arithmetic packages separately and combine them to produce a generic
arithmetic system.
### Generic Arithmetic Operations {#Section 2.5.1}
The task of designing generic arithmetic operations is analogous to that
of designing the generic complex-number operations. We would like, for
instance, to have a generic addition procedure `add` that acts like
ordinary primitive addition `+` on ordinary numbers, like `add/rat` on
rational numbers, and like `add/complex` on complex numbers. We can
implement `add`, and the other generic arithmetic operations, by
following the same strategy we used in [Section 2.4.3](#Section 2.4.3)
to implement the generic selectors for complex numbers. We will attach a
type tag to each kind of number and cause the generic procedure to
dispatch to an appropriate package according to the data type of its
arguments.
The generic arithmetic procedures are defined as follows:
::: scheme
(define (add x y) (apply-generic 'add x y))
(define (sub x y) (apply-generic 'sub x y))
(define (mul x y) (apply-generic 'mul x y))
(define (div x y) (apply-generic 'div x y))
:::
We begin by installing a package for handling *ordinary* numbers, that
is, the primitive numbers of our language. We will tag these with the
symbol `scheme/number`. The arithmetic operations in this package are
the primitive arithmetic procedures (so there is no need to define extra
procedures to handle the untagged numbers). Since these operations each
take two arguments, they are installed in the table keyed by the list
`(scheme/number scheme/number)`:
::: scheme
(define (install-scheme-number-package)
  (define (tag x) (attach-tag 'scheme-number x))
  (put 'add '(scheme-number scheme-number)
       (lambda (x y) (tag (+ x y))))
  (put 'sub '(scheme-number scheme-number)
       (lambda (x y) (tag (- x y))))
  (put 'mul '(scheme-number scheme-number)
       (lambda (x y) (tag (\* x y))))
  (put 'div '(scheme-number scheme-number)
       (lambda (x y) (tag (/ x y))))
  (put 'make 'scheme-number (lambda (x) (tag x)))
  'done)
:::
Users of the Scheme-number package will create (tagged) ordinary numbers
by means of the procedure:
::: scheme
(define (make-scheme-number n) ((get 'make 'scheme-number) n))
:::
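For instance (again assuming some `put`/`get` implementation is in place):
::: scheme
(install-scheme-number-package)
(add (make-scheme-number 3) (make-scheme-number 4))
; => (scheme-number . 7)
:::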
Now that the framework of the generic arithmetic system is in place, we
can readily include new kinds of numbers. Here is a package that
performs rational arithmetic. Notice that, as a benefit of additivity,
we can use without modification the rational-number code from [Section
2.1.1](#Section 2.1.1) as the internal procedures in the package:
::: scheme
(define (install-rational-package)
  [;; internal procedures]{.roman}
  (define (numer x) (car x))
  (define (denom x) (cdr x))
  (define (make-rat n d)
    (let ((g (gcd n d)))
      (cons (/ n g) (/ d g))))
  (define (add-rat x y)
    (make-rat (+ (\* (numer x) (denom y))
                 (\* (numer y) (denom x)))
              (\* (denom x) (denom y))))
  (define (sub-rat x y)
    (make-rat (- (\* (numer x) (denom y))
                 (\* (numer y) (denom x)))
              (\* (denom x) (denom y))))
  (define (mul-rat x y)
    (make-rat (\* (numer x) (numer y))
              (\* (denom x) (denom y))))
  (define (div-rat x y)
    (make-rat (\* (numer x) (denom y))
              (\* (denom x) (numer y))))
  [;; interface to rest of the system]{.roman}
  (define (tag x) (attach-tag 'rational x))
  (put 'add '(rational rational)
       (lambda (x y) (tag (add-rat x y))))
  (put 'sub '(rational rational)
       (lambda (x y) (tag (sub-rat x y))))
  (put 'mul '(rational rational)
       (lambda (x y) (tag (mul-rat x y))))
  (put 'div '(rational rational)
       (lambda (x y) (tag (div-rat x y))))
  (put 'make 'rational
       (lambda (n d) (tag (make-rat n d))))
  'done)

(define (make-rational n d)
  ((get 'make 'rational) n d))
:::
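For example, rational numbers now participate in the generic `add`:
::: scheme
(install-rational-package)
(add (make-rational 1 2) (make-rational 1 3))
; => (rational 5 . 6)
:::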
We can install a similar package to handle complex numbers, using the
tag `complex`. In creating the package, we extract from the table the
operations `make/from/real/imag` and `make/from/mag/ang` that were
defined by the rectangular and polar packages. Additivity permits us to
use, as the internal operations, the same `add/complex`, `sub/complex`,
`mul/complex`, and `div/complex` procedures from [Section
2.4.1](#Section 2.4.1).
::: scheme
(define (install-complex-package)
  [;; imported procedures from rectangular and polar packages]{.roman}
  (define (make-from-real-imag x y)
    ((get 'make-from-real-imag 'rectangular) x y))
  (define (make-from-mag-ang r a)
    ((get 'make-from-mag-ang 'polar) r a))
  [;; internal procedures]{.roman}
  (define (add-complex z1 z2)
    (make-from-real-imag (+ (real-part z1) (real-part z2))
                         (+ (imag-part z1) (imag-part z2))))
  (define (sub-complex z1 z2)
    (make-from-real-imag (- (real-part z1) (real-part z2))
                         (- (imag-part z1) (imag-part z2))))
  (define (mul-complex z1 z2)
    (make-from-mag-ang (\* (magnitude z1) (magnitude z2))
                       (+ (angle z1) (angle z2))))
  (define (div-complex z1 z2)
    (make-from-mag-ang (/ (magnitude z1) (magnitude z2))
                       (- (angle z1) (angle z2))))
  [;; interface to rest of the system]{.roman}
  (define (tag z) (attach-tag 'complex z))
  (put 'add '(complex complex)
       (lambda (z1 z2) (tag (add-complex z1 z2))))
  (put 'sub '(complex complex)
       (lambda (z1 z2) (tag (sub-complex z1 z2))))
  (put 'mul '(complex complex)
       (lambda (z1 z2) (tag (mul-complex z1 z2))))
  (put 'div '(complex complex)
       (lambda (z1 z2) (tag (div-complex z1 z2))))
  (put 'make-from-real-imag 'complex
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'complex
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
:::
Programs outside the complex-number package can construct complex
numbers either from real and imaginary parts or from magnitudes and
angles. Notice how the underlying procedures, originally defined in the
rectangular and polar packages, are exported to the complex package, and
exported from there to the outside world.
::: scheme
(define (make-complex-from-real-imag x y)
  ((get 'make-from-real-imag 'complex) x y))
(define (make-complex-from-mag-ang r a)
  ((get 'make-from-mag-ang 'complex) r a))
:::
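For example, adding two complex numbers built this way produces a doubly tagged result, which the next paragraph and [Figure 2.24](#Figure 2.24) explain:
::: scheme
(install-rectangular-package)
(install-polar-package)
(install-complex-package)

(add (make-complex-from-real-imag 3 4)
     (make-complex-from-real-imag 1 2))
; => (complex rectangular 4 . 6)
:::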
What we have here is a two-level tag system. A typical complex number,
such as $3 + 4i$ in rectangular form, would be represented as shown in
[Figure 2.24](#Figure 2.24). The outer tag (`complex`) is used to direct
the number to the complex package. Once within the complex package, the
next tag (`rectangular`) is used to direct the number to the rectangular
package. In a large and complicated system there might be many levels,
each interfaced with the next by means of generic operations. As a data
object is passed "downward," the outer tag that is used to direct it to
the appropriate package is stripped off (by applying `contents`) and the
next level of tag (if any) becomes visible to be used for further
dispatching.
[]{#Figure 2.24 label="Figure 2.24"}
![image](fig/chap2/Fig2.24c.pdf){width="64mm"}
> **Figure 2.24:** Representation of $3 + 4i$ in rectangular form.
In the above packages, we used `add/rat`, `add/complex`, and the other
arithmetic procedures exactly as originally written. Once these
definitions are internal to different installation procedures, however,
they no longer need names that are distinct from each other: we could
simply name them `add`, `sub`, `mul`, and `div` in both packages.
> **[]{#Exercise 2.77 label="Exercise 2.77"}Exercise 2.77:** Louis
> Reasoner tries to evaluate the expression `(magnitude z)` where `z` is
> the object shown in [Figure 2.24](#Figure 2.24). To his surprise,
> instead of the answer 5 he gets an error message from `apply/generic`,
> saying there is no method for the operation `magnitude` on the types
> `(complex)`. He shows this interaction to Alyssa P. Hacker, who says
> "The problem is that the complex-number selectors were never defined
> for `complex` numbers, just for `polar` and `rectangular` numbers. All
> you have to do to make this work is add the following to the `complex`
> package:"
>
> ::: scheme
> (put 'real-part '(complex) real-part)
> (put 'imag-part '(complex) imag-part)
> (put 'magnitude '(complex) magnitude)
> (put 'angle '(complex) angle)
> :::
>
> Describe in detail why this works. As an example, trace through all
> the procedures called in evaluating the expression `(magnitude z)`
> where `z` is the object shown in [Figure 2.24](#Figure 2.24). In
> particular, how many times is `apply/generic` invoked? What procedure
> is dispatched to in each case?
> **[]{#Exercise 2.78 label="Exercise 2.78"}Exercise 2.78:** The
> internal procedures in the `scheme/number` package are essentially
> nothing more than calls to the primitive procedures `+`, `-`, etc. It
> was not possible to use the primitives of the language directly
> because our type-tag system requires that each data object have a type
> attached to it. In fact, however, all Lisp implementations do have a
> type system, which they use internally. Primitive predicates such as
> `symbol?` and `number?` determine whether data objects have particular
> types. Modify the definitions of `type/tag`, `contents`, and
> `attach/tag` from [Section 2.4.2](#Section 2.4.2) so that our generic
> system takes advantage of Scheme's internal type system. That is to
> say, the system should work as before except that ordinary numbers
> should be represented simply as Scheme numbers rather than as pairs
> whose `car` is the symbol `scheme/number`.
> **[]{#Exercise 2.79 label="Exercise 2.79"}Exercise 2.79:** Define a
> generic equality predicate `equ?` that tests the equality of two
> numbers, and install it in the generic arithmetic package. This
> operation should work for ordinary numbers, rational numbers, and
> complex numbers.
> **[]{#Exercise 2.80 label="Exercise 2.80"}Exercise 2.80:** Define a
> generic predicate `=zero?` that tests if its argument is zero, and
> install it in the generic arithmetic package. This operation should
> work for ordinary numbers, rational numbers, and complex numbers.
### Combining Data of Different Types {#Section 2.5.2}
We have seen how to define a unified arithmetic system that encompasses
ordinary numbers, complex numbers, rational numbers, and any other type
of number we might decide to invent, but we have ignored an important
issue. The operations we have defined so far treat the different data
types as being completely independent. Thus, there are separate packages
for adding, say, two ordinary numbers, or two complex numbers. What we
have not yet considered is the fact that it is meaningful to define
operations that cross the type boundaries, such as the addition of a
complex number to an ordinary number. We have gone to great pains to
introduce barriers between parts of our programs so that they can be
developed and understood separately. We would like to introduce the
cross-type operations in some carefully controlled way, so that we can
support them without seriously violating our module boundaries.
One way to handle cross-type operations is to design a different
procedure for each possible combination of types for which the operation
is valid. For example, we could extend the complex-number package so
that it provides a procedure for adding complex numbers to ordinary
numbers and installs this in the table using the tag
`(complex scheme/number)`:[^115]
::: scheme
;; to be included in the complex package
(define (add-complex-to-schemenum z x)
  (make-from-real-imag (+ (real-part z) x) (imag-part z)))
(put 'add '(complex scheme-number)
     (lambda (z x) (tag (add-complex-to-schemenum z x))))
:::
This technique works, but it is cumbersome. With such a system, the cost
of introducing a new type is not just the construction of the package of
procedures for that type but also the construction and installation of
the procedures that implement the cross-type operations. This can easily
be much more code than is needed to define the operations on the type
itself. The method also undermines our ability to combine separate
packages additively, or at least to limit the extent to which the
implementors of the individual packages need to take account of other
packages. For instance, in the example above, it seems reasonable that
handling mixed operations on complex numbers and ordinary numbers should
be the responsibility of the complex-number package. Combining rational
numbers and complex numbers, however, might be done by the complex
package, by the rational package, or by some third package that uses
operations extracted from these two packages. Formulating coherent
policies on the division of responsibility among packages can be an
overwhelming task in designing systems with many packages and many
cross-type operations.
#### Coercion {#coercion .unnumbered}
In the general situation of completely unrelated operations acting on
completely unrelated types, implementing explicit cross-type operations,
cumbersome though it may be, is the best that one can hope for.
Fortunately, we can usually do better by taking advantage of additional
structure that may be latent in our type system. Often the different
data types are not completely independent, and there may be ways by
which objects of one type may be viewed as being of another type. This
process is called *coercion*. For example, if we are asked to
arithmetically combine an ordinary number with a complex number, we can
view the ordinary number as a complex number whose imaginary part is
zero. This transforms the problem to that of combining two complex
numbers, which can be handled in the ordinary way by the
complex-arithmetic package.
In general, we can implement this idea by designing coercion procedures
that transform an object of one type into an equivalent object of
another type. Here is a typical coercion procedure, which transforms a
given ordinary number to a complex number with that real part and zero
imaginary part:
::: scheme
(define (scheme-number->complex n)
  (make-complex-from-real-imag (contents n) 0))
:::
We install these coercion procedures in a special coercion table,
indexed under the names of the two types:
::: scheme
(put-coercion 'scheme-number 'complex scheme-number->complex)
:::
(We assume that there are `put/coercion` and `get/coercion` procedures
available for manipulating this table.) Generally some of the slots in
the table will be empty, because it is not generally possible to coerce
an arbitrary data object of each type into all other types. For example,
there is no way to coerce an arbitrary complex number to an ordinary
number, so there will be no general `complex/>scheme/number` procedure
included in the table.
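As a hypothetical interaction (a sketch, assuming the `make/scheme/number` constructor of [Section 2.5.1](#Section 2.5.1) and the pair-based tags of [Section 2.4.2](#Section 2.4.2)), looking up and applying this coercion yields a complex number with zero imaginary part:
::: scheme
((get-coercion 'scheme-number 'complex) (make-scheme-number 5))
*(complex rectangular 5 . 0)*
:::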
Once the coercion table has been set up, we can handle coercion in a
uniform manner by modifying the `apply/generic` procedure of [Section
2.4.3](#Section 2.4.3). When asked to apply an operation, we first check
whether the operation is defined for the arguments' types, just as
before. If so, we dispatch to the procedure found in the
operation-and-type table. Otherwise, we try coercion. For simplicity, we
consider only the case where there are two arguments.[^116] We check the
coercion table to see if objects of the first type can be coerced to the
second type. If so, we coerce the first argument and try the operation
again. If objects of the first type cannot in general be coerced to the
second type, we try the coercion the other way around to see if there is
a way to coerce the second argument to the type of the first argument.
Finally, if there is no known way to coerce either type to the other
type, we give up. Here is the procedure:
::: scheme
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (if (= (length args) 2)
              (let ((type1 (car type-tags))
                    (type2 (cadr type-tags))
                    (a1 (car args))
                    (a2 (cadr args)))
                (let ((t1->t2 (get-coercion type1 type2))
                      (t2->t1 (get-coercion type2 type1)))
                  (cond (t1->t2 (apply-generic op (t1->t2 a1) a2))
                        (t2->t1 (apply-generic op a1 (t2->t1 a2)))
                        (else (error "No method for these types"
                                     (list op type-tags))))))
              (error "No method for these types"
                     (list op type-tags)))))))
:::
This coercion scheme has many advantages over the method of defining
explicit cross-type operations, as outlined above. Although we still
need to write coercion procedures to relate the types (possibly $n^2$
procedures for a system with $n$ types), we need to write only one
procedure for each pair of types rather than a different procedure for
each collection of types and each generic operation.[^117] What we are
counting on here is the fact that the appropriate transformation between
types depends only on the types themselves, not on the operation to be
applied.
On the other hand, there may be applications for which our coercion
scheme is not general enough. Even when neither of the objects to be
combined can be converted to the type of the other it may still be
possible to perform the operation by converting both objects to a third
type. In order to deal with such complexity and still preserve
modularity in our programs, it is usually necessary to build systems
that take advantage of still further structure in the relations among
types, as we discuss next.
#### Hierarchies of types {#hierarchies-of-types .unnumbered}
The coercion scheme presented above relied on the existence of natural
relations between pairs of types. Often there is more "global" structure
in how the different types relate to each other. For instance, suppose
we are building a generic arithmetic system to handle integers, rational
numbers, real numbers, and complex numbers. In such a system, it is
quite natural to regard an integer as a special kind of rational number,
which is in turn a special kind of real number, which is in turn a
special kind of complex number. What we actually have is a so-called
*hierarchy of types*, in which, for example, integers are a *subtype* of
rational numbers (i.e., any operation that can be applied to a rational
number can automatically be applied to an integer). Conversely, we say
that rational numbers form a *supertype* of integers. The particular
hierarchy we have here is of a very simple kind, in which each type has
at most one supertype and at most one subtype. Such a structure, called
a *tower*, is illustrated in [Figure 2.25](#Figure 2.25).
[]{#Figure 2.25 label="Figure 2.25"}
![image](fig/chap2/Fig2.25.pdf){width="11mm"}
**Figure 2.25:** A tower of types.
If we have a tower structure, then we can greatly simplify the problem
of adding a new type to the hierarchy, for we need only specify how the
new type is embedded in the next supertype above it and how it is the
supertype of the type below it. For example, if we want to add an
integer to a complex number, we need not explicitly define a special
coercion procedure `integer/>complex`. Instead, we define how an integer
can be transformed into a rational number, how a rational number is
transformed into a real number, and how a real number is transformed
into a complex number. We then allow the system to transform the integer
into a complex number through these steps and then add the two complex
numbers.
We can redesign our `apply/generic` procedure in the following way: For
each type, we need to supply a `raise` procedure, which "raises" objects
of that type one level in the tower. Then when the system is required to
operate on objects of different types it can successively raise the
lower types until all the objects are at the same level in the tower.
([Exercise 2.83](#Exercise 2.83) and [Exercise 2.84](#Exercise 2.84)
concern the details of implementing such a strategy.)
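As a rough illustration of the idea (a minimal sketch, not the book's implementation, assuming integers carry the tag `integer` and that the rational package's constructor accepts them as numerators), a `raise` for integers might be installed like this:
::: scheme
;; sketch: raise an integer one level in the tower, to a rational
;; number; apply-generic strips the 'integer tag before calling this
(put 'raise '(integer)
     (lambda (n) (make-rational n 1)))
(define (raise x) (apply-generic 'raise x))
:::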
Another advantage of a tower is that we can easily implement the notion
that every type "inherits" all operations defined on a supertype. For
instance, if we do not supply a special procedure for finding the real
part of an integer, we should nevertheless expect that `real/part` will
be defined for integers by virtue of the fact that integers are a
subtype of complex numbers. In a tower, we can arrange for this to
happen in a uniform way by modifying `apply/generic`. If the required
operation is not directly defined for the type of the object given, we
raise the object to its supertype and try again. We thus crawl up the
tower, transforming our argument as we go, until we either find a level
at which the desired operation can be performed or hit the top (in which
case we give up).
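Here is a minimal sketch of this crawl for the one-argument case only, using a hypothetical procedure name `apply-generic-1` and the `raise` operation sketched above:
::: scheme
;; sketch: if op is not defined for the argument's type, raise the
;; argument one level and retry; give up at the top of the tower,
;; where no 'raise operation is installed
(define (apply-generic-1 op arg)
  (let ((type (type-tag arg)))
    (let ((proc (get op (list type))))
      (cond (proc (proc (contents arg)))
            ((get 'raise (list type))
             (apply-generic-1 op (raise arg)))
            (else (error "No method for this type" (list op type)))))))
:::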
Yet another advantage of a tower over a more general hierarchy is that
it gives us a simple way to "lower" a data object to the simplest
representation. For example, if we add $2 + 3i$ to $4 - 3i$, it would be
nice to obtain the answer as the integer 6 rather than as the complex
number $6 + 0i$. [Exercise 2.85](#Exercise 2.85) discusses a way to
implement such a lowering operation. (The trick is that we need a
general way to distinguish those objects that can be lowered, such as
$6 + 0i$, from those that cannot, such as $6 + 2i$.)
[]{#Figure 2.26 label="Figure 2.26"}
![image](fig/chap2/Fig2.26e.pdf){width="96mm"}
**Figure 2.26:** Relations among types of geometric figures.
#### Inadequacies of hierarchies {#inadequacies-of-hierarchies .unnumbered}
If the data types in our system can be naturally arranged in a tower,
this greatly simplifies the problems of dealing with generic operations
on different types, as we have seen. Unfortunately, this is usually not
the case. [Figure 2.26](#Figure 2.26) illustrates a more complex
arrangement of mixed types, this one showing relations among different
types of geometric figures. We see that, in general, a type may have
more than one subtype. Triangles and quadrilaterals, for instance, are
both subtypes of polygons. In addition, a type may have more than one
supertype. For example, an isosceles right triangle may be regarded
either as an isosceles triangle or as a right triangle. This
multiple-supertypes issue is particularly thorny, since it means that
there is no unique way to "raise" a type in the hierarchy. Finding the
"correct" supertype in which to apply an operation to an object may
involve considerable searching through the entire type network on the
part of a procedure such as `apply/generic`. Since there generally are
multiple subtypes for a type, there is a similar problem in coercing a
value "down" the type hierarchy. Dealing with large numbers of
interrelated types while still preserving modularity in the design of
large systems is very difficult, and is an area of much current
research.[^118]
> **[]{#Exercise 2.81 label="Exercise 2.81"}Exercise 2.81:** Louis
> Reasoner has noticed that `apply/generic` may try to coerce the
> arguments to each other's type even if they already have the same
> type. Therefore, he reasons, we need to put procedures in the coercion
> table to *coerce* arguments of each type to their own type. For
> example, in addition to the `scheme/number/>complex` coercion shown
> above, he would do:
>
> ::: scheme
> (define (scheme-number->scheme-number n) n)
> (define (complex->complex z) z)
> (put-coercion 'scheme-number 'scheme-number
>               scheme-number->scheme-number)
> (put-coercion 'complex 'complex complex->complex)
> :::
>
> a. With Louis's coercion procedures installed, what happens if
> `apply/generic` is called with two arguments of type
> `scheme/number` or two arguments of type `complex` for an
> operation that is not found in the table for those types? For
> example, assume that we've defined a generic exponentiation
> operation:
>
> ::: scheme
> (define (exp x y) (apply-generic 'exp x y))
> :::
>
> and have put a procedure for exponentiation in the Scheme-number
> package but not in any other package:
>
> ::: scheme
> ;; following added to Scheme-number package
> (put 'exp '(scheme-number scheme-number)
>      (lambda (x y) (tag (expt x y)))) ; using primitive expt
> :::
>
> What happens if we call `exp` with two complex numbers as
> arguments?
>
> b. Is Louis correct that something had to be done about coercion with
> arguments of the same type, or does `apply/generic` work correctly
> as is?
>
> c. Modify `apply/generic` so that it doesn't try coercion if the two
> arguments have the same type.
> **[]{#Exercise 2.82 label="Exercise 2.82"}Exercise 2.82:** Show how to
> generalize `apply/generic` to handle coercion in the general case of
> multiple arguments. One strategy is to attempt to coerce all the
> arguments to the type of the first argument, then to the type of the
> second argument, and so on. Give an example of a situation where this
> strategy (and likewise the two-argument version given above) is not
> sufficiently general. (Hint: Consider the case where there are some
> suitable mixed-type operations present in the table that will not be
> tried.)
> **[]{#Exercise 2.83 label="Exercise 2.83"}Exercise 2.83:** Suppose you
> are designing a generic arithmetic system for dealing with the tower
> of types shown in [Figure 2.25](#Figure 2.25): integer, rational,
> real, complex. For each type (except complex), design a procedure that
> raises objects of that type one level in the tower. Show how to
> install a generic `raise` operation that will work for each type
> (except complex).
> **[]{#Exercise 2.84 label="Exercise 2.84"}Exercise 2.84:** Using the
> `raise` operation of [Exercise 2.83](#Exercise 2.83), modify the
> `apply/generic` procedure so that it coerces its arguments to have the
> same type by the method of successive raising, as discussed in this
> section. You will need to devise a way to test which of two types is
> higher in the tower. Do this in a manner that is "compatible" with the
> rest of the system and will not lead to problems in adding new levels
> to the tower.
> **[]{#Exercise 2.85 label="Exercise 2.85"}Exercise 2.85:** This
> section mentioned a method for "simplifying" a data object by lowering
> it in the tower of types as far as possible. Design a procedure `drop`
> that accomplishes this for the tower described in [Exercise
> 2.83](#Exercise 2.83). The key is to decide, in some general way,
> whether an object can be lowered. For example, the complex number
> $1.5 + 0i$ can be lowered as far as `real`, the complex number
> $1 + 0i$ can be lowered as far as `integer`, and the complex number
> $2 + 3i$ cannot be lowered at all. Here is a plan for determining
> whether an object can be lowered: Begin by defining a generic
> operation `project` that "pushes" an object down in the tower. For
> example, projecting a complex number would involve throwing away the
> imaginary part. Then a number can be dropped if, when we `project` it
> and `raise` the result back to the type we started with, we end up
> with something equal to what we started with. Show how to implement
> this idea in detail, by writing a `drop` procedure that drops an
> object as far as possible. You will need to design the various
> projection operations[^119] and install `project` as a generic
> operation in the system. You will also need to make use of a generic
> equality predicate, such as described in [Exercise
> 2.79](#Exercise 2.79). Finally, use `drop` to rewrite `apply/generic`
> from [Exercise 2.84](#Exercise 2.84) so that it "simplifies" its
> answers.
> **[]{#Exercise 2.86 label="Exercise 2.86"}Exercise 2.86:** Suppose we
> want to handle complex numbers whose real parts, imaginary parts,
> magnitudes, and angles can be either ordinary numbers, rational
> numbers, or other numbers we might wish to add to the system. Describe
> and implement the changes to the system needed to accommodate this.
> You will have to define operations such as `sine` and `cosine` that
> are generic over ordinary numbers and rational numbers.
### Example: Symbolic Algebra {#Section 2.5.3}
The manipulation of symbolic algebraic expressions is a complex process
that illustrates many of the hardest problems that occur in the design
of large-scale systems. An algebraic expression, in general, can be
viewed as a hierarchical structure, a tree of operators applied to
operands. We can construct algebraic expressions by starting with a set
of primitive objects, such as constants and variables, and combining
these by means of algebraic operators, such as addition and
multiplication. As in other languages, we form abstractions that enable
us to refer to compound objects in simple terms. Typical abstractions in
symbolic algebra are ideas such as linear combination, polynomial,
rational function, or trigonometric function. We can regard these as
compound "types," which are often useful for directing the processing of
expressions. For example, we could describe the expression
$$x^2 \sin (y^2 + 1) + x \cos 2y + \cos(y^3 - 2y^2)$$
as a polynomial in $x$ with coefficients that are trigonometric
functions of polynomials in $y$ whose coefficients are integers.
We will not attempt to develop a complete algebraic-manipulation system
here. Such systems are exceedingly complex programs, embodying deep
algebraic knowledge and elegant algorithms. What we will do is look at a
simple but important part of algebraic manipulation: the arithmetic of
polynomials. We will illustrate the kinds of decisions the designer of
such a system faces, and how to apply the ideas of abstract data and
generic operations to help organize this effort.
#### Arithmetic on polynomials {#arithmetic-on-polynomials .unnumbered}
Our first task in designing a system for performing arithmetic on
polynomials is to decide just what a polynomial is. Polynomials are
normally defined relative to certain variables (the *indeterminates* of
the polynomial). For simplicity, we will restrict ourselves to
polynomials having just one indeterminate (*univariate
polynomials*).[^120] We will define a polynomial to be a sum of terms,
each of which is either a coefficient, a power of the indeterminate, or
a product of a coefficient and a power of the indeterminate. A
coefficient is defined as an algebraic expression that is not dependent
upon the indeterminate of the polynomial. For example,
$$5x^2 + 3x + 7$$
is a simple polynomial in $x$, and
$$(y^2 + 1)x^3 + (2y)x + 1$$
is a polynomial in $x$ whose coefficients are polynomials in $y$.
Already we are skirting some thorny issues. Is the first of these
polynomials the same as the polynomial $5y^2 + 3y + 7$, or not? A
reasonable answer might be "yes, if we are considering a polynomial
purely as a mathematical function, but no, if we are considering a
polynomial to be a syntactic form." The second polynomial is
algebraically equivalent to a polynomial in $y$ whose coefficients are
polynomials in $x$. Should our system recognize this, or not?
Furthermore, there are other ways to represent a polynomial---for
example, as a product of factors, or (for a univariate polynomial) as
the set of roots, or as a listing of the values of the polynomial at a
specified set of points.[^121] We can finesse these questions by
deciding that in our algebraic-manipulation system a "polynomial" will
be a particular syntactic form, not its underlying mathematical meaning.
Now we must consider how to go about doing arithmetic on polynomials. In
this simple system, we will consider only addition and multiplication.
Moreover, we will insist that two polynomials to be combined must have
the same indeterminate.
We will approach the design of our system by following the familiar
discipline of data abstraction. We will represent polynomials using a
data structure called a *poly*, which consists of a variable and a
collection of terms. We assume that we have selectors `variable` and
`term/list` that extract those parts from a poly and a constructor
`make/poly` that assembles a poly from a given variable and a term list.
A variable will be just a symbol, so we can use the `same/variable?`
procedure of [Section 2.3.2](#Section 2.3.2) to compare variables. The
following procedures define addition and multiplication of polys:
::: scheme
(define (add-poly p1 p2)
  (if (same-variable? (variable p1) (variable p2))
      (make-poly (variable p1)
                 (add-terms (term-list p1) (term-list p2)))
      (error "Polys not in same var: ADD-POLY" (list p1 p2))))
(define (mul-poly p1 p2)
  (if (same-variable? (variable p1) (variable p2))
      (make-poly (variable p1)
                 (mul-terms (term-list p1) (term-list p2)))
      (error "Polys not in same var: MUL-POLY" (list p1 p2))))
:::
To incorporate polynomials into our generic arithmetic system, we need
to supply them with type tags. We'll use the tag `polynomial`, and
install appropriate operations on tagged polynomials in the operation
table. We'll embed all our code in an installation procedure for the
polynomial package, similar to the ones in [Section
2.5.1](#Section 2.5.1):
::: scheme
(define (install-polynomial-package)
  ;; internal procedures
  ;; representation of poly
  (define (make-poly variable term-list) (cons variable term-list))
  (define (variable p) (car p))
  (define (term-list p) (cdr p))
  $\color{SchemeDark}\langle$ *procedures *same-variable?* and *variable?* from section 2.3.2* $\color{SchemeDark}\rangle$
  ;; representation of terms and term lists
  $\color{SchemeDark}\langle$ *procedures *adjoin-term* $\dots$ *coeff* from text below* $\color{SchemeDark}\rangle$
  (define (add-poly p1 p2) $\dots$ )
  $\color{SchemeDark}\langle$ *procedures used by *add-poly** $\color{SchemeDark}\rangle$
  (define (mul-poly p1 p2) $\dots$ )
  $\color{SchemeDark}\langle$ *procedures used by *mul-poly** $\color{SchemeDark}\rangle$
  ;; interface to rest of the system
  (define (tag p) (attach-tag 'polynomial p))
  (put 'add '(polynomial polynomial)
       (lambda (p1 p2) (tag (add-poly p1 p2))))
  (put 'mul '(polynomial polynomial)
       (lambda (p1 p2) (tag (mul-poly p1 p2))))
  (put 'make 'polynomial
       (lambda (var terms) (tag (make-poly var terms))))
  'done)
:::
Polynomial addition is performed termwise. Terms of the same order
(i.e., with the same power of the indeterminate) must be combined. This
is done by forming a new term of the same order whose coefficient is the
sum of the coefficients of the addends. Terms in one addend for which
there are no terms of the same order in the other addend are simply
accumulated into the sum polynomial being constructed.
In order to manipulate term lists, we will assume that we have a
constructor `the/empty/termlist` that returns an empty term list and a
constructor `adjoin/term` that adjoins a new term to a term list. We
will also assume that we have a predicate `empty/termlist?` that tells
if a given term list is empty, a selector `first/term` that extracts the
highest-order term from a term list, and a selector `rest/terms` that
returns all but the highest-order term. To manipulate terms, we will
suppose that we have a constructor `make/term` that constructs a term
with given order and coefficient, and selectors `order` and `coeff` that
return, respectively, the order and the coefficient of the term. These
operations allow us to consider both terms and term lists as data
abstractions, whose concrete representations we can worry about
separately.
Here is the procedure that constructs the term list for the sum of two
polynomials:[^122]
::: scheme
(define (add-terms L1 L2)
  (cond ((empty-termlist? L1) L2)
        ((empty-termlist? L2) L1)
        (else
         (let ((t1 (first-term L1)) (t2 (first-term L2)))
           (cond ((> (order t1) (order t2))
                  (adjoin-term t1 (add-terms (rest-terms L1) L2)))
                 ((< (order t1) (order t2))
                  (adjoin-term t2 (add-terms L1 (rest-terms L2))))
                 (else
                  (adjoin-term
                   (make-term (order t1) (add (coeff t1) (coeff t2)))
                   (add-terms (rest-terms L1) (rest-terms L2)))))))))
:::
The most important point to note here is that we used the generic
addition procedure `add` to add together the coefficients of the terms
being combined. This has powerful consequences, as we will see below.
In order to multiply two term lists, we multiply each term of the first
list by all the terms of the other list, repeatedly using
`mul/term/by/all/terms`, which multiplies a given term by all terms in a
given term list. The resulting term lists (one for each term of the
first list) are accumulated into a sum. Multiplying two terms forms a
term whose order is the sum of the orders of the factors and whose
coefficient is the product of the coefficients of the factors:
::: scheme
(define (mul-terms L1 L2)
  (if (empty-termlist? L1)
      (the-empty-termlist)
      (add-terms (mul-term-by-all-terms (first-term L1) L2)
                 (mul-terms (rest-terms L1) L2))))
(define (mul-term-by-all-terms t1 L)
  (if (empty-termlist? L)
      (the-empty-termlist)
      (let ((t2 (first-term L)))
        (adjoin-term (make-term (+ (order t1) (order t2))
                                (mul (coeff t1) (coeff t2)))
                     (mul-term-by-all-terms t1 (rest-terms L))))))
:::
This is really all there is to polynomial addition and multiplication.
Notice that, since we operate on terms using the generic procedures
`add` and `mul`, our polynomial package is automatically able to handle
any type of coefficient that is known about by the generic arithmetic
package. If we include a coercion mechanism such as one of those
discussed in [Section 2.5.2](#Section 2.5.2), then we also are
automatically able to handle operations on polynomials of different
coefficient types, such as
$$[3x^2 + (2 + 3i)x + 7] \cdot \! \left[ x^4 + {2\over3} x^2 + (5 + 3i) \right]\!.$$
Because we installed the polynomial addition and multiplication
procedures `add/poly` and `mul/poly` in the generic arithmetic system as
the `add` and `mul` operations for type `polynomial`, our system is also
automatically able to handle polynomial operations such as
$$\Big[(y + 1)x^2 + (y^2 + 1)x + (y - 1)\Big] \cdot \Big[(y - 2)x + (y^3 + 7)\Big]\!.$$
The reason is that when the system tries to combine coefficients, it
will dispatch through `add` and `mul`. Since the coefficients are
themselves polynomials (in $y$), these will be combined using `add/poly`
and `mul/poly`. The result is a kind of "data-directed recursion" in
which, for example, a call to `mul/poly` will result in recursive calls
to `mul/poly` in order to multiply the coefficients. If the coefficients
of the coefficients were themselves polynomials (as might be used to
represent polynomials in three variables), the data direction would
ensure that the system would follow through another level of recursive
calls, and so on through as many levels as the structure of the data
dictates.[^123]
#### Representing term lists {#representing-term-lists .unnumbered}
Finally, we must confront the job of implementing a good representation
for term lists. A term list is, in effect, a set of coefficients keyed
by the order of the term. Hence, any of the methods for representing
sets, as discussed in [Section 2.3.3](#Section 2.3.3), can be applied to
this task. On the other hand, our procedures `add/terms` and `mul/terms`
always access term lists sequentially from highest to lowest order.
Thus, we will use some kind of ordered list representation.
How should we structure the list that represents a term list? One
consideration is the "density" of the polynomials we intend to
manipulate. A polynomial is said to be *dense* if it has nonzero
coefficients in terms of most orders. If it has many zero terms it is
said to be *sparse*. For example,
$$A: \quad x^5 + 2x^4 + 3x^2 - 2x - 5$$
is a dense polynomial, whereas
$$B: \quad x^{100} + 2x^2 + 1$$
is sparse.
The term lists of dense polynomials are most efficiently represented as
lists of the coefficients. For example, $A$ above would be nicely
represented as `(1 2 0 3 -2 -5)`. The order of a term in this
representation is the length of the sublist beginning with that term's
coefficient, decremented by 1.[^124] This would be a terrible
representation for a sparse polynomial such as $B$: There would be a
giant list of zeros punctuated by a few lonely nonzero terms. A more
reasonable representation of the term list of a sparse polynomial is as
a list of the nonzero terms, where each term is a list containing the
order of the term and the coefficient for that order. In such a scheme,
polynomial $B$ is efficiently represented as `((100 1) (2 2) (0 1))`. As
most polynomial manipulations are performed on sparse polynomials, we
will use this method. We will assume that term lists are represented as
lists of terms, arranged from highest-order to lowest-order term. Once
we have made this decision, implementing the selectors and constructors
for terms and term lists is straightforward:[^125]
::: scheme
(define (adjoin-term term term-list)
  (if (=zero? (coeff term))
      term-list
      (cons term term-list)))
(define (the-empty-termlist) '())
(define (first-term term-list) (car term-list))
(define (rest-terms term-list) (cdr term-list))
(define (empty-termlist? term-list) (null? term-list))
(define (make-term order coeff) (list order coeff))
(define (order term) (car term))
(define (coeff term) (cadr term))
:::
where `=zero?` is as defined in [Exercise 2.80](#Exercise 2.80). (See
also [Exercise 2.87](#Exercise 2.87) below.)
Users of the polynomial package will create (tagged) polynomials by
means of the procedure:
::: scheme
(define (make-polynomial var terms) ((get 'make 'polynomial) var terms))
:::
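For example (a sketch, assuming the [Exercise 2.78](#Exercise 2.78) and [Exercise 2.80](#Exercise 2.80) extensions are in place so that plain Scheme numbers can serve directly as coefficients), the polynomial $5x^2 + 3x + 7$ from the beginning of this section, and a polynomial in $x$ with a polynomial coefficient, can be built as follows:
::: scheme
(define p (make-polynomial 'x '((2 5) (1 3) (0 7))))
;; (y + 1)x^2 + 5: the x^2 coefficient is itself a tagged
;; polynomial in y
(define q
  (make-polynomial
   'x
   (list (list 2 (make-polynomial 'y '((1 1) (0 1))))
         '(0 5))))
(add p p)
*(polynomial x (2 10) (1 6) (0 14))*
:::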
> **[]{#Exercise 2.87 label="Exercise 2.87"}Exercise 2.87:** Install
> `=zero?` for polynomials in the generic arithmetic package. This will
> allow `adjoin/term` to work for polynomials with coefficients that are
> themselves polynomials.
> **[]{#Exercise 2.88 label="Exercise 2.88"}Exercise 2.88:** Extend the
> polynomial system to include subtraction of polynomials. (Hint: You
> may find it helpful to define a generic negation operation.)
> **[]{#Exercise 2.89 label="Exercise 2.89"}Exercise 2.89:** Define
> procedures that implement the term-list representation described above
> as appropriate for dense polynomials.
> **[]{#Exercise 2.90 label="Exercise 2.90"}Exercise 2.90:** Suppose we
> want to have a polynomial system that is efficient for both sparse and
> dense polynomials. One way to do this is to allow both kinds of
> term-list representations in our system. The situation is analogous to
> the complex-number example of [Section 2.4](#Section 2.4), where we
> allowed both rectangular and polar representations. To do this we must
> distinguish different types of term lists and make the operations on
> term lists generic. Redesign the polynomial system to implement this
> generalization. This is a major effort, not a local change.
> **[]{#Exercise 2.91 label="Exercise 2.91"}Exercise 2.91:** A
> univariate polynomial can be divided by another one to produce a
> polynomial quotient and a polynomial remainder. For example,
>
> $${x^5 - 1 \over x^2 - 1} = x^3 + x, \hbox{ remainder } x - 1.$$
>
> Division can be performed via long division. That is, divide the
> highest-order term of the dividend by the highest-order term of the
> divisor. The result is the first term of the quotient. Next, multiply
> the result by the divisor, subtract that from the dividend, and
> produce the rest of the answer by recursively dividing the difference
> by the divisor. Stop when the order of the divisor exceeds the order
> of the dividend and declare the dividend to be the remainder. Also, if
> the dividend ever becomes zero, return zero as both quotient and
> remainder.
>
> We can design a `div/poly` procedure on the model of `add/poly` and
> `mul/poly`. The procedure checks to see if the two polys have the same
> variable. If so, `div/poly` strips off the variable and passes the
> problem to `div/terms`, which performs the division operation on term
> lists. `div/poly` finally reattaches the variable to the result
> supplied by `div/terms`. It is convenient to design `div/terms` to
> compute both the quotient and the remainder of a division. `div/terms`
> can take two term lists as arguments and return a list of the quotient
> term list and the remainder term list.
>
> Complete the following definition of `div/terms` by filling in the
> missing expressions. Use this to implement `div/poly`, which takes two
> polys as arguments and returns a list of the quotient and remainder
> polys.
>
> ::: smallscheme
> (define (div-terms L1 L2)
>   (if (empty-termlist? L1)
>       (list (the-empty-termlist) (the-empty-termlist))
>       (let ((t1 (first-term L1)) (t2 (first-term L2)))
>         (if (> (order t2) (order t1))
>             (list (the-empty-termlist) L1)
>             (let ((new-c (div (coeff t1) (coeff t2)))
>                   (new-o (- (order t1) (order t2))))
>               (let ((rest-of-result
>                      $\langle$ *compute rest of result recursively* $\rangle$ ))
>                 $\langle$ *form complete result* $\rangle$ ))))))
> :::
#### Hierarchies of types in symbolic algebra {#hierarchies-of-types-in-symbolic-algebra .unnumbered}
Our polynomial system illustrates how objects of one type (polynomials)
may in fact be complex objects that have objects of many different types
as parts. This poses no real difficulty in defining generic operations.
We need only install appropriate generic operations for performing the
necessary manipulations of the parts of the compound types. In fact, we
saw that polynomials form a kind of "recursive data abstraction," in
that parts of a polynomial may themselves be polynomials. Our generic
operations and our data-directed programming style can handle this
complication without much trouble.
On the other hand, polynomial algebra is a system for which the data
types cannot be naturally arranged in a tower. For instance, it is
possible to have polynomials in $x$ whose coefficients are polynomials
in $y$. It is also possible to have polynomials in $y$ whose
coefficients are polynomials in $x$. Neither of these types is "above"
the other in any natural way, yet it is often necessary to add together
elements from each set. There are several ways to do this. One
possibility is to convert one polynomial to the type of the other by
expanding and rearranging terms so that both polynomials have the same
principal variable. One can impose a towerlike structure on this by
ordering the variables and thus always converting any polynomial to a
"canonical form" with the highest-priority variable dominant and the
lower-priority variables buried in the coefficients. This strategy works
fairly well, except that the conversion may expand a polynomial
unnecessarily, making it hard to read and perhaps less efficient to work
with. The tower strategy is certainly not natural for this domain or for
any domain where the user can invent new types dynamically using old
types in various combining forms, such as trigonometric functions, power
series, and integrals.
It should not be surprising that controlling coercion is a serious
problem in the design of large-scale algebraic-manipulation systems.
Much of the complexity of such systems is concerned with relationships
among diverse types. Indeed, it is fair to say that we do not yet
completely understand coercion. In fact, we do not yet completely
understand the concept of a data type. Nevertheless, what we know
provides us with powerful structuring and modularity principles to
support the design of large systems.
> **[]{#Exercise 2.92 label="Exercise 2.92"}Exercise 2.92:** By imposing
> an ordering on variables, extend the polynomial package so that
> addition and multiplication of polynomials works for polynomials in
> different variables. (This is not easy!)
#### Extended exercise: Rational functions {#extended-exercise-rational-functions .unnumbered}
We can extend our generic arithmetic system to include *rational
functions*. These are "fractions" whose numerator and denominator are
polynomials, such as
$${x + 1 \over x^3 - 1}\,.$$
The system should be able to add, subtract, multiply, and divide
rational functions, and to perform such computations as
$${x + 1 \over x^3 - 1} + {x \over x^2 - 1} =
{x^3 + 2x^2 + 3x + 1 \over x^4 + x^3 - x - 1}\,.$$
(Here the sum has been simplified by removing common factors. Ordinary
"cross multiplication" would have produced a fourth-degree polynomial
over a fifth-degree polynomial.)
If we modify our rational-arithmetic package so that it uses generic
operations, then it will do what we want, except for the problem of
reducing fractions to lowest terms.
> **[]{#Exercise 2.93 label="Exercise 2.93"}Exercise 2.93:** Modify the
> rational-arithmetic package to use generic operations, but change
> `make/rat` so that it does not attempt to reduce fractions to lowest
> terms. Test your system by calling `make/rational` on two polynomials
> to produce a rational function:
>
> ::: scheme
> (define p1 (make-polynomial 'x '((2 1) (0 1))))
> (define p2 (make-polynomial 'x '((3 1) (0 1))))
> (define rf (make-rational p2 p1))
> :::
>
> Now add `rf` to itself, using `add`. You will observe that this
> addition procedure does not reduce fractions to lowest terms.
We can reduce polynomial fractions to lowest terms using the same idea
we used with integers: modifying `make/rat` to divide both the numerator
and the denominator by their greatest common divisor. The notion of
"greatest common divisor" makes sense for polynomials. In fact, we can
compute the gcd of two polynomials using essentially the
same Euclid's Algorithm that works for integers.[^126] The integer
version is
::: scheme
(define (gcd a b) (if (= b 0) a (gcd b (remainder a b))))
:::
Using this, we could make the obvious modification to define a
gcd operation that works on term lists:
::: scheme
(define (gcd-terms a b)
  (if (empty-termlist? b)
      a
      (gcd-terms b (remainder-terms a b))))
:::
where `remainder/terms` picks out the remainder component of the list
returned by the term-list division operation `div/terms` that was
implemented in [Exercise 2.91](#Exercise 2.91).
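Since `div/terms` returns a list of the quotient term list and the remainder term list, a minimal sketch of such a selector (hypothetical here; defining it is part of [Exercise 2.94](#Exercise 2.94)) is simply:
::: scheme
;; sketch: take the remainder component (second element) of the
;; list returned by div-terms
(define (remainder-terms a b)
  (cadr (div-terms a b)))
:::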
> **[]{#Exercise 2.94 label="Exercise 2.94"}Exercise 2.94:** Using
> `div/terms`, implement the procedure `remainder/terms` and use this to
> define `gcd/terms` as above. Now write a procedure `gcd/poly` that
> computes the polynomial gcd of two polys. (The procedure
> should signal an error if the two polys are not in the same variable.)
> Install in the system a generic operation `greatest/common/divisor`
> that reduces to `gcd/poly` for polynomials and to ordinary `gcd` for
> ordinary numbers. As a test, try
>
> ::: scheme
> (define p1 (make-polynomial 'x '((4 1) (3 -1) (2 -2) (1 2))))
> (define p2 (make-polynomial 'x '((3 1) (1 -1))))
> (greatest-common-divisor p1 p2)
> :::
>
> and check your result by hand.
> **[]{#Exercise 2.95 label="Exercise 2.95"}Exercise 2.95:** Define
> $P_1$, $P_2$, and $P_3$ to be the polynomials
>
> $$\begin{array}{l@{{}:}l}
> P_1 & \quad x^2 - 2x + 1, \\
> P_2 & \quad 11x^2 + 7, \\
> P_3 & \quad 13x + 5.
> \end{array}$$
>
> Now define $Q_1$ to be the product of $P_1$ and $P_2$ and $Q_2$ to be
> the product of $P_1$ and $P_3$, and use `greatest/common/divisor`
> ([Exercise 2.94](#Exercise 2.94)) to compute the gcd of
> $Q_1$ and $Q_2$. Note that the answer is not the same as $P_1$. This
> example introduces noninteger operations into the computation, causing
> difficulties with the gcd algorithm.[^127] To understand
> what is happening, try tracing `gcd/terms` while computing the
> gcd or try performing the division by hand.
We can solve the problem exhibited in [Exercise 2.95](#Exercise 2.95) if
we use the following modification of the gcd algorithm
(which really works only in the case of polynomials with integer
coefficients). Before performing any polynomial division in the
gcd computation, we multiply the dividend by an integer
constant factor, chosen to guarantee that no fractions will arise during
the division process. Our answer will thus differ from the actual
gcd by an integer constant factor, but this does not
matter in the case of reducing rational functions to lowest terms; the
gcd will be used to divide both the numerator and
denominator, so the integer constant factor will cancel out.
More precisely, if $P$ and $Q$ are polynomials, let $O_1$ be the order
of $P$ (i.e., the order of the largest term of $P$) and let $O_2$ be the
order of $Q$. Let $c$ be the leading coefficient of $Q$. Then it can be
shown that, if we multiply $P$ by the *integerizing factor*
$c^{1 + O_1 - O_2}$, the resulting polynomial can be divided by $Q$ by
using the `div/terms` algorithm without introducing any fractions. The
operation of multiplying the dividend by this constant and then dividing
is sometimes called the *pseudodivision* of $P$ by $Q$. The remainder of
the division is called the *pseudoremainder*.
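For instance (a small worked example, not from the original text), take $P = x^2 + 1$ and $Q = 2x$, so that $O_1 = 2$, $O_2 = 1$, and $c = 2$. The integerizing factor is $c^{1 + O_1 - O_2} = 2^2 = 4$, and indeed
$$4(x^2 + 1) = (2x)(2x) + 4,$$
so the pseudodivision of $P$ by $Q$ yields the quotient $2x$ and the pseudoremainder $4$, whereas dividing $x^2 + 1$ by $2x$ directly would have introduced the fractional coefficient $1/2$.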
> **[]{#Exercise 2.96 label="Exercise 2.96"}Exercise 2.96:**
>
> a. Implement the procedure `pseudoremainder/terms`, which is just
> like `remainder/terms` except that it multiplies the dividend by
> the integerizing factor described above before calling
> `div/terms`. Modify `gcd/terms` to use `pseudoremainder/terms`,
> and verify that `greatest/common/divisor` now produces an answer
> with integer coefficients on the example in [Exercise
> 2.95](#Exercise 2.95).
>
> b. The gcd now has integer coefficients, but they are
> larger than those of $P_1$. Modify `gcd/terms` so that it removes
> common factors from the coefficients of the answer by dividing all
> the coefficients by their (integer) greatest common divisor.
Thus, here is how to reduce a rational function to lowest terms:
- Compute the gcd of the numerator and denominator,
using the version of `gcd/terms` from [Exercise
2.96](#Exercise 2.96).
- When you obtain the gcd, multiply both numerator and
denominator by the same integerizing factor before dividing through
by the gcd, so that division by the gcd
will not introduce any noninteger coefficients. As the factor you
can use the leading coefficient of the gcd raised to
the power $1 + O_1 - O_2$, where $O_2$ is the order of the
gcd and $O_1$ is the maximum of the orders of the
numerator and denominator. This will ensure that dividing the
numerator and denominator by the gcd will not
introduce any fractions.
- The result of this operation will be a numerator and denominator
with integer coefficients. The coefficients will normally be very
large because of all of the integerizing factors, so the last step
is to remove the redundant factors by computing the (integer)
greatest common divisor of all the coefficients of the numerator and
the denominator and dividing through by this factor.
> **[]{#Exercise 2.97 label="Exercise 2.97"}Exercise 2.97:**
>
> a. Implement this algorithm as a procedure `reduce/terms` that takes
> two term lists `n` and `d` as arguments and returns a list `nn`,
> `dd`, which are `n` and `d` reduced to lowest terms via the
> algorithm given above. Also write a procedure `reduce/poly`,
> analogous to `add/poly`, that checks to see if the two polys have
> the same variable. If so, `reduce/poly` strips off the variable
> and passes the problem to `reduce/terms`, then reattaches the
> variable to the two term lists supplied by `reduce/terms`.
>
> b. Define a procedure analogous to `reduce/terms` that does what the
> original `make/rat` did for integers:
>
> ::: scheme
> (define (reduce-integers n d)
>   (let ((g (gcd n d)))
>     (list (/ n g) (/ d g))))
> :::
>
> and define `reduce` as a generic operation that calls
> `apply/generic` to dispatch to either `reduce/poly` (for
> `polynomial` arguments) or `reduce/integers` (for `scheme/number`
> arguments). You can now easily make the rational-arithmetic
> package reduce fractions to lowest terms by having `make/rat` call
> `reduce` before combining the given numerator and denominator to
> form a rational number. The system now handles rational
> expressions in either integers or polynomials. To test your
> program, try the example at the beginning of this extended
> exercise:
>
> ::: scheme
> (define p1 (make-polynomial 'x '((1 1) (0 1))))
> (define p2 (make-polynomial 'x '((3 1) (0 -1))))
> (define p3 (make-polynomial 'x '((1 1))))
> (define p4 (make-polynomial 'x '((2 1) (0 -1))))
> (define rf1 (make-rational p1 p2))
> (define rf2 (make-rational p3 p4))
> (add rf1 rf2)
> :::
>
> See if you get the correct answer, correctly reduced to lowest
> terms.
The gcd computation is at the heart of any system that
does operations on rational functions. The algorithm used above,
although mathematically straightforward, is extremely slow. The slowness
is due partly to the large number of division operations and partly to
the enormous size of the intermediate coefficients generated by the
pseudodivisions. One of the active areas in the development of
algebraic-manipulation systems is the design of better algorithms for
computing polynomial gcds.[^128]
# Modularity, Objects, and State {#Chapter 3}
> Μεταβάλλον ἀναπαύεται\
> (Even while it changes, it stands still.)\
> ---Heraclitus
> Plus ça change, plus c'est la même chose.\
> (The more it changes, the more it stays the same.)\
> ---Alphonse Karr
The preceding chapters introduced the basic
elements from which programs are made. We saw how primitive procedures
and primitive data are combined to construct compound entities, and we
learned that abstraction is vital in helping us to cope with the
complexity of large systems. But these tools are not sufficient for
designing programs. Effective program synthesis also requires
organizational principles that can guide us in formulating the overall
design of a program. In particular, we need strategies to help us
structure large systems so that they will be *modular*, that is, so that
they can be divided "naturally" into coherent parts that can be
separately developed and maintained.
One powerful design strategy, which is particularly appropriate to the
construction of programs for modeling physical systems, is to base the
structure of our programs on the structure of the system being modeled.
For each object in the system, we construct a corresponding
computational object. For each system action, we define a symbolic
operation in our computational model. Our hope in using this strategy is
that extending the model to accommodate new objects or new actions will
require no strategic changes to the program, only the addition of the
new symbolic analogs of those objects or actions. If we have been
successful in our system organization, then to add a new feature or
debug an old one we will have to work on only a localized part of the
system.
To a large extent, then, the way we organize a large program is dictated
by our perception of the system to be modeled. In this chapter we will
investigate two prominent organizational strategies arising from two
rather different "world views" of the structure of systems. The first
organizational strategy concentrates on *objects*, viewing a large
system as a collection of distinct objects whose behaviors may change
over time. An alternative organizational strategy concentrates on the
*streams* of information that flow in the system, much as an electrical
engineer views a signal-processing system.
Both the object-based approach and the stream-processing approach raise
significant linguistic issues in programming. With objects, we must be
concerned with how a computational object can change and yet maintain
its identity. This will force us to abandon our old substitution model
of computation ([Section 1.1.5](#Section 1.1.5)) in favor of a more
mechanistic but less theoretically tractable *environment model* of
computation. The difficulties of dealing with objects, change, and
identity are a fundamental consequence of the need to grapple with time
in our computational models. These difficulties become even greater when
we allow the possibility of concurrent execution of programs. The stream
approach can be most fully exploited when we decouple simulated time in
our model from the order of the events that take place in the computer
during evaluation. We will accomplish this using a technique known as
*delayed evaluation*.
## Assignment and Local State {#Section 3.1}
We ordinarily view the world as populated by independent objects, each
of which has a state that changes over time. An object is said to "have
state" if its behavior is influenced by its history. A bank account, for
example, has state in that the answer to the question "Can I withdraw
\$100?" depends upon the history of deposit and withdrawal transactions.
We can characterize an object's state by one or more *state variables*,
which among them maintain enough information about history to determine
the object's current behavior. In a simple banking system, we could
characterize the state of an account by a current balance rather than by
remembering the entire history of account transactions.
In a system composed of many objects, the objects are rarely completely
independent. Each may influence the states of others through
interactions, which serve to couple the state variables of one object to
those of other objects. Indeed, the view that a system is composed of
separate objects is most useful when the state variables of the system
can be grouped into closely coupled subsystems that are only loosely
coupled to other subsystems.
This view of a system can be a powerful framework for organizing
computational models of the system. For such a model to be modular, it
should be decomposed into computational objects that model the actual
objects in the system. Each computational object must have its own
*local state variables* describing the actual object's state. Since the
states of objects in the system being modeled change over time, the
state variables of the corresponding computational objects must also
change. If we choose to model the flow of time in the system by the
elapsed time in the computer, then we must have a way to construct
computational objects whose behaviors change as our programs run. In
particular, if we wish to model state variables by ordinary symbolic
names in the programming language, then the language must provide an
*assignment operator* to enable us to change the value associated with a
name.
### Local State Variables {#Section 3.1.1}
To illustrate what we mean by having a computational object with
time-varying state, let us model the situation of withdrawing money from
a bank account. We will do this using a procedure `withdraw`, which
takes as argument an `amount` to be withdrawn. If there is enough money
in the account to accommodate the withdrawal, then `withdraw` should
return the balance remaining after the withdrawal. Otherwise, `withdraw`
should return the message *Insufficient funds*. For example, if we begin
with \$100 in the account, we should obtain the following sequence of
responses using `withdraw`:
::: scheme
(withdraw 25)
*75*
(withdraw 25)
*50*
(withdraw 60)
*"Insufficient funds"*
(withdraw 15)
*35*
:::
Observe that the expression `(withdraw 25)`, evaluated twice, yields
different values. This is a new kind of behavior for a procedure. Until
now, all our procedures could be viewed as specifications for computing
mathematical functions. A call to a procedure computed the value of the
function applied to the given arguments, and two calls to the same
procedure with the same arguments always produced the same result.[^129]
To implement `withdraw`, we can use a variable `balance` to indicate the
balance of money in the account and define `withdraw` as a procedure
that accesses `balance`. The `withdraw` procedure checks to see if
`balance` is at least as large as the requested `amount`. If so,
`withdraw` decrements `balance` by `amount` and returns the new value of
`balance`. Otherwise, `withdraw` returns the *Insufficient funds*
message. Here are the definitions of `balance` and `withdraw`:
::: scheme
(define balance 100)
(define (withdraw amount)
  (if (>= balance amount)
      (begin (set! balance (- balance amount))
             balance)
      "Insufficient funds"))
:::
Decrementing `balance` is accomplished by the expression
::: scheme
(set! balance (- balance amount))
:::
This uses the `set!` special form, whose syntax is
::: scheme
(set! $\color{SchemeDark}\langle$ *name* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *new-value* $\color{SchemeDark}\rangle$ )
:::
Here $\langle$*name*$\kern0.04em\rangle$ is a symbol and
$\langle$*new-value*$\kern0.04em\rangle$ is any expression. `set!`
changes $\langle$*name*$\kern0.04em\rangle$ so that its value is the
result obtained by evaluating $\langle$*new-value*$\kern0.04em\rangle$.
In the case at hand, we are changing `balance` so that its new value
will be the result of subtracting `amount` from the previous value of
`balance`.[^130]
`withdraw` also uses the `begin` special form to cause two expressions
to be evaluated in the case where the `if` test is true: first
decrementing `balance` and then returning the value of `balance`. In
general, evaluating the expression
::: scheme
(begin
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize k}}\rangle$ )
:::
causes the expressions $\langle\kern0.06em$*exp*$_1\rangle$ through
$\langle\kern0.06em$*exp*$_k\rangle$ to be evaluated in sequence and the
value of the final expression $\langle\kern0.06em$*exp*$_k\rangle$ to be
returned as the value of the entire `begin` form.[^131]
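For instance, a small illustrative use of `begin` (the particular expressions here are chosen only for demonstration) evaluates the `display` and `newline` calls for their effects and yields the value of the final expression:

::: scheme
(begin (display "two plus one is")
       (newline)
       (+ 2 1))
*3*
:::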
Although `withdraw` works as desired, the variable `balance` presents a
problem. As specified above, `balance` is a name defined in the global
environment and is freely accessible to be examined or modified by any
procedure. It would be much better if we could somehow make `balance`
internal to `withdraw`, so that `withdraw` would be the only procedure
that could access `balance` directly and any other procedure could
access `balance` only indirectly (through calls to `withdraw`). This
would more accurately model the notion that `balance` is a local state
variable used by `withdraw` to keep track of the state of the account.
We can make `balance` internal to `withdraw` by rewriting the definition
as follows:
::: scheme
(define new-withdraw
  (let ((balance 100))
    (lambda (amount)
      (if (\>= balance amount)
          (begin (set! balance (- balance amount))
                 balance)
          \"Insufficient funds\"))))
:::
What we have done here is use `let` to establish an environment with a
local variable `balance`, bound to the initial value 100. Within this
local environment, we use `lambda` to create a procedure that takes
`amount` as an argument and behaves like our previous `withdraw`
procedure. This procedure---returned as the result of evaluating the
`let` expression---is `new/withdraw`, which behaves in precisely the
same way as `withdraw` but whose variable `balance` is not accessible by
any other procedure.[^132]
Combining `set!` with local variables is the general programming
technique we will use for constructing computational objects with local
state. Unfortunately, using this technique raises a serious problem:
When we first introduced procedures, we also introduced the substitution
model of evaluation ([Section 1.1.5](#Section 1.1.5)) to provide an
interpretation of what procedure application means. We said that
applying a procedure should be interpreted as evaluating the body of the
procedure with the formal parameters replaced by their values. The
trouble is that, as soon as we introduce assignment into our language,
substitution is no longer an adequate model of procedure application.
(We will see why this is so in [Section 3.1.3](#Section 3.1.3).) As a
consequence, we technically have at this point no way to understand why
the `new/withdraw` procedure behaves as claimed above. In order to
really understand a procedure such as `new/withdraw`, we will need to
develop a new model of procedure application. In [Section
3.2](#Section 3.2) we will introduce such a model, together with an
explanation of `set!` and local variables. First, however, we examine
some variations on the theme established by `new/withdraw`.
The following procedure, `make/withdraw`, creates "withdrawal
processors." The formal parameter `balance` in `make/withdraw` specifies
the initial amount of money in the account.[^133]
::: scheme
(define (make-withdraw balance)
  (lambda (amount)
    (if (\>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        \"Insufficient funds\")))
:::
`make/withdraw` can be used as follows to create two objects `W1` and
`W2`:
::: scheme
(define W1 (make-withdraw 100))
(define W2 (make-withdraw 100))

(W1 50)
*50*
(W2 70)
*30*
(W2 40)
*\"Insufficient funds\"*
(W1 40)
*10*
:::
Observe that `W1` and `W2` are completely independent objects, each with
its own local state variable `balance`. Withdrawals from one do not
affect the other.
We can also create objects that handle deposits as well as withdrawals,
and thus we can represent simple bank accounts. Here is a procedure that
returns a "bank-account object" with a specified initial balance:
::: scheme
(define (make-account balance)
  (define (withdraw amount)
    (if (\>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        \"Insufficient funds\"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error \"Unknown request: MAKE-ACCOUNT\" m))))
  dispatch)
:::
Each call to `make/account` sets up an environment with a local state
variable `balance`. Within this environment, `make/account` defines
procedures `deposit` and `withdraw` that access `balance` and an
additional procedure `dispatch` that takes a "message" as input and
returns one of the two local procedures. The `dispatch` procedure itself
is returned as the value that represents the bank-account object. This
is precisely the *message-passing* style of programming that we saw in
[Section 2.4.3](#Section 2.4.3), although here we are using it in
conjunction with the ability to modify local variables.
`make/account` can be used as follows:
::: scheme
(define acc (make-account 100))

((acc 'withdraw) 50)
*50*
((acc 'withdraw) 60)
*\"Insufficient funds\"*
((acc 'deposit) 40)
*90*
((acc 'withdraw) 60)
*30*
:::
Each call to `acc` returns the locally defined `deposit` or `withdraw`
procedure, which is then applied to the specified `amount`. As was the
case with `make/withdraw`, another call to `make/account`
::: scheme
(define acc2 (make-account 100))
:::
will produce a completely separate account object, which maintains its
own local `balance`.
> **[]{#Exercise 3.1 label="Exercise 3.1"}Exercise 3.1:** An
> *accumulator* is a procedure that is called repeatedly with a single
> numeric argument and accumulates its arguments into a sum. Each time
> it is called, it returns the currently accumulated sum. Write a
> procedure `make/accumulator` that generates accumulators, each
> maintaining an independent sum. The input to `make/accumulator` should
> specify the initial value of the sum; for example
>
> ::: scheme
> (define A (make-accumulator 5))
> (A 10)
> *15*
> (A 10)
> *25*
> :::
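One possible sketch of `make-accumulator`, shown here only as an illustration: it uses its formal parameter as the local state variable, in the same way that `make-withdraw` above uses `balance`.

::: scheme
(define (make-accumulator sum)
  (lambda (amount)
    ;; add the new amount into the running sum and return it
    (set! sum (+ sum amount))
    sum))
:::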
> **[]{#Exercise 3.2 label="Exercise 3.2"}Exercise 3.2:** In
> software-testing applications, it is useful to be able to count the
> number of times a given procedure is called during the course of a
> computation. Write a procedure `make/monitored` that takes as input a
> procedure, `f`, that itself takes one input. The result returned by
> `make/monitored` is a third procedure, say `mf`, that keeps track of
> the number of times it has been called by maintaining an internal
> counter. If the input to `mf` is the special symbol `how/many/calls?`,
> then `mf` returns the value of the counter. If the input is the
> special symbol `reset/count`, then `mf` resets the counter to zero.
> For any other input, `mf` returns the result of calling `f` on that
> input and increments the counter. For instance, we could make a
> monitored version of the `sqrt` procedure:
>
> ::: scheme
> (define s (make-monitored sqrt))
> (s 100)
> *10*
> (s 'how-many-calls?)
> *1*
> :::
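One way to write `make-monitored` is sketched below; the internal variable name `count` and the exact value returned by `reset-count` are illustrative choices rather than requirements of the exercise.

::: scheme
(define (make-monitored f)
  (let ((count 0))
    (lambda (input)
      (cond ((eq? input 'how-many-calls?) count)
            ((eq? input 'reset-count)
             (set! count 0)
             count)
            (else
             ;; ordinary call: count it, then apply f
             (set! count (+ count 1))
             (f input))))))
:::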
> **[]{#Exercise 3.3 label="Exercise 3.3"}Exercise 3.3:** Modify the
> `make/account` procedure so that it creates password-protected
> accounts. That is, `make/account` should take a symbol as an
> additional argument, as in
>
> ::: scheme
> (define acc (make-account 100 'secret-password))
> :::
>
> The resulting account object should process a request only if it is
> accompanied by the password with which the account was created, and
> should otherwise return a complaint:
>
> ::: scheme
> ((acc 'secret-password 'withdraw) 40)
> *60*
> ((acc 'some-other-password 'deposit) 50)
> *\"Incorrect password\"*
> :::
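A possible sketch of the password-protected version, adapting the `make-account` dispatcher shown earlier: when the password does not match, `dispatch` returns a procedure that ignores its argument and produces the complaint, so that the two-step call pattern above still works.

::: scheme
(define (make-account balance password)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch p m)
    (cond ((not (eq? p password))
           ;; wrong password: return a procedure so the outer call still works
           (lambda (x) "Incorrect password"))
          ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request: MAKE-ACCOUNT" m))))
  dispatch)
:::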
> **[]{#Exercise 3.4 label="Exercise 3.4"}Exercise 3.4:** Modify the
> `make/account` procedure of [Exercise 3.3](#Exercise 3.3) by adding
> another local state variable so that, if an account is accessed more
> than seven consecutive times with an incorrect password, it invokes
> the procedure `call/the/cops`.
### The Benefits of Introducing Assignment {#Section 3.1.2}
As we shall see, introducing assignment into our programming language
leads us into a thicket of difficult conceptual issues. Nevertheless,
viewing systems as collections of objects with local state is a powerful
technique for maintaining a modular design. As a simple example,
consider the design of a procedure `rand` that, whenever it is called,
returns an integer chosen at random.
It is not at all clear what is meant by "chosen at random." What we
presumably want is for successive calls to `rand` to produce a sequence
of numbers that has statistical properties of uniform distribution. We
will not discuss methods for generating suitable sequences here. Rather,
let us assume that we have a procedure `rand/update` that has the
property that if we start with a given number $x_1$ and form
::: scheme
$\color{SchemeDark}x_2$ = (rand-update $\color{SchemeDark}x_1$ )
$\color{SchemeDark}x_3$ = (rand-update $\color{SchemeDark}x_2$ )
:::
then the sequence of values $x_1$, $x_2$, $x_3$, $\dots$ will have the
desired statistical properties.[^134]
We can implement `rand` as a procedure with a local state variable `x`
that is initialized to some fixed value `random/init`. Each call to
`rand` computes `rand/update` of the current value of `x`, returns this
as the random number, and also stores this as the new value of `x`.
::: scheme
(define rand
  (let ((x random-init))
    (lambda ()
      (set! x (rand-update x))
      x)))
:::
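The procedures `random-init` and `rand-update` are left unspecified above. For readers who want to run these examples, here is a minimal stand-in based on a simple linear congruential update; the particular constants are an arbitrary assumption, chosen only so the code executes, and are not a recommendation for serious use.

::: scheme
(define random-init 7)

(define (rand-update x)
  ;; illustrative linear congruential step: (a*x + c) mod m
  (modulo (+ (* 1103515245 x) 12345) 2147483648))
:::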
Of course, we could generate the same sequence of random numbers without
using assignment by simply calling `rand/update` directly. However, this
would mean that any part of our program that used random numbers would
have to explicitly remember the current value of `x` to be passed as an
argument to `rand/update`. To realize what an annoyance this would be,
consider using random numbers to implement a technique called *Monte
Carlo simulation*.
The Monte Carlo method consists of choosing sample experiments at random
from a large set and then making deductions on the basis of the
probabilities estimated from tabulating the results of those
experiments. For example, we can approximate $\pi$ using the fact that
$6/\pi^2$ is the probability that two integers chosen at random will
have no factors in common; that is, that their greatest common divisor
will be 1.[^135] To obtain the approximation to $\pi$, we perform a
large number of experiments. In each experiment we choose two integers
at random and perform a test to see if their gcd is 1. The
fraction of times that the test is passed gives us our estimate of
$6/\pi^2$, and from this we obtain our approximation to $\pi$.
The heart of our program is a procedure `monte/carlo`, which takes as
arguments the number of times to try an experiment, together with the
experiment, represented as a no-argument procedure that will return
either true or false each time it is run. `monte/carlo` runs the
experiment for the designated number of trials and returns a number
telling the fraction of the trials in which the experiment was found to
be true.
::: scheme
(define (estimate-pi trials)
  (sqrt (/ 6 (monte-carlo trials cesaro-test))))

(define (cesaro-test)
  (= (gcd (rand) (rand)) 1))

(define (monte-carlo trials experiment)
  (define (iter trials-remaining trials-passed)
    (cond ((= trials-remaining 0)
           (/ trials-passed trials))
          ((experiment)
           (iter (- trials-remaining 1)
                 (+ trials-passed 1)))
          (else
           (iter (- trials-remaining 1)
                 trials-passed))))
  (iter trials 0))
:::
Now let us try the same computation using `rand/update` directly rather
than `rand`, the way we would be forced to proceed if we did not use
assignment to model local state:
::: scheme
(define (estimate-pi trials)
  (sqrt (/ 6 (random-gcd-test trials random-init))))

(define (random-gcd-test trials initial-x)
  (define (iter trials-remaining trials-passed x)
    (let ((x1 (rand-update x)))
      (let ((x2 (rand-update x1)))
        (cond ((= trials-remaining 0)
               (/ trials-passed trials))
              ((= (gcd x1 x2) 1)
               (iter (- trials-remaining 1)
                     (+ trials-passed 1)
                     x2))
              (else
               (iter (- trials-remaining 1)
                     trials-passed
                     x2))))))
  (iter trials 0 initial-x))
:::
While the program is still simple, it betrays some painful breaches of
modularity. In our first version of the program, using `rand`, we can
express the Monte Carlo method directly as a general `monte/carlo`
procedure that takes as an argument an arbitrary `experiment` procedure.
In our second version of the program, with no local state for the
random-number generator, `random/gcd/test` must explicitly manipulate
the random numbers `x1` and `x2` and recycle `x2` through the iterative
loop as the new input to `rand/update`. This explicit handling of the
random numbers intertwines the structure of accumulating test results
with the fact that our particular experiment uses two random numbers,
whereas other Monte Carlo experiments might use one random number or
three. Even the top-level procedure `estimate/pi` has to be concerned
with supplying an initial random number. The fact that the random-number
generator's insides are leaking out into other parts of the program
makes it difficult for us to isolate the Monte Carlo idea so that it can
be applied to other tasks. In the first version of the program,
assignment encapsulates the state of the random-number generator within
the `rand` procedure, so that the details of random-number generation
remain independent of the rest of the program.
The general phenomenon illustrated by the Monte Carlo example is this:
From the point of view of one part of a complex process, the other parts
appear to change with time. They have hidden time-varying local state.
If we wish to write computer programs whose structure reflects this
decomposition, we make computational objects (such as bank accounts and
random-number generators) whose behavior changes with time. We model
state with local state variables, and we model the changes of state with
assignments to those variables.
It is tempting to conclude this discussion by saying that, by
introducing assignment and the technique of hiding state in local
variables, we are able to structure systems in a more modular fashion
than if all state had to be manipulated explicitly, by passing
additional parameters. Unfortunately, as we shall see, the story is not
so simple.
> **[]{#Exercise 3.5 label="Exercise 3.5"}Exercise 3.5:** *Monte Carlo
> integration* is a method of estimating definite integrals by means of
> Monte Carlo simulation. Consider computing the area of a region of
> space described by a predicate $P(x, y)$ that is true for points
> $(x, y)$ in the region and false for points not in the region. For
> example, the region contained within a circle of radius 3 centered at
> (5, 7) is described by the predicate that tests whether
> $(x - 5)^2 + (y - 7)^2 \le 3^2$. To estimate the area of the region
> described by such a predicate, begin by choosing a rectangle that
> contains the region. For example, a rectangle with diagonally opposite
> corners at (2, 4) and (8, 10) contains the circle above. The desired
> integral is the area of that portion of the rectangle that lies in the
> region. We can estimate the integral by picking, at random, points
> $(x, y)$ that lie in the rectangle, and testing $P(x, y)$ for each
> point to determine whether the point lies in the region. If we try
> this with many points, then the fraction of points that fall in the
> region should give an estimate of the proportion of the rectangle that
> lies in the region. Hence, multiplying this fraction by the area of
> the entire rectangle should produce an estimate of the integral.
>
> Implement Monte Carlo integration as a procedure `estimate/integral`
> that takes as arguments a predicate `P`, upper and lower bounds `x1`,
> `x2`, `y1`, and `y2` for the rectangle, and the number of trials to
> perform in order to produce the estimate. Your procedure should use
> the same `monte/carlo` procedure that was used above to estimate
> $\pi$. Use your `estimate/integral` to produce an estimate of $\pi$ by
> measuring the area of a unit circle.
>
> You will find it useful to have a procedure that returns a number
> chosen at random from a given range. The following `random/in/range`
> procedure implements this in terms of the `random` procedure used in
> [Section 1.2.6](#Section 1.2.6), which returns a nonnegative number
> less than its input.[^136]
>
> ::: scheme
> (define (random-in-range low high)
>   (let ((range (- high low)))
>     (+ low (random range))))
> :::
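For concreteness, one possible shape of `estimate-integral` in terms of `monte-carlo` and `random-in-range` is sketched below; the argument order follows the exercise statement, and it assumes a `random` procedure that accepts real as well as integer arguments (as in MIT Scheme).

::: scheme
(define (estimate-integral P x1 x2 y1 y2 trials)
  (let ((area (* (- x2 x1) (- y2 y1))))
    ;; fraction of random points satisfying P, scaled by the rectangle's area
    (* area
       (monte-carlo trials
                    (lambda ()
                      (P (random-in-range x1 x2)
                         (random-in-range y1 y2)))))))
:::

With this sketch, `(estimate-integral (lambda (x y) (<= (+ (* x x) (* y y)) 1.0)) -1.0 1.0 -1.0 1.0 10000)` would estimate the area of a unit circle, and hence give an approximation to $\pi$.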
> **[]{#Exercise 3.6 label="Exercise 3.6"}Exercise 3.6:** It is useful
> to be able to reset a random-number generator to produce a sequence
> starting from a given value. Design a new `rand` procedure that is
> called with an argument that is either the symbol `generate` or the
> symbol `reset` and behaves as follows: `(rand 'generate)` produces a
> new random number;
> `((rand 'reset)`$\;\langle$*new-value*$\kern0.11em\rangle$`)` resets
> the internal state variable to the designated
> $\langle$*new-value*$\kern0.08em\rangle$. Thus, by resetting the
> state, one can generate repeatable sequences. These are very handy to
> have when testing and debugging programs that use random numbers.
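One possible message-passing sketch, assuming the same `random-init` and `rand-update` as before; the internal procedure names are illustrative.

::: scheme
(define rand
  (let ((x random-init))
    (define (generate)
      (set! x (rand-update x))
      x)
    (define (reset new-value)
      (set! x new-value)
      new-value)
    (lambda (message)
      (cond ((eq? message 'generate) (generate))
            ((eq? message 'reset) reset)   ; returns a procedure to apply to the new value
            (else (error "Unknown request: RAND" message))))))
:::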
### The Costs of Introducing Assignment {#Section 3.1.3}
As we have seen, the `set!` operation enables us to model objects that
have local state. However, this advantage comes at a price. Our
programming language can no longer be interpreted in terms of the
substitution model of procedure application that we introduced in
[Section 1.1.5](#Section 1.1.5). Moreover, no simple model with "nice"
mathematical properties can be an adequate framework for dealing with
objects and assignment in programming languages.
So long as we do not use assignments, two evaluations of the same
procedure with the same arguments will produce the same result, so that
procedures can be viewed as computing mathematical functions.
Programming without any use of assignments, as we did throughout the
first two chapters of this book, is accordingly known as *functional
programming*.
To understand how assignment complicates matters, consider a simplified
version of the `make/withdraw` procedure of [Section
3.1.1](#Section 3.1.1) that does not bother to check for an insufficient
amount:
::: scheme
(define (make-simplified-withdraw balance)
  (lambda (amount)
    (set! balance (- balance amount))
    balance))

(define W (make-simplified-withdraw 25))

(W 20)
*5*
(W 10)
*-5*
:::
Compare this procedure with the following `make/decrementer` procedure,
which does not use `set!`:
::: scheme
(define (make-decrementer balance) (lambda (amount) (- balance amount)))
:::
`make/decrementer` returns a procedure that subtracts its input from a
designated amount `balance`, but there is no accumulated effect over
successive calls, as with `make/simplified/withdraw`:
::: scheme
(define D (make-decrementer 25)) (D 20) *5* (D 10) *15*
:::
We can use the substitution model to explain how `make/decrementer`
works. For instance, let us analyze the evaluation of the expression
::: scheme
((make-decrementer 25) 20)
:::
We first simplify the operator of the combination by substituting 25 for
`balance` in the body of `make/decrementer`. This reduces the expression
to
::: scheme
((lambda (amount) (- 25 amount)) 20)
:::
Now we apply the operator by substituting 20 for `amount` in the body of
the `lambda` expression:
::: scheme
(- 25 20)
:::
The final answer is 5.
Observe, however, what happens if we attempt a similar substitution
analysis with `make/simplified/withdraw`:
::: scheme
((make-simplified-withdraw 25) 20)
:::
We first simplify the operator by substituting 25 for `balance` in the
body of `make/simplified/withdraw`. This reduces the expression to[^137]
::: scheme
((lambda (amount) (set! balance (- 25 amount)) 25) 20)
:::
Now we apply the operator by substituting 20 for `amount` in the body of
the `lambda` expression:
::: scheme
(set! balance (- 25 20)) 25
:::
If we adhered to the substitution model, we would have to say that the
meaning of the procedure application is to first set `balance` to 5 and
then return 25 as the value of the expression. This gets the wrong
answer. In order to get the correct answer, we would have to somehow
distinguish the first occurrence of `balance` (before the effect of the
`set!`) from the second occurrence of `balance` (after the effect of the
`set!`), and the substitution model cannot do this.
The trouble here is that substitution is based ultimately on the notion
that the symbols in our language are essentially names for values. But
as soon as we introduce `set!` and the idea that the value of a variable
can change, a variable can no longer be simply a name. Now a variable
somehow refers to a place where a value can be stored, and the value
stored at this place can change. In [Section 3.2](#Section 3.2) we will
see how environments play this role of "place" in our computational
model.
#### Sameness and change {#sameness-and-change .unnumbered}
The issue surfacing here is more profound than the mere breakdown of a
particular model of computation. As soon as we introduce change into our
computational models, many notions that were previously straightforward
become problematical. Consider the concept of two things being "the
same."
Suppose we call `make/decrementer` twice with the same argument to
create two procedures:
::: scheme
(define D1 (make-decrementer 25)) (define D2 (make-decrementer 25))
:::
Are `D1` and `D2` the same? An acceptable answer is yes, because `D1`
and `D2` have the same computational behavior---each is a procedure that
subtracts its input from 25. In fact, `D1` could be substituted for `D2`
in any computation without changing the result.
Contrast this with making two calls to `make/simplified/withdraw`:
::: scheme
(define W1 (make-simplified-withdraw 25)) (define W2
(make-simplified-withdraw 25))
:::
Are `W1` and `W2` the same? Surely not, because calls to `W1` and `W2`
have distinct effects, as shown by the following sequence of
interactions:
::: scheme
(W1 20) *5* (W1 20) *-15* (W2 20) *5*
:::
Even though `W1` and `W2` are "equal" in the sense that they are both
created by evaluating the same expression,
`(make/simplified/withdraw 25)`, it is not true that `W1` could be
substituted for `W2` in any expression without changing the result of
evaluating the expression.
A language that supports the concept that "equals can be substituted for
equals" in an expression without changing the value of the expression is
said to be *referentially transparent*. Referential transparency is
violated when we include `set!` in our computer language. This makes it
tricky to determine when we can simplify expressions by substituting
equivalent expressions. Consequently, reasoning about programs that use
assignment becomes drastically more difficult.
Once we forgo referential transparency, the notion of what it means for
computational objects to be "the same" becomes difficult to capture in a
formal way. Indeed, the meaning of "same" in the real world that our
programs model is hardly clear in itself. In general, we can determine
that two apparently identical objects are indeed "the same one" only by
modifying one object and then observing whether the other object has
changed in the same way. But how can we tell if an object has "changed"
other than by observing the "same" object twice and seeing whether some
property of the object differs from one observation to the next? Thus,
we cannot determine "change" without some *a priori* notion of
"sameness," and we cannot determine sameness without observing the
effects of change.
As an example of how this issue arises in programming, consider the
situation where Peter and Paul have a bank account with \$100 in it.
There is a substantial difference between modeling this as
::: scheme
(define peter-acc (make-account 100)) (define paul-acc (make-account
100))
:::
and modeling it as
::: scheme
(define peter-acc (make-account 100)) (define paul-acc peter-acc)
:::
In the first situation, the two bank accounts are distinct. Transactions
made by Peter will not affect Paul's account, and vice versa. In the
second situation, however, we have defined `paul/acc` to be *the same
thing* as `peter/acc`. In effect, Peter and Paul now have a joint bank
account, and if Peter makes a withdrawal from `peter/acc` Paul will
observe less money in `paul/acc`. These two similar but distinct
situations can cause confusion in building computational models. With
the shared account, in particular, it can be especially confusing that
there is one object (the bank account) that has two different names
(`peter/acc` and `paul/acc`); if we are searching for all the places in
our program where `paul/acc` can be changed, we must remember to look
also at things that change `peter/acc`.[^138]
With reference to the above remarks on "sameness" and "change," observe
that if Peter and Paul could only examine their bank balances, and could
not perform operations that changed the balance, then the issue of
whether the two accounts are distinct would be moot. In general, so long
as we never modify data objects, we can regard a compound data object to
be precisely the totality of its pieces. For example, a rational number
is determined by giving its numerator and its denominator. But this view
is no longer valid in the presence of change, where a compound data
object has an "identity" that is something different from the pieces of
which it is composed. A bank account is still "the same" bank account
even if we change the balance by making a withdrawal; conversely, we
could have two different bank accounts with the same state information.
This complication is a consequence, not of our programming language, but
of our perception of a bank account as an object. We do not, for
example, ordinarily regard a rational number as a changeable object with
identity, such that we could change the numerator and still have "the
same" rational number.
#### Pitfalls of imperative programming {#pitfalls-of-imperative-programming .unnumbered}
In contrast to functional programming, programming that makes extensive
use of assignment is known as *imperative programming*. In addition to
raising complications about computational models, programs written in
imperative style are susceptible to bugs that cannot occur in functional
programs. For example, recall the iterative factorial program from
[Section 1.2.1](#Section 1.2.1):
::: scheme
(define (factorial n)
  (define (iter product counter)
    (if (\> counter n)
        product
        (iter (\* counter product)
              (+ counter 1))))
  (iter 1 1))
:::
Instead of passing arguments in the internal iterative loop, we could
adopt a more imperative style by using explicit assignment to update the
values of the variables `product` and `counter`:
::: scheme
(define (factorial n)
  (let ((product 1)
        (counter 1))
    (define (iter)
      (if (\> counter n)
          product
          (begin (set! product (\* counter product))
                 (set! counter (+ counter 1))
                 (iter))))
    (iter)))
:::
This does not change the results produced by the program, but it does
introduce a subtle trap. How do we decide the order of the assignments?
As it happens, the program is correct as written. But writing the
assignments in the opposite order
::: scheme
(set! counter (+ counter 1)) (set! product (\* counter product))
:::
would have produced a different, incorrect result. In general,
programming with assignment forces us to carefully consider the relative
orders of the assignments to make sure that each statement is using the
correct version of the variables that have been changed. This issue
simply does not arise in functional programs.[^139]
The complexity of imperative programs becomes even worse if we consider
applications in which several processes execute concurrently. We will
return to this in [Section 3.4](#Section 3.4). First, however, we will
address the issue of providing a computational model for expressions
that involve assignment, and explore the uses of objects with local
state in designing simulations.
> **[]{#Exercise 3.7 label="Exercise 3.7"}Exercise 3.7:** Consider the
> bank account objects created by `make/account`, with the password
> modification described in [Exercise 3.3](#Exercise 3.3). Suppose that
> our banking system requires the ability to make joint accounts. Define
> a procedure `make/joint` that accomplishes this. `make/joint` should
> take three arguments. The first is a password-protected account. The
> second argument must match the password with which the account was
> defined in order for the `make/joint` operation to proceed. The third
> argument is a new password. `make/joint` is to create an additional
> access to the original account using the new password. For example, if
> `peter/acc` is a bank account with password `open/sesame`, then
>
> ::: scheme
> (define paul-acc (make-joint peter-acc 'open-sesame 'rosebud))
> :::
>
> will allow one to make transactions on `peter/acc` using the name
> `paul/acc` and the password `rosebud`. You may wish to modify your
> solution to [Exercise 3.3](#Exercise 3.3) to accommodate this new
> feature.
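One possible sketch, assuming the account interface of Exercise 3.3 in which a request has the form `(acc password message)`: the joint account simply forwards requests under the original password whenever the new password matches. A fuller solution would also verify the original password at the moment the joint access is created.

::: scheme
(define (make-joint account password new-password)
  (lambda (p m)
    (if (eq? p new-password)
        ;; forward the request to the underlying account
        (account password m)
        (lambda (x) "Incorrect password"))))
:::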
> **[]{#Exercise 3.8 label="Exercise 3.8"}Exercise 3.8:** When we
> defined the evaluation model in [Section 1.1.3](#Section 1.1.3), we
> said that the first step in evaluating an expression is to evaluate
> its subexpressions. But we never specified the order in which the
> subexpressions should be evaluated (e.g., left to right or right to
> left). When we introduce assignment, the order in which the arguments
> to a procedure are evaluated can make a difference to the result.
> Define a simple procedure `f` such that evaluating
>
> ::: scheme
> (+ (f 0) (f 1))
> :::
>
> will return 0 if the arguments to `+` are evaluated from left to right
> but will return 1 if the arguments are evaluated from right to left.
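One possible `f` is sketched below: it keeps a local flag recording whether it has been called before, returning its argument on the first call and 0 on every later call, so that the value of `(+ (f 0) (f 1))` reveals the evaluation order. The flag name is illustrative.

::: scheme
(define f
  (let ((first-call? #t))
    (lambda (x)
      (if first-call?
          (begin (set! first-call? #f) x)  ; first call: return the argument
          0))))                            ; later calls: return 0
:::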
## The Environment Model of Evaluation {#Section 3.2}
When we introduced compound procedures in [Chapter 1](#Chapter 1), we
used the substitution model of evaluation ([Section
1.1.5](#Section 1.1.5)) to define what is meant by applying a procedure
to arguments:
- To apply a compound procedure to arguments, evaluate the body of the
procedure with each formal parameter replaced by the corresponding
argument.
Once we admit assignment into our programming language, such a
definition is no longer adequate. In particular, [Section
3.1.3](#Section 3.1.3) argued that, in the presence of assignment, a
variable can no longer be considered to be merely a name for a value.
Rather, a variable must somehow designate a "place" in which values can
be stored. In our new model of evaluation, these places will be
maintained in structures called *environments*.
An environment is a sequence of *frames*. Each frame is a table
(possibly empty) of *bindings*, which associate variable names with
their corresponding values. (A single frame may contain at most one
binding for any variable.) Each frame also has a pointer to its
*enclosing environment*, unless, for the purposes of discussion, the
frame is considered to be *global*. The *value of a variable* with
respect to an environment is the value given by the binding of the
variable in the first frame in the environment that contains a binding
for that variable. If no frame in the sequence specifies a binding for
the variable, then the variable is said to be *unbound* in the
environment.
[Figure 3.1](#Figure 3.1) shows a simple environment structure
consisting of three frames, labeled I, II, and III. In the diagram, A,
B, C, and D are pointers to environments. C and D point to the same
environment. The variables `z` and `x` are bound in frame II, while `y`
and `x` are bound in frame I. The value of `x` in environment D is 3.
The value of `x` with respect to environment B is also 3. This is
determined as follows: We examine the first frame in the sequence (frame
III) and do not find a binding for `x`, so we proceed to the enclosing
environment D and find the binding in frame I. On the other hand, the
value of `x` in environment A is 7, because the first frame in the
sequence (frame II) contains a binding of `x` to 7. With respect to
environment A, the binding of `x` to 7 in frame II is said to *shadow*
the binding of `x` to 3 in frame I.
[]{#Figure 3.1 label="Figure 3.1"}
![image](fig/chap3/Fig3.1.pdf){width="48mm"}
**Figure 3.1:** A simple environment structure.
The environment is crucial to the evaluation process, because it
determines the context in which an expression should be evaluated.
Indeed, one could say that expressions in a programming language do not,
in themselves, have any meaning. Rather, an expression acquires a
meaning only with respect to some environment in which it is evaluated.
Even the interpretation of an expression as straightforward as `(+ 1 1)`
depends on an understanding that one is operating in a context in which
`+` is the symbol for addition. Thus, in our model of evaluation we will
always speak of evaluating an expression with respect to some
environment. To describe interactions with the interpreter, we will
suppose that there is a global environment, consisting of a single frame
(with no enclosing environment) that includes values for the symbols
associated with the primitive procedures. For example, the idea that `+`
is the symbol for addition is captured by saying that the symbol `+` is
bound in the global environment to the primitive addition procedure.
### The Rules for Evaluation {#Section 3.2.1}
The overall specification of how the interpreter evaluates a combination
remains the same as when we first introduced it in [Section
1.1.3](#Section 1.1.3):
- To evaluate a combination:
1. Evaluate the subexpressions of the combination.[^140]
2. Apply the value of the operator subexpression to the values of the
operand subexpressions.
The environment model of evaluation replaces the substitution model in
specifying what it means to apply a compound procedure to arguments.
In the environment model of evaluation, a procedure is always a pair
consisting of some code and a pointer to an environment. Procedures are
created in one way only: by evaluating a λ-expression. This produces a
procedure whose code is obtained from the text of the λ-expression and
whose environment is the environment in which the λ-expression was
evaluated to produce the procedure. For example, consider the procedure
definition
::: scheme
(define (square x) (\* x x))
:::
evaluated in the global environment. The procedure definition syntax is
just syntactic sugar for an underlying implicit λ-expression. It would
have been equivalent to have used
::: scheme
(define square (lambda (x) (\* x x)))
:::
which evaluates `(lambda (x) (* x x))` and binds `square` to the
resulting value, all in the global environment.
[Figure 3.2](#Figure 3.2) shows the result of evaluating this `define`
expression. The procedure object is a pair whose code specifies that the
procedure has one formal parameter, namely `x`, and a procedure body
`(* x x)`. The environment part of the procedure is a pointer to the
global environment, since that is the environment in which the
λ-expression was evaluated to produce the procedure. A new binding,
which associates the procedure object with the symbol `square`, has been
added to the global frame. In general, `define` creates definitions by
adding bindings to frames.
[]{#Figure 3.2 label="Figure 3.2"}
![image](fig/chap3/Fig3.2b.pdf){width="49mm"}
> **Figure 3.2:** Environment structure produced by evaluating\
> `(define (square x) (* x x))` in the global environment.
Now that we have seen how procedures are created, we can describe how
procedures are applied. The environment model specifies: To apply a
procedure to arguments, create a new environment containing a frame that
binds the parameters to the values of the arguments. The enclosing
environment of this frame is the environment specified by the procedure.
Now, within this new environment, evaluate the procedure body.
To show how this rule is followed, [Figure 3.3](#Figure 3.3) illustrates
the environment structure created by evaluating the expression
`(square 5)` in the global environment, where `square` is the procedure
generated in [Figure 3.2](#Figure 3.2). Applying the procedure results
in the creation of a new environment, labeled E1 in the figure, that
begins with a frame in which `x`, the formal parameter for the
procedure, is bound to the argument 5. The pointer leading upward from
this frame shows that the frame's enclosing environment is the global
environment. The global environment is chosen here, because this is the
environment that is indicated as part of the `square` procedure object.
Within E1, we evaluate the body of the procedure, `(* x x)`. Since the
value of `x` in E1 is 5, the result is `(* 5 5)`, or 25.
[]{#Figure 3.3 label="Figure 3.3"}
![image](fig/chap3/Fig3.3b.pdf){width="78mm"}
> **Figure 3.3:** Environment created by evaluating `(square 5)` in the
> global environment.
The environment model of procedure application can be summarized by two
rules:
- A procedure object is applied to a set of arguments by constructing
a frame, binding the formal parameters of the procedure to the
arguments of the call, and then evaluating the body of the procedure
in the context of the new environment constructed. The new frame has
as its enclosing environment the environment part of the procedure
object being applied.
- A procedure is created by evaluating a λ-expression relative to a
given environment. The resulting procedure object is a pair
consisting of the text of the λ-expression and a pointer to the
environment in which the procedure was created.
We also specify that defining a symbol using `define` creates a binding
in the current environment frame and assigns to the symbol the indicated
value.[^141] Finally, we specify the behavior of `set!`, the operation
that forced us to introduce the environment model in the first place.
Evaluating the expression
`(set!`$\;\langle$*variable*$\kern0.08em\rangle$$\;\langle$*value*$\kern0.08em\rangle$`)`
in some environment locates the binding of the variable in the
environment and changes that binding to indicate the new value. That is,
one finds the first frame in the environment that contains a binding for
the variable and modifies that frame. If the variable is unbound in the
environment, then `set!` signals an error.
These evaluation rules, though considerably more complex than the
substitution model, are still reasonably straightforward. Moreover, the
evaluation model, though abstract, provides a correct description of how
the interpreter evaluates expressions. In [Chapter 4](#Chapter 4) we
shall see how this model can serve as a blueprint for implementing a
working interpreter. The following sections elaborate the details of the
model by analyzing some illustrative programs.
### Applying Simple Procedures {#Section 3.2.2}
When we introduced the substitution model in [Section
1.1.5](#Section 1.1.5) we showed how the combination `(f 5)` evaluates
to 136, given the following procedure definitions:
::: scheme
(define (square x)
  (\* x x))

(define (sum-of-squares x y)
  (+ (square x) (square y)))

(define (f a)
  (sum-of-squares (+ a 1) (\* a 2)))
:::
We can analyze the same example using the environment model. [Figure
3.4](#Figure 3.4) shows the three procedure objects created by
evaluating the definitions of `f`, `square`, and `sum/of/squares` in the
global environment. Each procedure object consists of some code,
together with a pointer to the global environment.
[]{#Figure 3.4 label="Figure 3.4"}
![image](fig/chap3/Fig3.4a.pdf){width="106mm"}
**Figure 3.4:** Procedure objects in the global frame.
In [Figure 3.5](#Figure 3.5) we see the environment structure created by
evaluating the expression `(f 5)`. The call to `f` creates a new
environment E1 beginning with a frame in which `a`, the formal parameter
of `f`, is bound to the argument 5. In E1, we evaluate the body of `f`:
::: scheme
(sum-of-squares (+ a 1) (\* a 2))
:::
To evaluate this combination, we first evaluate the subexpressions. The
first subexpression, `sum/of/squares`, has a value that is a procedure
object. (Notice how this value is found: We first look in the first
frame of E1, which contains no binding for `sum/of/squares`. Then we
proceed to the enclosing environment, i.e. the global environment, and
find the binding shown in [Figure 3.4](#Figure 3.4).) The other two
subexpressions are evaluated by applying the primitive operations `+`
and `*` to evaluate the two combinations `(+ a 1)` and `(* a 2)` to
obtain 6 and 10, respectively.
Now we apply the procedure object `sum/of/squares` to the arguments 6
and 10. This results in a new environment E2 in which the formal
parameters `x` and `y` are bound to the arguments. Within E2 we evaluate
the combination `(+ (square x) (square y))`. This leads us to evaluate
`(square x)`, where `square` is found in the global frame and `x` is 6.
Once again, we set up a new environment, E3, in which `x` is bound to 6,
and within this we evaluate the body of `square`, which is `(* x x)`.
Also as part of applying `sum/of/squares`, we must evaluate the
subexpression `(square y)`, where `y` is 10. This second call to
`square` creates another environment, E4, in which `x`, the formal
parameter of `square`, is bound to 10. And within E4 we must evaluate
`(* x x)`.
[]{#Figure 3.5 label="Figure 3.5"}
![image](fig/chap3/Fig3.5a.pdf){width="100mm"}
> **Figure 3.5:** Environments created by evaluating `(f 5)` using the
> procedures in [Figure 3.4](#Figure 3.4).
The important point to observe is that each call to `square` creates a
new environment containing a binding for `x`. We can see here how the
different frames serve to keep separate the different local variables
all named `x`. Notice that each frame created by `square` points to the
global environment, since this is the environment indicated by the
`square` procedure object.
After the subexpressions are evaluated, the results are returned. The
values generated by the two calls to `square` are added by
`sum/of/squares`, and this result is returned by `f`. Since our focus
here is on the environment structures, we will not dwell on how these
returned values are passed from call to call; however, this is also an
important aspect of the evaluation process, and we will return to it in
detail in [Chapter 5](#Chapter 5).
> **[]{#Exercise 3.9 label="Exercise 3.9"}Exercise 3.9:** In [Section
> 1.2.1](#Section 1.2.1) we used the substitution model to analyze two
> procedures for computing factorials, a recursive version
>
> ::: scheme
> (define (factorial n) (if (= n 1) 1 (\* n (factorial (- n 1)))))
> :::
>
> and an iterative version
>
> ::: scheme
> (define (factorial n)
>   (fact-iter 1 1 n))
>
> (define (fact-iter product counter max-count)
>   (if (\> counter max-count)
>       product
>       (fact-iter (\* counter product)
>                  (+ counter 1)
>                  max-count)))
> :::
>
> Show the environment structures created by evaluating\
> `(factorial 6)` using each version of the `factorial` procedure.[^142]
### Frames as the Repository of Local State {#Section 3.2.3}
We can turn to the environment model to see how procedures and
assignment can be used to represent objects with local state. As an
example, consider the "withdrawal processor" from [Section
3.1.1](#Section 3.1.1) created by calling the procedure
::: scheme
(define (make-withdraw balance)
  (lambda (amount)
    (if (\>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        \"Insufficient funds\")))
:::
Let us describe the evaluation of
::: scheme
(define W1 (make-withdraw 100))
:::
followed by
::: scheme
(W1 50) *50*
:::
[Figure 3.6](#Figure 3.6) shows the result of defining the
`make/withdraw` procedure in the global environment. This produces a
procedure object that contains a pointer to the global environment. So
far, this is no different from the examples we have already seen, except
that the body of the procedure is itself a λ-expression.
[]{#Figure 3.6 label="Figure 3.6"}
![image](fig/chap3/Fig3.6b.pdf){width="91mm"}
> **Figure 3.6:** Result of defining `make/withdraw` in the global
> environment.
[]{#Figure 3.7 label="Figure 3.7"}
![image](fig/chap3/Fig3.7a.pdf){width="100mm"}
**Figure 3.7:** Result of evaluating `(define W1 (make/withdraw 100))`.
The interesting part of the computation happens when we apply the
procedure `make/withdraw` to an argument:
::: scheme
(define W1 (make-withdraw 100))
:::
We begin, as usual, by setting up an environment E1 in which the formal
parameter `balance` is bound to the argument 100. Within this
environment, we evaluate the body of `make/withdraw`, namely the
λ-expression. This constructs a new procedure object, whose code is as
specified by the `lambda` and whose environment is E1, the environment
in which the `lambda` was evaluated to produce the procedure. The
resulting procedure object is the value returned by the call to
`make/withdraw`. This is bound to `W1` in the global environment, since
the `define` itself is being evaluated in the global environment.
[Figure 3.7](#Figure 3.7) shows the resulting environment structure.
[]{#Figure 3.8 label="Figure 3.8"}
![image](fig/chap3/Fig3.8c.pdf){width="99mm"}
**Figure 3.8:** Environments created by applying the procedure object
`W1`.
Now we can analyze what happens when `W1` is applied to an argument:
::: scheme
(W1 50) *50*
:::
We begin by constructing a frame in which `amount`, the formal parameter
of `W1`, is bound to the argument 50. The crucial point to observe is
that this frame has as its enclosing environment not the global
environment, but rather the environment E1, because this is the
environment that is specified by the `W1` procedure object. Within this
new environment, we evaluate the body of the procedure:
::: scheme
(if (\>= balance amount) (begin (set! balance (- balance amount))
balance) \"Insufficient funds\")
:::
The resulting environment structure is shown in [Figure
3.8](#Figure 3.8). The expression being evaluated references both
`amount` and `balance`. `amount` will be found in the first frame in the
environment, while `balance` will be found by following the
enclosing-environment pointer to E1.
[]{#Figure 3.9 label="Figure 3.9"}
![image](fig/chap3/Fig3.9a.pdf){width="96mm"}
**Figure 3.9:** Environments after the call to `W1`.
When the `set!` is executed, the binding of `balance` in E1 is changed.
At the completion of the call to `W1`, `balance` is 50, and the frame
that contains `balance` is still pointed to by the procedure object
`W1`. The frame that binds `amount` (in which we executed the code that
changed `balance`) is no longer relevant, since the procedure call that
constructed it has terminated, and there are no pointers to that frame
from other parts of the environment. The next time `W1` is called, this
will build a new frame that binds `amount` and whose enclosing
environment is E1. We see that E1 serves as the "place" that holds the
local state variable for the procedure object `W1`. [Figure
3.9](#Figure 3.9) shows the situation after the call to `W1`.
Observe what happens when we create a second "withdraw" object by making
another call to `make/withdraw`:
::: scheme
(define W2 (make-withdraw 100))
:::
[]{#Figure 3.10 label="Figure 3.10"}
![image](fig/chap3/Fig3.10a.pdf){width="108mm"}
> **Figure 3.10:** Using `(define W2 (make/withdraw 100))` to create a
> second object.
This produces the environment structure of [Figure 3.10](#Figure 3.10),
which shows that `W2` is a procedure object, that is, a pair with some
code and an environment. The environment E2 for `W2` was created by the
call to `make/withdraw`. It contains a frame with its own local binding
for `balance`. On the other hand, `W1` and `W2` have the same code: the
code specified by the λ-expression in the body of `make/withdraw`.[^143]
We see here why `W1` and `W2` behave as independent objects. Calls to
`W1` reference the state variable `balance` stored in E1, whereas calls
to `W2` reference the `balance` stored in E2. Thus, changes to the local
state of one object do not affect the other object.
> **[]{#Exercise 3.10 label="Exercise 3.10"}Exercise 3.10:** In the
> `make/withdraw` procedure, the local variable `balance` is created as
> a parameter of `make/withdraw`. We could also create the local state
> variable explicitly, using `let`, as follows:
>
> ::: scheme
> (define (make-withdraw initial-amount)
>   (let ((balance initial-amount))
>     (lambda (amount)
>       (if (\>= balance amount)
>           (begin (set! balance (- balance amount))
>                  balance)
>           \"Insufficient funds\"))))
> :::
>
> Recall from [Section 1.3.2](#Section 1.3.2) that `let` is simply
> syntactic sugar for a procedure call:
>
> ::: scheme
> (let
> (( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}\rangle$ ))
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> is interpreted as an alternate syntax for
>
> ::: scheme
> ((lambda
> ( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$ )
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}\rangle$ )
> :::
>
> Use the environment model to analyze this alternate version of
> `make/withdraw`, drawing figures like the ones above to illustrate the
> interactions
>
> ::: scheme
> (define W1 (make-withdraw 100)) (W1 50) (define W2 (make-withdraw
> 100))
> :::
>
> Show that the two versions of `make/withdraw` create objects with the
> same behavior. How do the environment structures differ for the two
> versions?
### Internal Definitions {#Section 3.2.4}
[Section 1.1.8](#Section 1.1.8) introduced the idea that procedures can
have internal definitions, thus leading to a block structure as in the
following procedure to compute square roots:
::: scheme
(define (sqrt x)
  (define (good-enough? guess)
    (\< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
:::
Now we can use the environment model to see why these internal
definitions behave as desired. [Figure 3.11](#Figure 3.11) shows the
point in the evaluation of the expression `(sqrt 2)` where the internal
procedure `good/enough?` has been called for the first time with `guess`
equal to 1.
Observe the structure of the environment. `sqrt` is a symbol in the
global environment that is bound to a procedure object whose associated
environment is the global environment. When `sqrt` was called, a new
environment E1 was formed, subordinate to the global environment, in
which the parameter `x` is bound to 2. The body of `sqrt` was then
evaluated in E1. Since the first expression in the body of `sqrt` is
::: scheme
(define (good-enough? guess) (\< (abs (- (square guess) x)) 0.001))
:::
evaluating this expression defined the procedure `good/enough?` in the
environment E1. To be more precise, the symbol `good/enough?` was added
to the first frame of E1, bound to a procedure object whose associated
environment is E1. Similarly, `improve` and `sqrt/iter` were defined as
procedures in E1. For conciseness, [Figure 3.11](#Figure 3.11) shows
only the procedure object for `good/enough?`.
[]{#Figure 3.11 label="Figure 3.11"}
![image](fig/chap3/Fig3.11a.pdf){width="107mm"}
> **Figure 3.11:** `sqrt` procedure with internal definitions.
After the local procedures were defined, the expression
`(sqrt/iter 1.0)` was evaluated, still in environment E1. So the
procedure object bound to `sqrt/iter` in E1 was called with 1 as an
argument. This created an environment E2 in which `guess`, the parameter
of `sqrt/iter`, is bound to 1. `sqrt/iter` in turn called `good/enough?`
with the value of `guess` (from E2) as the argument for `good/enough?`.
This set up another environment, E3, in which `guess` (the parameter of
`good/enough?`) is bound to 1. Although `sqrt/iter` and `good/enough?`
both have a parameter named `guess`, these are two distinct local
variables located in different frames. Also, E2 and E3 both have E1 as
their enclosing environment, because the `sqrt/iter` and `good/enough?`
procedures both have E1 as their environment part. One consequence of
this is that the symbol `x` that appears in the body of `good/enough?`
will reference the binding of `x` that appears in E1, namely the value
of `x` with which the original `sqrt` procedure was called.
The environment model thus explains the two key properties that make
local procedure definitions a useful technique for modularizing
programs:
- The names of the local procedures do not interfere with names
external to the enclosing procedure, because the local procedure
names will be bound in the frame that the procedure creates when it
is run, rather than being bound in the global environment.
- The local procedures can access the arguments of the enclosing
procedure, simply by using parameter names as free variables. This
is because the body of the local procedure is evaluated in an
environment that is subordinate to the evaluation environment for
the enclosing procedure.
> **[]{#Exercise 3.11 label="Exercise 3.11"}Exercise 3.11:** In [Section
> 3.2.3](#Section 3.2.3) we saw how the environment model described the
> behavior of procedures with local state. Now we have seen how internal
> definitions work. A typical message-passing procedure contains both of
> these aspects. Consider the bank account procedure of [Section
> 3.1.1](#Section 3.1.1):
>
> ::: scheme
> (define (make-account balance)
>   (define (withdraw amount)
>     (if (\>= balance amount)
>         (begin (set! balance (- balance amount))
>                balance)
>         \"Insufficient funds\"))
>   (define (deposit amount)
>     (set! balance (+ balance amount))
>     balance)
>   (define (dispatch m)
>     (cond ((eq? m 'withdraw) withdraw)
>           ((eq? m 'deposit) deposit)
>           (else (error \"Unknown request: MAKE-ACCOUNT\" m))))
>   dispatch)
> :::
>
> Show the environment structure generated by the sequence of
> interactions
>
> ::: scheme
> (define acc (make-account 50)) ((acc 'deposit) 40) *90* ((acc
> 'withdraw) 60) *30*
> :::
>
> Where is the local state for `acc` kept? Suppose we define another
> account
>
> ::: scheme
> (define acc2 (make-account 100))
> :::
>
> How are the local states for the two accounts kept distinct? Which
> parts of the environment structure are shared between `acc` and
> `acc2`?
## Modeling with Mutable Data {#Section 3.3}
Chapter 2 dealt with compound data as a means for constructing
computational objects that have several parts, in order to model
real-world objects that have several aspects. In that chapter we
introduced the discipline of data abstraction, according to which data
structures are specified in terms of constructors, which create data
objects, and selectors, which access the parts of compound data objects.
But we now know that there is another aspect of data that [Chapter
2](#Chapter 2) did not address. The desire to model systems composed of
objects that have changing state leads us to the need to modify compound
data objects, as well as to construct and select from them. In order to
model compound objects with changing state, we will design data
abstractions to include, in addition to selectors and constructors,
operations called *mutators*, which modify data objects. For instance,
modeling a banking system requires us to change account balances. Thus,
a data structure for representing bank accounts might admit an operation
::: scheme
(set-balance!
$\color{SchemeDark}\langle$ *account* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *new-value* $\color{SchemeDark}\rangle$ )
:::
that changes the balance of the designated account to the designated new
value. Data objects for which mutators are defined are known as *mutable
data objects*.
[Chapter 2](#Chapter 2) introduced pairs as a general-purpose "glue" for
synthesizing compound data. We begin this section by defining basic
mutators for pairs, so that pairs can serve as building blocks for
constructing mutable data objects. These mutators greatly enhance the
representational power of pairs, enabling us to build data structures
other than the sequences and trees that we worked with in [Section
2.2](#Section 2.2). We also present some examples of simulations in
which complex systems are modeled as collections of objects with local
state.
### Mutable List Structure {#Section 3.3.1}
The basic operations on pairs---`cons`, `car`, and `cdr`---can be used
to construct list structure and to select parts from list structure, but
they are incapable of modifying list structure. The same is true of the
list operations we have used so far, such as `append` and `list`, since
these can be defined in terms of `cons`, `car`, and `cdr`. To modify
list structures we need new operations.
The primitive mutators for pairs are `set/car!` and `set/cdr!`.
`set/car!` takes two arguments, the first of which must be a pair. It
modifies this pair, replacing the `car` pointer by a pointer to the
second argument of `set/car!`.[^144]
As an example, suppose that `x` is bound to the list `((a b) c d)` and
`y` to the list `(e f)` as illustrated in [Figure 3.12](#Figure 3.12).
Evaluating the expression ` (set/car! x y)` modifies the pair to which
`x` is bound, replacing its `car` by the value of `y`. The result of the
operation is shown in [Figure 3.13](#Figure 3.13). The structure `x` has
been modified and would now be printed as `((e f) c d)`. The pairs
representing the list `(a b)`, identified by the pointer that was
replaced, are now detached from the original structure.[^145]
Compare [Figure 3.13](#Figure 3.13) with [Figure 3.14](#Figure 3.14),
which illustrates the result of executing `(define z (cons y (cdr x)))`
with `x` and `y` bound to the original lists of [Figure
3.12](#Figure 3.12). The variable `z` is now bound to a new pair created
by the `cons` operation; the list to which `x` is bound is unchanged.
[]{#Figure 3.12 label="Figure 3.12"}
![image](fig/chap3/Fig3.12b.pdf){width="72mm"}
> **Figure 3.12:** Lists `x`: `((a b) c d)` and `y`: `(e f)`.
[]{#Figure 3.13 label="Figure 3.13"}
![image](fig/chap3/Fig3.13b.pdf){width="72mm"}
**Figure 3.13:** Effect of `(set/car! x y)` on the lists in [Figure
3.12](#Figure 3.12).
[]{#Figure 3.14 label="Figure 3.14"}
![image](fig/chap3/Fig3.14b.pdf){width="72mm"}
> **Figure 3.14:** Effect of `(define z (cons y (cdr x)))` on the lists
> in [Figure 3.12](#Figure 3.12).
[]{#Figure 3.15 label="Figure 3.15"}
![image](fig/chap3/Fig3.15b.pdf){width="72mm"}
**Figure 3.15:** Effect of `(set/cdr! x y)` on the lists in [Figure
3.12](#Figure 3.12).
The `set/cdr!` operation is similar to `set/car!`. The only difference
is that the `cdr` pointer of the pair, rather than the `car` pointer, is
replaced. The effect of executing `(set/cdr! x y)` on the lists of
[Figure 3.12](#Figure 3.12) is shown in [Figure 3.15](#Figure 3.15).
Here the `cdr` pointer of `x` has been replaced by the pointer to
`(e f)`. Also, the list `(c d)`, which used to be the `cdr` of `x`, is
now detached from the structure.
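As a concrete check, here is a short interaction consistent with [Figure
3.13](#Figure 3.13) and [Figure 3.15](#Figure 3.15) (a sketch; each
mutation starts from freshly defined lists so the two effects do not
interfere):

::: scheme
(define x (list (list 'a 'b) 'c 'd))
(define y (list 'e 'f))

(set-car! x y)        ; replace the car of x by y, as in Figure 3.13
x                     ; ((e f) c d)

(define x (list (list 'a 'b) 'c 'd))   ; start again from Figure 3.12
(set-cdr! x y)        ; replace the cdr of x by y, as in Figure 3.15
x                     ; ((a b) e f)
:::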
`cons` builds new list structure by creating new pairs, while `set/car!`
and `set/cdr!` modify existing pairs. Indeed, we could implement `cons`
in terms of the two mutators, together with a procedure `get/new/pair`,
which returns a new pair that is not part of any existing list
structure. We obtain the new pair, set its `car` and `cdr` pointers to
the designated objects, and return the new pair as the result of the
`cons`.[^146]
::: scheme
(define (cons x y) (let ((new (get-new-pair))) (set-car! new x)
(set-cdr! new y) new))
:::
> **[]{#Exercise 3.12 label="Exercise 3.12"}Exercise 3.12:** The
> following procedure for appending lists was introduced in [Section
> 2.2.1](#Section 2.2.1):
>
> ::: scheme
> (define (append x y) (if (null? x) y (cons (car x) (append (cdr x)
> y))))
> :::
>
> `append` forms a new list by successively `cons`ing the elements of
> `x` onto `y`. The procedure `append!` is similar to `append`, but it
> is a mutator rather than a constructor. It appends the lists by
> splicing them together, modifying the final pair of `x` so that its
> `cdr` is now `y`. (It is an error to call `append!` with an empty
> `x`.)
>
> ::: scheme
> (define (append! x y) (set-cdr! (last-pair x) y) x)
> :::
>
> Here `last/pair` is a procedure that returns the last pair in its
> argument:
>
> ::: scheme
> (define (last-pair x) (if (null? (cdr x)) x (last-pair (cdr x))))
> :::
>
> Consider the interaction
>
> ::: scheme
> (define x (list 'a 'b)) (define y (list 'c 'd)) (define z (append x
> y)) z *(a b c d)* (cdr x)
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> (define w (append! x y)) w *(a b c d)* (cdr x)
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> :::
>
> What are the missing $\langle$*response*$\rangle$s? Draw box-and-pointer
> diagrams to explain your answer.
> **[]{#Exercise 3.13 label="Exercise 3.13"}Exercise 3.13:** Consider
> the following `make/cycle` procedure, which uses the `last/pair`
> procedure defined in [Exercise 3.12](#Exercise 3.12):
>
> ::: scheme
> (define (make-cycle x) (set-cdr! (last-pair x) x) x)
> :::
>
> Draw a box-and-pointer diagram that shows the structure `z` created by
>
> ::: scheme
> (define z (make-cycle (list 'a 'b 'c)))
> :::
>
> What happens if we try to compute `(last/pair z)`?
> **[]{#Exercise 3.14 label="Exercise 3.14"}Exercise 3.14:** The
> following procedure is quite useful, although obscure:
>
> ::: scheme
> (define (mystery x) (define (loop x y) (if (null? x) y (let ((temp
> (cdr x))) (set-cdr! x y) (loop temp x)))) (loop x '()))
> :::
>
> `loop` uses the "temporary" variable `temp` to hold the old value of
> the `cdr` of `x`, since the `set/cdr!` on the next line destroys the
> `cdr`. Explain what `mystery` does in general. Suppose `v` is defined
> by `(define v (list 'a 'b 'c 'd))`. Draw the box-and-pointer diagram
> that represents the list to which `v` is bound. Suppose that we now
> evaluate `(define w (mystery v))`. Draw box-and-pointer diagrams that
> show the structures `v` and `w` after evaluating this expression. What
> would be printed as the values of `v` and `w`?
#### Sharing and identity {#sharing-and-identity .unnumbered}
We mentioned in [Section 3.1.3](#Section 3.1.3) the theoretical issues
of "sameness" and "change" raised by the introduction of assignment.
These issues arise in practice when individual pairs are *shared* among
different data objects. For example, consider the structure formed by
::: scheme
(define x (list 'a 'b)) (define z1 (cons x x))
:::
As shown in [Figure 3.16](#Figure 3.16), `z1` is a pair whose `car` and
`cdr` both point to the same pair `x`. This sharing of `x` by the `car`
and `cdr` of `z1` is a consequence of the straightforward way in which
`cons` is implemented. In general, using `cons` to construct lists will
result in an interlinked structure of pairs in which many individual
pairs are shared by many different structures.
In contrast to [Figure 3.16](#Figure 3.16), [Figure 3.17](#Figure 3.17)
shows the structure created by
::: scheme
(define z2 (cons (list 'a 'b) (list 'a 'b)))
:::
In this structure, the pairs in the two `(a b)` lists are distinct,
although the actual symbols are shared.[^147]
[]{#Figure 3.16 label="Figure 3.16"}
![image](fig/chap3/Fig3.16b.pdf){width="46mm"}
> **Figure 3.16:** The list `z1` formed by `(cons x x)`.
[]{#Figure 3.17 label="Figure 3.17"}
![image](fig/chap3/Fig3.17b.pdf){width="71mm"}
> **Figure 3.17:** The list `z2` formed by
> `(cons (list 'a 'b) (list 'a 'b))`.
When thought of as a list, `z1` and `z2` both represent "the same" list,
`((a b) a b)`. In general, sharing is completely undetectable if we
operate on lists using only `cons`, `car`, and `cdr`. However, if we
allow mutators on list structure, sharing becomes significant. As an
example of the difference that sharing can make, consider the following
procedure, which modifies the `car` of the structure to which it is
applied:
::: scheme
(define (set-to-wow! x) (set-car! (car x) 'wow) x)
:::
Even though `z1` and `z2` are "the same" structure, applying
`set/to/wow!` to them yields different results. With `z1`, altering the
`car` also changes the `cdr`, because in `z1` the `car` and the `cdr`
are the same pair. With `z2`, the `car` and `cdr` are distinct, so
`set/to/wow!` modifies only the `car`:
::: scheme
z1 *((a b) a b)* (set-to-wow! z1) *((wow b) wow b)* z2 *((a b) a
b)* (set-to-wow! z2) *((wow b) a b)*
:::
One way to detect sharing in list structures is to use the predicate
`eq?`, which we introduced in [Section 2.3.1](#Section 2.3.1) as a way
to test whether two symbols are equal. More generally, `(eq? x y)` tests
whether `x` and `y` are the same object (that is, whether `x` and `y`
are equal as pointers). Thus, with `z1` and `z2` as defined in [Figure
3.16](#Figure 3.16) and [Figure 3.17](#Figure 3.17),
`(eq? (car z1) (cdr z1))` is true and `(eq? (car z2) (cdr z2))` is
false.
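Stated as a transcript (with `z1` and `z2` as originally constructed in
[Figure 3.16](#Figure 3.16) and [Figure 3.17](#Figure 3.17), before
`set/to/wow!` is applied), and contrasting `eq?` with the
element-by-element test `equal?`:

::: scheme
(eq? (car z1) (cdr z1))      ; true -- the car and cdr of z1 are the same pair
(eq? (car z2) (cdr z2))      ; false -- two distinct pairs
(equal? (car z2) (cdr z2))   ; true -- but their elements are the same
:::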
As will be seen in the following sections, we can exploit sharing to
greatly extend the repertoire of data structures that can be represented
by pairs. On the other hand, sharing can also be dangerous, since
modifications made to structures will also affect other structures that
happen to share the modified parts. The mutation operations `set/car!`
and `set/cdr!` should be used with care; unless we have a good
understanding of how our data objects are shared, mutation can have
unanticipated results.[^148]
> **[]{#Exercise 3.15 label="Exercise 3.15"}Exercise 3.15:** Draw
> box-and-pointer diagrams to explain the effect of `set/to/wow!` on the
> structures `z1` and `z2` above.
> **[]{#Exercise 3.16 label="Exercise 3.16"}Exercise 3.16:** Ben
> Bitdiddle decides to write a procedure to count the number of pairs in
> any list structure. "It's easy," he reasons. "The number of pairs in
> any structure is the number in the `car` plus the number in the `cdr`
> plus one more to count the current pair." So Ben writes the following
> procedure:
>
> ::: scheme
> (define (count-pairs x) (if (not (pair? x)) 0 (+ (count-pairs (car x))
> (count-pairs (cdr x)) 1)))
> :::
>
> Show that this procedure is not correct. In particular, draw
> box-and-pointer diagrams representing list structures made up of
> exactly three pairs for which Ben's procedure would return 3; return
> 4; return 7; never return at all.
> **[]{#Exercise 3.17 label="Exercise 3.17"}Exercise 3.17:** Devise a
> correct version of the `count/pairs` procedure of [Exercise
> 3.16](#Exercise 3.16) that returns the number of distinct pairs in any
> structure. (Hint: Traverse the structure, maintaining an auxiliary
> data structure that is used to keep track of which pairs have already
> been counted.)
> **[]{#Exercise 3.18 label="Exercise 3.18"}Exercise 3.18:** Write a
> procedure that examines a list and determines whether it contains a
> cycle, that is, whether a program that tried to find the end of the
> list by taking successive `cdr`s would go into an infinite loop.
> [Exercise 3.13](#Exercise 3.13) constructed such lists.
> **[]{#Exercise 3.19 label="Exercise 3.19"}Exercise 3.19:** Redo
> [Exercise 3.18](#Exercise 3.18) using an algorithm that takes only a
> constant amount of space. (This requires a very clever idea.)
#### Mutation is just assignment {#mutation-is-just-assignment .unnumbered}
When we introduced compound data, we observed in [Section
2.1.3](#Section 2.1.3) that pairs can be represented purely in terms of
procedures:
::: scheme
(define (cons x y) (define (dispatch m) (cond ((eq? m 'car) x) ((eq? m
'cdr) y) (else (error \"Undefined operation: CONS\" m)))) dispatch)
(define (car z) (z 'car)) (define (cdr z) (z 'cdr))
:::
The same observation is true for mutable data. We can implement mutable
data objects as procedures using assignment and local state. For
instance, we can extend the above pair implementation to handle
`set/car!` and `set/cdr!` in a manner analogous to the way we
implemented bank accounts using `make/account` in [Section
3.1.1](#Section 3.1.1):
::: scheme
(define (cons x y) (define (set-x! v) (set! x v)) (define (set-y! v)
(set! y v)) (define (dispatch m) (cond ((eq? m 'car) x) ((eq? m 'cdr) y)
((eq? m 'set-car!) set-x!) ((eq? m 'set-cdr!) set-y!) (else (error
\"Undefined operation: CONS\" m)))) dispatch) (define (car z) (z 'car))
(define (cdr z) (z 'cdr)) (define (set-car! z new-value) ((z 'set-car!)
new-value) z) (define (set-cdr! z new-value) ((z 'set-cdr!) new-value)
z)
:::
Assignment is all that is needed, theoretically, to account for the
behavior of mutable data. As soon as we admit `set!` to our language, we
raise all the issues, not only of assignment, but of mutable data in
general.[^149]
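As a quick sketch of this procedural representation in use (assuming the
definitions above are in effect, shadowing the primitive pair
operations):

::: scheme
(define p (cons 1 2))   ; p is a dispatch procedure with local x = 1, y = 2
(car p)                 ; 1
(set-car! p 'a)         ; invokes the internal set-x!, i.e. (set! x 'a)
(car p)                 ; a
(cdr p)                 ; 2 -- unchanged
:::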
> **[]{#Exercise 3.20 label="Exercise 3.20"}Exercise 3.20:** Draw
> environment diagrams to illustrate the evaluation of the sequence of
> expressions
>
> ::: scheme
> (define x (cons 1 2)) (define z (cons x x)) (set-car! (cdr z) 17) (car
> x) *17*
> :::
>
> using the procedural implementation of pairs given above. (Compare
> [Exercise 3.11](#Exercise 3.11).)
### Representing Queues {#Section 3.3.2}
The mutators `set/car!` and `set/cdr!` enable us to use pairs to
construct data structures that cannot be built with `cons`, `car`, and
`cdr` alone. This section shows how to use pairs to represent a data
structure called a queue. [Section 3.3.3](#Section 3.3.3) will show how
to represent data structures called tables.
A *queue* is a sequence in which items are inserted at one end (called
the *rear* of the queue) and deleted from the other end (the *front*).
[Figure 3.18](#Figure 3.18) shows an initially empty queue in which the
items `a` and `b` are inserted. Then `a` is removed, `c` and `d` are
inserted, and `b` is removed. Because items are always removed in the
order in which they are inserted, a queue is sometimes called a *FIFO*
(first in, first out) buffer.
[]{#Figure 3.18 label="Figure 3.18"}
![image](fig/chap3/Fig3.18a.pdf){width="70mm"}
**Figure 3.18:** Queue operations.
In terms of data abstraction, we can regard a queue as defined by the
following set of operations (a brief usage sketch follows the list):
- a constructor: `(make/queue)` returns an empty queue (a queue
containing no items).
- two selectors:
`(empty/queue? `$\langle$*`queue`*$\rangle$`)` tests if the queue is
empty.
`(front/queue `$\langle$*`queue`*$\rangle$`)` returns the object at
the front of the queue, signaling an error if the queue is empty; it
does not modify the queue.
- two mutators:
`(insert/queue! `$\langle$*`queue`*$\rangle$` `$\langle$*`item`*$\rangle$`)`
inserts the item at the rear of the queue and returns the modified
queue as its value.
`(delete/queue! `$\langle$*`queue`*$\rangle$`)` removes the item at
the front of the queue and returns the modified queue as its value,
signaling an error if the queue is empty before the deletion.
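In terms of these operations alone, and independent of any particular
representation, a brief interaction might look like this (a sketch; the
results shown follow the behavior of [Figure 3.18](#Figure 3.18)):

::: scheme
(define q (make-queue))
(insert-queue! q 'a)   ; queue now holds a
(insert-queue! q 'b)   ; queue now holds a b
(front-queue q)        ; a
(delete-queue! q)      ; queue now holds b
(front-queue q)        ; b
:::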
Because a queue is a sequence of items, we could certainly represent it
as an ordinary list; the front of the queue would be the `car` of the
list, inserting an item in the queue would amount to appending a new
element at the end of the list, and deleting an item from the queue
would just be taking the `cdr` of the list. However, this representation
is inefficient, because in order to insert an item we must scan the list
until we reach the end. Since the only method we have for scanning a
list is by successive `cdr` operations, this scanning requires
$\Theta(n)$ steps for a list of $n$ items. A simple modification to the
list representation overcomes this disadvantage by allowing the queue
operations to be implemented so that they require $\Theta(1)$ steps;
that is, so that the number of steps needed is independent of the length
of the queue.
The difficulty with the list representation arises from the need to scan
to find the end of the list. The reason we need to scan is that,
although the standard way of representing a list as a chain of pairs
readily provides us with a pointer to the beginning of the list, it
gives us no easily accessible pointer to the end. The modification that
avoids the drawback is to represent the queue as a list, together with
an additional pointer that indicates the final pair in the list. That
way, when we go to insert an item, we can consult the rear pointer and
so avoid scanning the list.
A queue is represented, then, as a pair of pointers, `front/ptr` and
`rear/ptr`, which indicate, respectively, the first and last pairs in an
ordinary list. Since we would like the queue to be an identifiable
object, we can use `cons` to combine the two pointers. Thus, the queue
itself will be the `cons` of the two pointers. [Figure
3.19](#Figure 3.19) illustrates this representation.
[]{#Figure 3.19 label="Figure 3.19"}
![image](fig/chap3/Fig3.19b.pdf){width="69mm"}
> **Figure 3.19:** Implementation of a queue as a list with front and
> rear pointers.
To define the queue operations we use the following procedures, which
enable us to select and to modify the front and rear pointers of a
queue:
::: scheme
(define (front-ptr queue) (car queue)) (define (rear-ptr queue) (cdr
queue)) (define (set-front-ptr! queue item) (set-car! queue item))
(define (set-rear-ptr! queue item) (set-cdr! queue item))
:::
Now we can implement the actual queue operations. We will consider a
queue to be empty if its front pointer is the empty list:
::: scheme
(define (empty-queue? queue) (null? (front-ptr queue)))
:::
The `make/queue` constructor returns, as an initially empty queue, a
pair whose `car` and `cdr` are both the empty list:
::: scheme
(define (make-queue) (cons '() '()))
:::
To select the item at the front of the queue, we return the `car` of the
pair indicated by the front pointer:
::: scheme
(define (front-queue queue) (if (empty-queue? queue) (error \"FRONT
called with an empty queue\" queue) (car (front-ptr queue))))
:::
[]{#Figure 3.20 label="Figure 3.20"}
![image](fig/chap3/Fig3.20b.pdf){width="88mm"}
> **Figure 3.20:** Result of using `(insert/queue! q 'd)` on the queue
> of [Figure 3.19](#Figure 3.19).
To insert an item in a queue, we follow the method whose result is
indicated in [Figure 3.20](#Figure 3.20). We first create a new pair
whose `car` is the item to be inserted and whose `cdr` is the empty
list. If the queue was initially empty, we set the front and rear
pointers of the queue to this new pair. Otherwise, we modify the final
pair in the queue to point to the new pair, and also set the rear
pointer to the new pair.
::: scheme
(define (insert-queue! queue item) (let ((new-pair (cons item '())))
(cond ((empty-queue? queue) (set-front-ptr! queue new-pair)
(set-rear-ptr! queue new-pair) queue) (else (set-cdr! (rear-ptr queue)
new-pair) (set-rear-ptr! queue new-pair) queue))))
:::
[]{#Figure 3.21 label="Figure 3.21"}
![image](fig/chap3/Fig3.21b.pdf){width="88mm"}
> **Figure 3.21:** Result of using `(delete/queue! q)` on the queue of
> [Figure 3.20](#Figure 3.20).
To delete the item at the front of the queue, we merely modify the front
pointer so that it now points at the second item in the queue, which can
be found by following the `cdr` pointer of the first item (see [Figure
3.21](#Figure 3.21)):[^150]
::: scheme
(define (delete-queue! queue) (cond ((empty-queue? queue) (error
\"DELETE! called with an empty queue\" queue)) (else (set-front-ptr!
queue (cdr (front-ptr queue))) queue)))
:::
> **[]{#Exercise 3.21 label="Exercise 3.21"}Exercise 3.21:** Ben
> Bitdiddle decides to test the queue implementation described above. He
> types in the procedures to the Lisp interpreter and proceeds to try
> them out:
>
> ::: scheme
> (define q1 (make-queue)) (insert-queue! q1 'a) *((a) a)*
> (insert-queue! q1 'b) *((a b) b)* (delete-queue! q1) *((b) b)*
> (delete-queue! q1) *(() b)*
> :::
>
> "It's all wrong!" he complains. "The interpreter's response shows that
> the last item is inserted into the queue twice. And when I delete both
> items, the second `b` is still there, so the queue isn't empty, even
> though it's supposed to be." Eva Lu Ator suggests that Ben has
> misunderstood what is happening. "It's not that the items are going
> into the queue twice," she explains. "It's just that the standard Lisp
> printer doesn't know how to make sense of the queue representation. If
> you want to see the queue printed correctly, you'll have to define
> your own print procedure for queues." Explain what Eva Lu is talking
> about. In particular, show why Ben's examples produce the printed
> results that they do. Define a procedure `print/queue` that takes a
> queue as input and prints the sequence of items in the queue.
> **[]{#Exercise 3.22 label="Exercise 3.22"}Exercise 3.22:** Instead of
> representing a queue as a pair of pointers, we can build a queue as a
> procedure with local state. The local state will consist of pointers
> to the beginning and the end of an ordinary list. Thus, the
> `make/queue` procedure will have the form
>
> ::: scheme
> (define (make-queue) (let ((front-ptr $\dots$ ) (rear-ptr $\dots$
> )) $\color{SchemeDark}\langle$ *definitions of internal
> procedures* $\color{SchemeDark}\rangle$ (define (dispatch m)
> $\dots$ ) dispatch))
> :::
>
> Complete the definition of `make/queue` and provide implementations of
> the queue operations using this representation.
> **[]{#Exercise 3.23 label="Exercise 3.23"}Exercise 3.23:** A *deque*
> ("double-ended queue") is a sequence in which items can be inserted
> and deleted at either the front or the rear. Operations on deques are
> the constructor `make/deque`, the predicate `empty/deque?`, selectors
> `front/deque` and `rear/deque`, mutators `front/insert/deque!`,
> `rear/insert/deque!`, `front/delete/deque!`, and `rear/delete/deque!`.
> Show how to represent deques using pairs, and give implementations of
> the operations.[^151] All operations should be accomplished in
> $\Theta(1)$ steps.
### Representing Tables {#Section 3.3.3}
When we studied various ways of representing sets in [Chapter
2](#Chapter 2), we mentioned in [Section 2.3.3](#Section 2.3.3) the task
of maintaining a table of records indexed by identifying keys. In the
implementation of data-directed programming in [Section
2.4.3](#Section 2.4.3), we made extensive use of two-dimensional tables,
in which information is stored and retrieved using two keys. Here we see
how to build tables as mutable list structures.
[]{#Figure 3.22 label="Figure 3.22"}
![image](fig/chap3/Fig3.22c.pdf){width="81mm"}
**Figure 3.22:** A table represented as a headed list.
We first consider a one-dimensional table, in which each value is stored
under a single key. We implement the table as a list of records, each of
which is implemented as a pair consisting of a key and the associated
value. The records are glued together to form a list by pairs whose
`car`s point to successive records. These gluing pairs are called the
*backbone* of the table. In order to have a place that we can change
when we add a new record to the table, we build the table as a *headed
list*. A headed list has a special backbone pair at the beginning, which
holds a dummy "record"---in this case the arbitrarily chosen symbol
`*table*`. [Figure 3.22](#Figure 3.22) shows the box-and-pointer diagram
for the table
::: scheme
a: 1 b: 2 c: 3
:::
To extract information from a table we use the `lookup` procedure, which
takes a key as argument and returns the associated value (or false if
there is no value stored under that key). `lookup` is defined in terms
of the `assoc` operation, which expects a key and a list of records as
arguments. Note that `assoc` never sees the dummy record. `assoc`
returns the record that has the given key as its `car`.[^152] `lookup`
then checks to see that the resulting record returned by `assoc` is not
false, and returns the value (the `cdr`) of the record.
::: scheme
(define (lookup key table) (let ((record (assoc key (cdr table)))) (if
record (cdr record) false))) (define (assoc key records) (cond ((null?
records) false) ((equal? key (caar records)) (car records)) (else (assoc
key (cdr records)))))
:::
To insert a value in a table under a specified key, we first use `assoc`
to see if there is already a record in the table with this key. If not,
we form a new record by `cons`ing the key with the value, and insert
this at the head of the table's list of records, after the dummy record.
If there already is a record with this key, we set the `cdr` of this
record to the designated new value. The header of the table provides us
with a fixed location to modify in order to insert the new record.[^153]
::: scheme
(define (insert! key value table) (let ((record (assoc key (cdr
table)))) (if record (set-cdr! record value) (set-cdr! table (cons (cons
key value) (cdr table))))) 'ok)
:::
To construct a new table, we simply create a list containing the symbol
`*table*`:
::: scheme
(define (make-table) (list '\*table\*))
:::
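Putting `make/table`, `insert!`, and `lookup` together, a minimal sketch
that builds the table of [Figure 3.22](#Figure 3.22):

::: scheme
(define t (make-table))
(insert! 'a 1 t)   ; ok
(insert! 'b 2 t)   ; ok
(insert! 'c 3 t)   ; ok
(lookup 'b t)      ; 2
(lookup 'd t)      ; false -- no record stored under the key d
:::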
#### Two-dimensional tables {#two-dimensional-tables .unnumbered}
In a two-dimensional table, each value is indexed by two keys. We can
construct such a table as a one-dimensional table in which each key
identifies a subtable. [Figure 3.23](#Figure 3.23) shows the
box-and-pointer diagram for the table
math: +: 43, -: 45, \*: 42; letters: a: 97, b: 98
which has two subtables. (The subtables don't need a special header
symbol, since the key that identifies the subtable serves this purpose.)
When we look up an item, we use the first key to identify the correct
subtable. Then we use the second key to identify the record within the
subtable.
::: scheme
(define (lookup key-1 key-2 table) (let ((subtable (assoc key-1 (cdr
table)))) (if subtable (let ((record (assoc key-2 (cdr subtable)))) (if
record (cdr record) false)) false)))
:::
[]{#Figure 3.23 label="Figure 3.23"}
![image](fig/chap3/Fig3.23a.pdf){width="103mm"}
**Figure 3.23:** A two-dimensional table.
To insert a new item under a pair of keys, we use `assoc` to see if
there is a subtable stored under the first key. If not, we build a new
subtable containing the single record (`key/2`, `value`) and insert it
into the table under the first key. If a subtable already exists for the
first key, we insert the new record into this subtable, using the
insertion method for one-dimensional tables described above:
::: scheme
(define (insert! key-1 key-2 value table) (let ((subtable (assoc key-1
(cdr table)))) (if subtable (let ((record (assoc key-2 (cdr subtable))))
(if record (set-cdr! record value) (set-cdr! subtable (cons (cons key-2
value) (cdr subtable))))) (set-cdr! table (cons (list key-1 (cons key-2
value)) (cdr table))))) 'ok)
:::
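A sketch of the two-dimensional operations in use, reproducing part of
the table of [Figure 3.23](#Figure 3.23):

::: scheme
(define t (make-table))
(insert! 'math '+ 43 t)      ; ok
(insert! 'math '- 45 t)      ; ok
(insert! 'letters 'a 97 t)   ; ok
(lookup 'math '+ t)          ; 43
(lookup 'letters 'b t)       ; false -- no record under that pair of keys
:::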
#### Creating local tables {#creating-local-tables .unnumbered}
The `lookup` and `insert!` operations defined above take the table as an
argument. This enables us to use programs that access more than one
table. Another way to deal with multiple tables is to have separate
`lookup` and `insert!` procedures for each table. We can do this by
representing a table procedurally, as an object that maintains an
internal table as part of its local state. When sent an appropriate
message, this "table object" supplies the procedure with which to
operate on the internal table. Here is a generator for two-dimensional
tables represented in this fashion:
::: scheme
(define (make-table) (let ((local-table (list '\*table\*))) (define
(lookup key-1 key-2) (let ((subtable (assoc key-1 (cdr local-table))))
(if subtable (let ((record (assoc key-2 (cdr subtable)))) (if record
(cdr record) false)) false))) (define (insert! key-1 key-2 value) (let
((subtable (assoc key-1 (cdr local-table)))) (if subtable (let ((record
(assoc key-2 (cdr subtable)))) (if record (set-cdr! record value)
(set-cdr! subtable (cons (cons key-2 value) (cdr subtable))))) (set-cdr!
local-table (cons (list key-1 (cons key-2 value)) (cdr local-table)))))
'ok) (define (dispatch m) (cond ((eq? m 'lookup-proc) lookup) ((eq? m
'insert-proc!) insert!) (else (error \"Unknown operation: TABLE\" m))))
dispatch))
:::
Using `make/table`, we could implement the `get` and `put` operations
used in [Section 2.4.3](#Section 2.4.3) for data-directed programming,
as follows:
::: scheme
(define operation-table (make-table)) (define get (operation-table
'lookup-proc)) (define put (operation-table 'insert-proc!))
:::
`get` takes as arguments two keys, and `put` takes as arguments two keys
and a value. Both operations access the same local table, which is
encapsulated within the object created by the call to `make/table`.
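For instance, reusing keys from [Figure 3.23](#Figure 3.23), a brief
sketch:

::: scheme
(put 'math '+ 43)   ; ok
(get 'math '+)      ; 43
(get 'math '*)      ; false -- nothing has been stored under these keys
:::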
> **[]{#Exercise 3.24 label="Exercise 3.24"}Exercise 3.24:** In the
> table implementations above, the keys are tested for equality using
> `equal?` (called by `assoc`). This is not always the appropriate test.
> For instance, we might have a table with numeric keys in which we
> don't need an exact match to the number we're looking up, but only a
> number within some tolerance of it. Design a table constructor
> `make/table` that takes as an argument a `same/key?` procedure that
> will be used to test "equality" of keys. `make/table` should return a
> `dispatch` procedure that can be used to access appropriate `lookup`
> and `insert!` procedures for a local table.
> **[]{#Exercise 3.25 label="Exercise 3.25"}Exercise 3.25:**
> Generalizing one- and two-dimensional tables, show how to implement a
> table in which values are stored under an arbitrary number of keys and
> different values may be stored under different numbers of keys. The
> `lookup` and `insert!` procedures should take as input a list of keys
> used to access the table.
> **[]{#Exercise 3.26 label="Exercise 3.26"}Exercise 3.26:** To search a
> table as implemented above, one needs to scan through the list of
> records. This is basically the unordered list representation of
> [Section 2.3.3](#Section 2.3.3). For large tables, it may be more
> efficient to structure the table in a different manner. Describe a
> table implementation where the (key, value) records are organized
> using a binary tree, assuming that keys can be ordered in some way
> (e.g., numerically or alphabetically). (Compare [Exercise
> 2.66](#Exercise 2.66) of [Chapter 2](#Chapter 2).)
> **[]{#Exercise 3.27 label="Exercise 3.27"}Exercise 3.27:**
> *Memoization* (also called *tabulation*) is a technique that enables a
> procedure to record, in a local table, values that have previously
> been computed. This technique can make a vast difference in the
> performance of a program. A memoized procedure maintains a table in
> which values of previous calls are stored using as keys the arguments
> that produced the values. When the memoized procedure is asked to
> compute a value, it first checks the table to see if the value is
> already there and, if so, just returns that value. Otherwise, it
> computes the new value in the ordinary way and stores this in the
> table. As an example of memoization, recall from [Section
> 1.2.2](#Section 1.2.2) the exponential process for computing Fibonacci
> numbers:
>
> ::: scheme
> (define (fib n) (cond ((= n 0) 0) ((= n 1) 1) (else (+ (fib (- n 1))
> (fib (- n 2))))))
> :::
>
> The memoized version of the same procedure is
>
> ::: scheme
> (define memo-fib (memoize (lambda (n) (cond ((= n 0) 0) ((= n 1) 1)
> (else (+ (memo-fib (- n 1)) (memo-fib (- n 2))))))))
> :::
>
> where the memoizer is defined as
>
> ::: scheme
> (define (memoize f) (let ((table (make-table))) (lambda (x) (let
> ((previously-computed-result (lookup x table))) (or
> previously-computed-result (let ((result (f x))) (insert! x result
> table) result))))))
> :::
>
> Draw an environment diagram to analyze the computation of
> `(memo/fib 3)`. Explain why `memo/fib` computes the $n^{\mathrm{th}}$
> Fibonacci number in a number of steps proportional to $n$. Would the
> scheme still work if we had simply defined `memo/fib` to be
> `(memoize fib)`?
### A Simulator for Digital Circuits {#Section 3.3.4}
Designing complex digital systems, such as computers, is an important
engineering activity. Digital systems are constructed by interconnecting
simple elements. Although the behavior of these individual elements is
simple, networks of them can have very complex behavior. Computer
simulation of proposed circuit designs is an important tool used by
digital systems engineers. In this section we design a system for
performing digital logic simulations. This system typifies a kind of
program called an *event-driven simulation*, in which actions ("events")
trigger further events that happen at a later time, which in turn
trigger more events, and so on.
Our computational model of a circuit will be composed of objects that
correspond to the elementary components from which the circuit is
constructed. There are *wires*, which carry *digital signals*. A digital
signal may at any moment have only one of two possible values, 0 and 1.
There are also various types of digital *function boxes*, which connect
wires carrying input signals to other output wires. Such boxes produce
output signals computed from their input signals. The output signal is
delayed by a time that depends on the type of the function box. For
example, an *inverter* is a primitive function box that inverts its
input. If the input signal to an inverter changes to 0, then one
inverter-delay later the inverter will change its output signal to 1. If
the input signal to an inverter changes to 1, then one inverter-delay
later the inverter will change its output signal to 0. We draw an
inverter symbolically as in [Figure 3.24](#Figure 3.24). An *and-gate*,
also shown in [Figure 3.24](#Figure 3.24), is a primitive function box
with two inputs and one output. It drives its output signal to a value
that is the *logical and* of the inputs. That is, if both of its input
signals become 1, then one and-gate-delay time later the and-gate will
force its output signal to be 1; otherwise the output will be 0. An
*or-gate* is a similar two-input primitive function box that drives its
output signal to a value that is the *logical or* of the inputs. That
is, the output will become 1 if at least one of the input signals is 1;
otherwise the output will become 0.
[]{#Figure 3.24 label="Figure 3.24"}
![image](fig/chap3/Fig3.24b.pdf){width="74mm"}
**Figure 3.24:** Primitive functions in the digital logic simulator.
We can connect primitive functions together to construct more complex
functions. To accomplish this we wire the outputs of some function boxes
to the inputs of other function boxes. For example, the *half-adder*
circuit shown in [Figure 3.25](#Figure 3.25) consists of an or-gate, two
and-gates, and an inverter. It takes two input signals, A and B, and has
two output signals, S and C. S will become 1 whenever precisely one of A
and B is 1, and C will become 1 whenever A and B are both 1. We can see
from the figure that, because of the delays involved, the outputs may be
generated at different times. Many of the difficulties in the design of
digital circuits arise from this fact.
[]{#Figure 3.25 label="Figure 3.25"}
![image](fig/chap3/Fig3.25c.pdf){width="72mm"}
**Figure 3.25:** A half-adder circuit.
We will now build a program for modeling the digital logic circuits we
wish to study. The program will construct computational objects modeling
the wires, which will "hold" the signals. Function boxes will be modeled
by procedures that enforce the correct relationships among the signals.
One basic element of our simulation will be a procedure `make/wire`,
which constructs wires. For example, we can construct six wires as
follows:
::: scheme
(define a (make-wire)) (define b (make-wire)) (define c (make-wire))
(define d (make-wire)) (define e (make-wire)) (define s (make-wire))
:::
We attach a function box to a set of wires by calling a procedure that
constructs that kind of box. The arguments to the constructor procedure
are the wires to be attached to the box. For example, given that we can
construct and-gates, or-gates, and inverters, we can wire together the
half-adder shown in [Figure 3.25](#Figure 3.25):
::: scheme
(or-gate a b d) *ok* (and-gate a b c) *ok* (inverter c e) *ok*
(and-gate d e s) *ok*
:::
Better yet, we can explicitly name this operation by defining a
procedure `half/adder` that constructs this circuit, given the four
external wires to be attached to the half-adder:
::: scheme
(define (half-adder a b s c) (let ((d (make-wire)) (e (make-wire)))
(or-gate a b d) (and-gate a b c) (inverter c e) (and-gate d e s) 'ok))
:::
The advantage of making this definition is that we can use `half/adder`
itself as a building block in creating more complex circuits. [Figure
3.26](#Figure 3.26), for example, shows a *full-adder* composed of two
half-adders and an or-gate.[^154] We can construct a full-adder as
follows:
::: scheme
(define (full-adder a b c-in sum c-out) (let ((s (make-wire)) (c1
(make-wire)) (c2 (make-wire))) (half-adder b c-in s c1) (half-adder a s
sum c2) (or-gate c1 c2 c-out) 'ok))
:::
[]{#Figure 3.26 label="Figure 3.26"}
![image](fig/chap3/Fig3.26a.pdf){width="74mm"}
**Figure 3.26:** A full-adder circuit.
Having defined `full/adder` as a procedure, we can now use it as a
building block for creating still more complex circuits. (For example,
see [Exercise 3.30](#Exercise 3.30).)
In essence, our simulator provides us with the tools to construct a
language of circuits. If we adopt the general perspective on languages
with which we approached the study of Lisp in [Section
1.1](#Section 1.1), we can say that the primitive function boxes form
the primitive elements of the language, that wiring boxes together
provides a means of combination, and that specifying wiring patterns as
procedures serves as a means of abstraction.
#### Primitive function boxes {#primitive-function-boxes .unnumbered}
The primitive function boxes implement the "forces" by which a change in
the signal on one wire influences the signals on other wires. To build
function boxes, we use the following operations on wires:
- `(get/signal `$\langle$*`wire`*$\rangle$`)` returns the current value
  of the signal on the wire.
- `(set/signal! `$\langle$*`wire`*$\rangle$` `$\langle$*`new value`*$\rangle$`)`
  changes the value of the signal on the wire to the new value.
- `(add/action! `$\langle$*`wire`*$\rangle$` `$\langle$*`procedure of no arguments`*$\rangle$`)`
  asserts that the designated procedure should be run whenever the
  signal on the wire changes value. Such procedures are the vehicles
  by which changes in the signal value on the wire are communicated to
  other wires.
In addition, we will make use of a procedure `after/delay` that takes a
time delay and a procedure to be run and executes the given procedure
after the given delay.
Using these procedures, we can define the primitive digital logic
functions. To connect an input to an output through an inverter, we use
`add/action!` to associate with the input wire a procedure that will be
run whenever the signal on the input wire changes value. The procedure
computes the `logical/not` of the input signal, and then, after one
`inverter/delay`, sets the output signal to be this new value:
::: scheme
(define (inverter input output) (define (invert-input) (let ((new-value
(logical-not (get-signal input)))) (after-delay inverter-delay (lambda
() (set-signal! output new-value))))) (add-action! input invert-input)
'ok) (define (logical-not s) (cond ((= s 0) 1) ((= s 1) 0) (else (error
\"Invalid signal\" s))))
:::
An and-gate is a little more complex. The action procedure must be run
if either of the inputs to the gate changes. It computes the
`logical/and` (using a procedure analogous to `logical/not`) of the
values of the signals on the input wires and sets up a change to the new
value to occur on the output wire after one `and/gate/delay`.
::: scheme
(define (and-gate a1 a2 output) (define (and-action-procedure) (let
((new-value (logical-and (get-signal a1) (get-signal a2)))) (after-delay
and-gate-delay (lambda () (set-signal! output new-value)))))
(add-action! a1 and-action-procedure) (add-action! a2
and-action-procedure) 'ok)
:::
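The text leaves `logical/and` unspecified ("a procedure analogous to
`logical/not`"); a minimal sketch consistent with that description is:

::: scheme
(define (logical-and s1 s2)
  (if (and (or (= s1 0) (= s1 1))
           (or (= s2 0) (= s2 1)))
      (if (and (= s1 1) (= s2 1)) 1 0)   ; 1 only when both inputs are 1
      (error "Invalid signal" s1 s2)))
:::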
> **[]{#Exercise 3.28 label="Exercise 3.28"}Exercise 3.28:** Define an
> or-gate as a primitive function box. Your `or/gate` constructor should
> be similar to `and/gate`.
> **[]{#Exercise 3.29 label="Exercise 3.29"}Exercise 3.29:** Another way
> to construct an or-gate is as a compound digital logic device, built
> from and-gates and inverters. Define a procedure `or/gate` that
> accomplishes this. What is the delay time of the or-gate in terms of
> `and/gate/delay` and `inverter/delay`?
> **[]{#Exercise 3.30 label="Exercise 3.30"}Exercise 3.30:** [Figure
> 3.27](#Figure 3.27) shows a *ripple-carry adder* formed by stringing
> together $n$ full-adders. This is the simplest form of parallel adder
> for adding two $n$-bit binary numbers. The inputs $A_1$, $A_2$, $A_3$,
> $\dots$, $A_n$ and $B_1$, $B_2$, $B_3$, $\dots$, $B_n$ are the two
> binary numbers to be added (each $A_k$ and $B_k$ is a 0 or a 1). The
> circuit generates $S_1$, $S_2$, $S_3$, $\dots$, $S_n$, the $n$ bits of
> the sum, and $C$, the carry from the addition. Write a procedure
> `ripple/carry/adder` that generates this circuit. The procedure should
> take as arguments three lists of $n$ wires each---the $A_k$, the
> $B_k$, and the $S_k$---and also another wire $C$. The major drawback
> of the ripple-carry adder is the need to wait for the carry signals to
> propagate. What is the delay needed to obtain the complete output from
> an $n$-bit ripple-carry adder, expressed in terms of the delays for
> and-gates, or-gates, and inverters?
[]{#Figure 3.27 label="Figure 3.27"}
![image](fig/chap3/Fig3.27a.pdf){width="96mm"}
**Figure 3.27:** A ripple-carry adder for $n$-bit numbers.
#### Representing wires {#representing-wires .unnumbered}
A wire in our simulation will be a computational object with two local
state variables: a `signal/value` (initially taken to be 0) and a
collection of `action/procedures` to be run when the signal changes
value. We implement the wire, using message-passing style, as a
collection of local procedures together with a `dispatch` procedure that
selects the appropriate local operation, just as we did with the simple
bank-account object in [Section 3.1.1](#Section 3.1.1):
::: scheme
(define (make-wire) (let ((signal-value 0) (action-procedures '()))
(define (set-my-signal! new-value) (if (not (= signal-value new-value))
(begin (set! signal-value new-value) (call-each action-procedures))
'done)) (define (accept-action-procedure! proc) (set! action-procedures
(cons proc action-procedures)) (proc)) (define (dispatch m) (cond ((eq?
m 'get-signal) signal-value) ((eq? m 'set-signal!) set-my-signal!) ((eq?
m 'add-action!) accept-action-procedure!) (else (error \"Unknown
operation: WIRE\" m)))) dispatch))
:::
The local procedure `set/my/signal!` tests whether the new signal value
changes the signal on the wire. If so, it runs each of the action
procedures, using the following procedure `call/each`, which calls each
of the items in a list of no-argument procedures:
::: scheme
(define (call-each procedures) (if (null? procedures) 'done (begin ((car
procedures)) (call-each (cdr procedures)))))
:::
The local procedure `accept/action/procedure!` adds the given procedure
to the list of procedures to be run, and then runs the new procedure
once. (See [Exercise 3.31](#Exercise 3.31).)
With the local `dispatch` procedure set up as specified, we can provide
the following procedures to access the local operations on wires:[^155]
::: scheme
(define (get-signal wire) (wire 'get-signal)) (define (set-signal! wire
new-value) ((wire 'set-signal!) new-value)) (define (add-action! wire
action-procedure) ((wire 'add-action!) action-procedure))
:::
Wires, which have time-varying signals and may be incrementally attached
to devices, are typical of mutable objects. We have modeled them as
procedures with local state variables that are modified by assignment.
When a new wire is created, a new set of state variables is allocated
(by the `let` expression in `make/wire`) and a new `dispatch` procedure
is constructed and returned, capturing the environment with the new
state variables.
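A small sketch of this independence of state: two wires created by
separate calls to `make/wire` maintain separate signal values:

::: scheme
(define w1 (make-wire))
(define w2 (make-wire))
(set-signal! w1 1)   ; done -- no action procedures are attached yet
(get-signal w1)      ; 1
(get-signal w2)      ; 0 -- w2 has its own signal-value, still 0
:::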
The wires are shared among the various devices that have been connected
to them. Thus, a change made by an interaction with one device will
affect all the other devices attached to the wire. The wire communicates
the change to its neighbors by calling the action procedures provided to
it when the connections were established.
#### The agenda {#the-agenda .unnumbered}
The only thing needed to complete the simulator is `after/delay`. The
idea here is that we maintain a data structure, called an *agenda*, that
contains a schedule of things to do. The following operations are
defined for agendas:
- `(make/agenda)` returns a new empty agenda.
- `(empty/agenda? `$\langle$*`agenda`*$\rangle$`)` is true if the
  specified agenda is empty.
- `(first/agenda/item `$\langle$*`agenda`*$\rangle$`)` returns the first
  item on the agenda.
- `(remove/first/agenda/item! `$\langle$*`agenda`*$\rangle$`)` modifies
  the agenda by removing the first item.
- `(add/to/agenda! `$\langle$*`time`*$\rangle$` `$\langle$*`action`*$\rangle$` `$\langle$*`agenda`*$\rangle$`)`
  modifies the agenda by adding the given action procedure to be run
  at the specified time.
- `(current/time `$\langle$*`agenda`*$\rangle$`)` returns the current
  simulation time.
The particular agenda that we use is denoted by `the/agenda`. The
procedure `after/delay` adds new elements to `the/agenda`:
::: scheme
(define (after-delay delay action) (add-to-agenda! (+ delay
(current-time the-agenda)) action the-agenda))
:::
The simulation is driven by the procedure `propagate`, which operates on
`the/agenda`, executing each procedure on the agenda in sequence. In
general, as the simulation runs, new items will be added to the agenda,
and `propagate` will continue the simulation as long as there are items
on the agenda:
::: scheme
(define (propagate) (if (empty-agenda? the-agenda) 'done (let
((first-item (first-agenda-item the-agenda))) (first-item)
(remove-first-agenda-item! the-agenda) (propagate))))
:::
#### A sample simulation {#a-sample-simulation .unnumbered}
The following procedure, which places a "probe" on a wire, shows the
simulator in action. The probe tells the wire that, whenever its signal
changes value, it should print the new signal value, together with the
current time and a name that identifies the wire:
::: scheme
(define (probe name wire) (add-action! wire (lambda () (newline)
(display name) (display \" \") (display (current-time the-agenda))
(display \" New-value = \") (display (get-signal wire)))))
:::
We begin by initializing the agenda and specifying delays for the
primitive function boxes:
::: scheme
(define the-agenda (make-agenda)) (define inverter-delay 2) (define
and-gate-delay 3) (define or-gate-delay 5)
:::
Now we define four wires, placing probes on two of them:
::: scheme
(define input-1 (make-wire)) (define input-2 (make-wire)) (define sum
(make-wire)) (define carry (make-wire))
(probe 'sum sum) *sum 0 New-value = 0*
(probe 'carry carry) *carry 0 New-value = 0*
:::
Next we connect the wires in a half-adder circuit (as in [Figure
3.25](#Figure 3.25)), set the signal on `input/1` to 1, and run the
simulation:
::: scheme
(half-adder input-1 input-2 sum carry) *ok*
:::
::: scheme
(set-signal! input-1 1) *done*
:::
::: scheme
(propagate) *sum 8 New-value = 1* *done*
:::
The `sum` signal changes to 1 at time 8. We are now eight time units
from the beginning of the simulation. At this point, we can set the
signal on `input/2` to 1 and allow the values to propagate:
::: scheme
(set-signal! input-2 1) *done*
:::
::: scheme
(propagate) *carry 11 New-value = 1* *sum 16 New-value = 0* *done*
:::
The `carry` changes to 1 at time 11 and the `sum` changes to 0 at time
16.
> **[]{#Exercise 3.31 label="Exercise 3.31"}Exercise 3.31:** The
> internal procedure `accept/action/procedure!` defined in `make/wire`
> specifies that when a new action procedure is added to a wire, the
> procedure is immediately run. Explain why this initialization is
> necessary. In particular, trace through the half-adder example in the
> paragraphs above and say how the system's response would differ if we
> had defined `accept/action/procedure!` as
>
> ::: scheme
> (define (accept-action-procedure! proc) (set! action-procedures (cons
> proc action-procedures)))
> :::
#### Implementing the agenda {#implementing-the-agenda .unnumbered}
Finally, we give details of the agenda data structure, which holds the
procedures that are scheduled for future execution.
The agenda is made up of *time segments*. Each time segment is a pair
consisting of a number (the time) and a queue (see [Exercise
3.32](#Exercise 3.32)) that holds the procedures that are scheduled to
be run during that time segment.
::: scheme
(define (make-time-segment time queue) (cons time queue)) (define
(segment-time s) (car s)) (define (segment-queue s) (cdr s))
:::
We will operate on the time-segment queues using the queue operations
described in [Section 3.3.2](#Section 3.3.2).
The agenda itself is a one-dimensional table of time segments. It
differs from the tables described in [Section 3.3.3](#Section 3.3.3) in
that the segments will be sorted in order of increasing time. In
addition, we store the *current time* (i.e., the time of the last action
that was processed) at the head of the agenda. A newly constructed
agenda has no time segments and has a current time of 0:[^156]
::: scheme
(define (make-agenda) (list 0)) (define (current-time agenda) (car
agenda)) (define (set-current-time! agenda time) (set-car! agenda time))
(define (segments agenda) (cdr agenda)) (define (set-segments! agenda
segments) (set-cdr! agenda segments)) (define (first-segment agenda)
(car (segments agenda))) (define (rest-segments agenda) (cdr (segments
agenda)))
:::
An agenda is empty if it has no time segments:
::: scheme
(define (empty-agenda? agenda) (null? (segments agenda)))
:::
To add an action to an agenda, we first check if the agenda is empty. If
so, we create a time segment for the action and install this in the
agenda. Otherwise, we scan the agenda, examining the time of each
segment. If we find a segment for our appointed time, we add the action
to the associated queue. If we reach a time later than the one to which
we are appointed, we insert a new time segment into the agenda just
before it. If we reach the end of the agenda, we must create a new time
segment at the end.
::: scheme
(define (add-to-agenda! time action agenda) (define (belongs-before?
segments) (or (null? segments) (\< time (segment-time (car segments)))))
(define (make-new-time-segment time action) (let ((q (make-queue)))
(insert-queue! q action) (make-time-segment time q))) (define
(add-to-segments! segments) (if (= (segment-time (car segments)) time)
(insert-queue! (segment-queue (car segments)) action) (let ((rest (cdr
segments))) (if (belongs-before? rest) (set-cdr! segments (cons
(make-new-time-segment time action) (cdr segments))) (add-to-segments!
rest))))) (let ((segments (segments agenda))) (if (belongs-before?
segments) (set-segments! agenda (cons (make-new-time-segment time
action) segments)) (add-to-segments! segments))))
:::
The procedure that removes the first item from the agenda deletes the
item at the front of the queue in the first time segment. If this
deletion makes the time segment empty, we remove it from the list of
segments:[^157]
::: scheme
(define (remove-first-agenda-item! agenda) (let ((q (segment-queue
(first-segment agenda)))) (delete-queue! q) (if (empty-queue? q)
(set-segments! agenda (rest-segments agenda)))))
:::
The first agenda item is found at the head of the queue in the first
time segment. Whenever we extract an item, we also update the current
time:[^158]
::: scheme
(define (first-agenda-item agenda) (if (empty-agenda? agenda) (error
\"Agenda is empty: FIRST-AGENDA-ITEM\") (let ((first-seg (first-segment
agenda))) (set-current-time! agenda (segment-time first-seg))
(front-queue (segment-queue first-seg)))))
:::
> **[]{#Exercise 3.32 label="Exercise 3.32"}Exercise 3.32:** The
> procedures to be run during each time segment of the agenda are kept
> in a queue. Thus, the procedures for each segment are called in the
> order in which they were added to the agenda (first in, first out).
> Explain why this order must be used. In particular, trace the behavior
> of an and-gate whose inputs change from 0, 1 to 1, 0 in the same
> segment and say how the behavior would differ if we stored a segment's
> procedures in an ordinary list, adding and removing procedures only at
> the front (last in, first out).
### Propagation of Constraints {#Section 3.3.5}
Computer programs are traditionally organized as one-directional
computations, which perform operations on prespecified arguments to
produce desired outputs. On the other hand, we often model systems in
terms of relations among quantities. For example, a mathematical model
of a mechanical structure might include the information that the
deflection $d$ of a metal rod is related to the force $F$ on the rod,
the length $L$ of the rod, the cross-sectional area $A$, and the elastic
modulus $E$ via the equation
$$dAE = FL.$$
Such an equation is not one-directional. Given any four of the
quantities, we can use it to compute the fifth. Yet translating the
equation into a traditional computer language would force us to choose
one of the quantities to be computed in terms of the other four. Thus, a
procedure for computing the area $A$ could not be used to compute the
deflection $d$, even though the computations of $A$ and $d$ arise from
the same equation.[^159]
In this section, we sketch the design of a language that enables us to
work in terms of relations themselves. The primitive elements of the
language are *primitive constraints*, which state that certain relations
hold between quantities. For example, `(adder a b c)` specifies that the
quantities $a$, $b$, and $c$ must be related by the equation
$a + b = c$, `(multiplier x y z)` expresses the constraint $xy = z$, and
`(constant 3.14 x)` says that the value of $x$ must be 3.14.
Our language provides a means of combining primitive constraints in
order to express more complex relations. We combine constraints by
constructing *constraint networks*, in which constraints are joined by
*connectors*. A connector is an object that "holds" a value that may
participate in one or more constraints. For example, we know that the
relationship between Fahrenheit and Celsius temperatures is
$$9C = 5(F - 32).$$
Such a constraint can be thought of as a network consisting of primitive
adder, multiplier, and constant constraints ([Figure
3.28](#Figure 3.28)). In the figure, we see on the left a multiplier box
with three terminals, labeled $m_1$, $m_2$, and $p$. These connect the
multiplier to the rest of the network as follows: The $m_1$ terminal is
linked to a connector $C$, which will hold the Celsius temperature. The
$m_2$ terminal is linked to a connector $w$, which is also linked to a
constant box that holds 9. The $p$ terminal, which the multiplier box
constrains to be the product of $m_1$ and $m_2$, is linked to the $p$
terminal of another multiplier box, whose $m_2$ is connected to a
constant 5 and whose $m_1$ is connected to one of the terms in a sum.
[]{#Figure 3.28 label="Figure 3.28"}
![image](fig/chap3/Fig3.28.pdf){width="87mm"}
> **Figure 3.28:** The relation $9C = 5(F - 32)$ expressed as a
> constraint network.
Computation by such a network proceeds as follows: When a connector is
given a value (by the user or by a constraint box to which it is
linked), it awakens all of its associated constraints (except for the
constraint that just awakened it) to inform them that it has a value.
Each awakened constraint box then polls its connectors to see if there
is enough information to determine a value for a connector. If so, the
box sets that connector, which then awakens all of its associated
constraints, and so on. For instance, in conversion between Celsius and
Fahrenheit, $w$, $x$, and $y$ are immediately set by the constant boxes
to 9, 5, and 32, respectively. The connectors awaken the multipliers and
the adder, which determine that there is not enough information to
proceed. If the user (or some other part of the network) sets $C$ to a
value (say 25), the leftmost multiplier will be awakened, and it will
set $u$ to $25 \cdot 9 = 225$. Then $u$ awakens the second multiplier,
which sets $v$ to 45, and $v$ awakens the adder, which sets $f$ to 77.
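In equation form, the propagation just described computes
$$u = 9C = 225, \qquad v = \frac{u}{5} = 45, \qquad F = v + 32 = 77,$$
which agrees with $9C = 5(F - 32)$ for $C = 25$.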
#### Using the constraint system {#using-the-constraint-system .unnumbered}
To use the constraint system to carry out the temperature computation
outlined above, we first create two connectors, `C` and `F`, by calling
the constructor `make/connector`, and link `C` and `F` in an appropriate
network:
::: scheme
(define C (make-connector)) (define F (make-connector))
(celsius-fahrenheit-converter C F) *ok*
:::
The procedure that creates the network is defined as follows:
::: scheme
(define (celsius-fahrenheit-converter c f)
  (let ((u (make-connector))
        (v (make-connector))
        (w (make-connector))
        (x (make-connector))
        (y (make-connector)))
    (multiplier c w u)
    (multiplier v x u)
    (adder v y f)
    (constant 9 w)
    (constant 5 x)
    (constant 32 y)
    'ok))
:::
This procedure creates the internal connectors `u`, `v`, `w`, `x`, and
`y`, and links them as shown in [Figure 3.28](#Figure 3.28) using the
primitive constraint constructors `adder`, `multiplier`, and `constant`.
Just as with the digital-circuit simulator of [Section
3.3.4](#Section 3.3.4), expressing these combinations of primitive
elements in terms of procedures automatically provides our language with
a means of abstraction for compound objects.
To watch the network in action, we can place probes on the connectors
`C` and `F`, using a `probe` procedure similar to the one we used to
monitor wires in [Section 3.3.4](#Section 3.3.4). Placing a probe on a
connector will cause a message to be printed whenever the connector is
given a value:
::: scheme
(probe \"Celsius temp\" C) (probe \"Fahrenheit temp\" F)
:::
Next we set the value of `C` to 25. (The third argument to `set/value!`
tells `C` that this directive comes from the `user`.)
::: scheme
(set-value! C 25 'user)
*Probe: Celsius temp = 25*
*Probe: Fahrenheit temp = 77*
*done*
:::
The probe on `C` awakens and reports the value. `C` also propagates its
value through the network as described above. This sets `F` to 77, which
is reported by the probe on `F`.
Now we can try to set `F` to a new value, say 212:
::: scheme
(set-value! F 212 'user) *Error! Contradiction (77 212)*
:::
The connector complains that it has sensed a contradiction: Its value is
77, and someone is trying to set it to 212. If we really want to reuse
the network with new values, we can tell `C` to forget its old value:
::: scheme
(forget-value! C 'user)
*Probe: Celsius temp = ?*
*Probe: Fahrenheit temp = ?*
*done*
:::
`C` finds that the `user`, who set its value originally, is now
retracting that value, so `C` agrees to lose its value, as shown by the
probe, and informs the rest of the network of this fact. This
information eventually propagates to `F`, which now finds that it has no
reason for continuing to believe that its own value is 77. Thus, `F`
also gives up its value, as shown by the probe.
Now that `F` has no value, we are free to set it to 212:
::: scheme
(set-value! F 212 'user)
*Probe: Fahrenheit temp = 212*
*Probe: Celsius temp = 100*
*done*
:::
This new value, when propagated through the network, forces `C` to have
a value of 100, and this is registered by the probe on `C`. Notice that
the very same network is being used to compute `C` given `F` and to
compute `F` given `C`. This nondirectionality of computation is the
distinguishing feature of constraint-based systems.
#### Implementing the constraint system {#implementing-the-constraint-system .unnumbered}
The constraint system is implemented via procedural objects with local
state, in a manner very similar to the digital-circuit simulator of
[Section 3.3.4](#Section 3.3.4). Although the primitive objects of the
constraint system are somewhat more complex, the overall system is
simpler, since there is no concern about agendas and logic delays.
The basic operations on connectors are the following:
- `(has/value? `$\langle$*`connector`*$\rangle$`)` tells whether the
connector has a value.
- `(get/value `$\langle$*`connector`*$\rangle$`)` returns the
connector's current value.
- `(set/value! `$\langle$*`connector`*$\rangle$` `$\langle$*`new/value`*$\rangle$` `$\langle$*`informant`*$\rangle$`)`
indicates that the informant is requesting the connector to set its
value to the new value.
- `(forget/value! `$\langle$*`connector`*$\rangle$` `$\langle$*`retractor`*$\rangle$`)`
tells the connector that the retractor is requesting it to forget
its value.
- `(connect `$\langle$*`connector`*$\rangle$` `$\langle$*`new/constraint`*$\rangle$`)`
tells the connector to participate in the new constraint.
The connectors communicate with the constraints by means of the
procedures `inform/about/value`, which tells the given constraint that
the connector has a value, and `inform/about/no/value`, which tells the
constraint that the connector has lost its value.
`adder` constructs an adder constraint among summand connectors `a1` and
`a2` and a `sum` connector. An adder is implemented as a procedure with
local state (the procedure `me` below):
::: scheme
(define (adder a1 a2 sum)
  (define (process-new-value)
    (cond ((and (has-value? a1) (has-value? a2))
           (set-value! sum
                       (+ (get-value a1) (get-value a2))
                       me))
          ((and (has-value? a1) (has-value? sum))
           (set-value! a2
                       (- (get-value sum) (get-value a1))
                       me))
          ((and (has-value? a2) (has-value? sum))
           (set-value! a1
                       (- (get-value sum) (get-value a2))
                       me))))
  (define (process-forget-value)
    (forget-value! sum me)
    (forget-value! a1 me)
    (forget-value! a2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: ADDER" request))))
  (connect a1 me)
  (connect a2 me)
  (connect sum me)
  me)
:::
`adder` connects the new adder to the designated connectors and returns
it as its value. The procedure `me`, which represents the adder, acts as
a dispatch to the local procedures. The following "syntax interfaces"
(see [Footnote 27](#Footnote 27) in [Section 3.3.4](#Section 3.3.4)) are
used in conjunction with the dispatch:
::: scheme
(define (inform-about-value constraint)
  (constraint 'I-have-a-value))
(define (inform-about-no-value constraint)
  (constraint 'I-lost-my-value))
:::
The adder's local procedure `process/new/value` is called when the adder
is informed that one of its connectors has a value. The adder first
checks to see if both `a1` and `a2` have values. If so, it tells `sum`
to set its value to the sum of the two addends. The `informant` argument
to `set/value!` is `me`, which is the adder object itself. If `a1` and
`a2` do not both have values, then the adder checks to see if perhaps
`a1` and `sum` have values. If so, it sets `a2` to the difference of
these two. Finally, if `a2` and `sum` have values, this gives the adder
enough information to set `a1`. If the adder is told that one of its
connectors has lost a value, it requests that all of its connectors now
lose their values. (Only those values that were set by this adder are
actually lost.) Then it runs `process/new/value`. The reason for this
last step is that one or more connectors may still have a value (that
is, a connector may have had a value that was not originally set by the
adder), and these values may need to be propagated back through the
adder.
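To make the multidirectional behavior of the adder concrete, here is a
minimal interaction sketch (the connector names `a`, `b`, and `total`
are chosen only for illustration; the commented results follow from the
rules just described):

::: scheme
(define a (make-connector))
(define b (make-connector))
(define total (make-connector))
(adder a b total)            ; establishes a + b = total

(set-value! a 3 'user)       ; only a is known; nothing can be deduced yet
(set-value! total 10 'user)  ; now a1 and sum are known, so the adder solves for b
(get-value b)
*7*
:::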
A multiplier is very similar to an adder. It will set its `product` to 0
if either of the factors is 0, even if the other factor is not known.
::: scheme
(define (multiplier m1 m2 product)
  (define (process-new-value)
    (cond ((or (and (has-value? m1) (= (get-value m1) 0))
               (and (has-value? m2) (= (get-value m2) 0)))
           (set-value! product 0 me))
          ((and (has-value? m1) (has-value? m2))
           (set-value! product
                       (* (get-value m1) (get-value m2))
                       me))
          ((and (has-value? product) (has-value? m1))
           (set-value! m2
                       (/ (get-value product) (get-value m1))
                       me))
          ((and (has-value? product) (has-value? m2))
           (set-value! m1
                       (/ (get-value product) (get-value m2))
                       me))))
  (define (process-forget-value)
    (forget-value! product me)
    (forget-value! m1 me)
    (forget-value! m2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: MULTIPLIER" request))))
  (connect m1 me)
  (connect m2 me)
  (connect product me)
  me)
:::
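A similarly small sketch of the zero-factor rule (again, the connector
names are illustrative only; the commented results are what the rules
above imply):

::: scheme
(define m (make-connector))
(define n (make-connector))
(define p (make-connector))
(multiplier m n p)

(set-value! m 0 'user)   ; one factor is 0, so the product is determined
(has-value? p)
*true*
(get-value p)
*0*
(has-value? n)           ; the other factor remains undetermined
*false*
:::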
A `constant` constructor simply sets the value of the designated
connector. Any `I/have/a/value` or `I/lost/my/value` message sent to the
constant box will produce an error.
::: scheme
(define (constant value connector)
  (define (me request)
    (error "Unknown request: CONSTANT" request))
  (connect connector me)
  (set-value! connector value me)
  me)
:::
Finally, a probe prints a message about the setting or unsetting of the
designated connector:
::: scheme
(define (probe name connector)
  (define (print-probe value)
    (newline) (display "Probe: ") (display name)
    (display " = ") (display value))
  (define (process-new-value)
    (print-probe (get-value connector)))
  (define (process-forget-value) (print-probe "?"))
  (define (me request)
    (cond ((eq? request 'I-have-a-value) (process-new-value))
          ((eq? request 'I-lost-my-value) (process-forget-value))
          (else (error "Unknown request: PROBE" request))))
  (connect connector me)
  me)
:::
#### Representing connectors {#representing-connectors .unnumbered}
A connector is represented as a procedural object with local state
variables `value`, the current value of the connector; `informant`, the
object that set the connector's value; and `constraints`, a list of the
constraints in which the connector participates.
::: scheme
(define (make-connector)
  (let ((value false) (informant false) (constraints '()))
    (define (set-my-value newval setter)
      (cond ((not (has-value? me))
             (set! value newval)
             (set! informant setter)
             (for-each-except setter
                              inform-about-value
                              constraints))
            ((not (= value newval))
             (error "Contradiction" (list value newval)))
            (else 'ignored)))
    (define (forget-my-value retractor)
      (if (eq? retractor informant)
          (begin (set! informant false)
                 (for-each-except retractor
                                  inform-about-no-value
                                  constraints))
          'ignored))
    (define (connect new-constraint)
      (if (not (memq new-constraint constraints))
          (set! constraints
                (cons new-constraint constraints)))
      (if (has-value? me)
          (inform-about-value new-constraint))
      'done)
    (define (me request)
      (cond ((eq? request 'has-value?)
             (if informant true false))
            ((eq? request 'value) value)
            ((eq? request 'set-value!) set-my-value)
            ((eq? request 'forget) forget-my-value)
            ((eq? request 'connect) connect)
            (else (error "Unknown operation: CONNECTOR" request))))
    me))
:::
The connector's local procedure `set/my/value` is called when there is a
request to set the connector's value. If the connector does not
currently have a value, it will set its value and remember as
`informant` the constraint that requested the value to be set.[^160]
Then the connector will notify all of its participating constraints
except the constraint that requested the value to be set. This is
accomplished using the following iterator, which applies a designated
procedure to all items in a list except a given one:
::: scheme
(define (for-each-except exception procedure list)
  (define (loop items)
    (cond ((null? items) 'done)
          ((eq? (car items) exception) (loop (cdr items)))
          (else (procedure (car items))
                (loop (cdr items)))))
  (loop list))
:::
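As a quick illustration, applying `for/each/except` to an ordinary list
(a hypothetical snippet, not part of the constraint system itself):

::: scheme
(for-each-except 'b display '(a c b d))
; displays a, c, and d -- the item eq? to the exception is skipped --
; and then returns 'done
:::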
If a connector is asked to forget its value, it runs the local procedure
`forget/my/value`, which first checks to make sure that the request is
coming from the same object that set the value originally. If so, the
connector informs its associated constraints about the loss of the
value.
The local procedure `connect` adds the designated new constraint to the
list of constraints if it is not already in that list. Then, if the
connector has a value, it informs the new constraint of this fact.
The connector's procedure `me` serves as a dispatch to the other
internal procedures and also represents the connector as an object. The
following procedures provide a syntax interface for the dispatch:
::: scheme
(define (has-value? connector)
  (connector 'has-value?))
(define (get-value connector)
  (connector 'value))
(define (set-value! connector new-value informant)
  ((connector 'set-value!) new-value informant))
(define (forget-value! connector retractor)
  ((connector 'forget) retractor))
(define (connect connector new-constraint)
  ((connector 'connect) new-constraint))
:::
> **[]{#Exercise 3.33 label="Exercise 3.33"}Exercise 3.33:** Using
> primitive multiplier, adder, and constant constraints, define a
> procedure `averager` that takes three connectors `a`, `b`, and `c` as
> inputs and establishes the constraint that the value of `c` is the
> average of the values of `a` and `b`.
> **[]{#Exercise 3.34 label="Exercise 3.34"}Exercise 3.34:** Louis
> Reasoner wants to build a squarer, a constraint device with two
> terminals such that the value of connector `b` on the second terminal
> will always be the square of the value `a` on the first terminal. He
> proposes the following simple device made from a multiplier:
>
> ::: scheme
> (define (squarer a b) (multiplier a a b))
> :::
>
> There is a serious flaw in this idea. Explain.
> **[]{#Exercise 3.35 label="Exercise 3.35"}Exercise 3.35:** Ben
> Bitdiddle tells Louis that one way to avoid the trouble in [Exercise
> 3.34](#Exercise 3.34) is to define a squarer as a new primitive
> constraint. Fill in the missing portions in Ben's outline for a
> procedure to implement such a constraint:
>
> ::: scheme
> (define (squarer a b) (define (process-new-value) (if (has-value? b)
> (if (\< (get-value b) 0) (error \"square less than 0: SQUARER\"
> (get-value b))
> $\color{SchemeDark}\langle$ *alternative1* $\color{SchemeDark}\rangle$ )
> $\color{SchemeDark}\langle$ *alternative2* $\color{SchemeDark}\rangle$ ))
> (define (process-forget-value)
> $\color{SchemeDark}\langle$ *body1* $\color{SchemeDark}\rangle$ )
> (define (me request)
> $\color{SchemeDark}\langle$ *body2* $\color{SchemeDark}\rangle$ )
> $\color{SchemeDark}\langle$ *rest of
> definition* $\color{SchemeDark}\rangle$ me)
> :::
> **[]{#Exercise 3.36 label="Exercise 3.36"}Exercise 3.36:** Suppose we
> evaluate the following sequence of expressions in the global
> environment:
>
> ::: scheme
> (define a (make-connector))
> (define b (make-connector))
> (set-value! a 10 'user)
> :::
>
> At some time during evaluation of the `set/value!`, the following
> expression from the connector's local procedure is evaluated:
>
> ::: scheme
> (for-each-except setter inform-about-value constraints)
> :::
>
> Draw an environment diagram showing the environment in which the above
> expression is evaluated.
> **[]{#Exercise 3.37 label="Exercise 3.37"}Exercise 3.37:** The
> `celsius/fahrenheit/converter` procedure is cumbersome when compared
> with a more expression-oriented style of definition, such as
>
> ::: scheme
> (define (celsius-fahrenheit-converter x)
>   (c+ (c* (c/ (cv 9) (cv 5)) x)
>       (cv 32)))
> (define C (make-connector))
> (define F (celsius-fahrenheit-converter C))
> :::
>
> Here `c+`, `c*`, etc. are the "constraint" versions of the arithmetic
> operations. For example, `c+` takes two connectors as arguments and
> returns a connector that is related to these by an adder constraint:
>
> ::: scheme
> (define (c+ x y) (let ((z (make-connector))) (adder x y z) z))
> :::
>
> Define analogous procedures `c-`, `c*`, `c/`, and `cv` (constant
> value) that enable us to define compound constraints as in the
> converter example above.[^161]
## Concurrency: Time Is of the Essence {#Section 3.4}
We've seen the power of computational objects with local state as tools
for modeling. Yet, as [Section 3.1.3](#Section 3.1.3) warned, this power
extracts a price: the loss of referential transparency, giving rise to a
thicket of questions about sameness and change, and the need to abandon
the substitution model of evaluation in favor of the more intricate
environment model.
The central issue lurking beneath the complexity of state, sameness, and
change is that by introducing assignment we are forced to admit *time*
into our computational models. Before we introduced assignment, all our
programs were timeless, in the sense that any expression that has a
value always has the same value. In contrast, recall the example of
modeling withdrawals from a bank account and returning the resulting
balance, introduced at the beginning of [Section 3.1.1](#Section 3.1.1):
::: scheme
(withdraw 25)
*75*
(withdraw 25)
*50*
:::
Here successive evaluations of the same expression yield different
values. This behavior arises from the fact that the execution of
assignment statements (in this case, assignments to the variable
`balance`) delineates *moments in time* when values change. The result
of evaluating an expression depends not only on the expression itself,
but also on whether the evaluation occurs before or after these moments.
Building models in terms of computational objects with local state
forces us to confront time as an essential concept in programming.
We can go further in structuring computational models to match our
perception of the physical world. Objects in the world do not change one
at a time in sequence. Rather we perceive them as acting
*concurrently*---all at once. So it is often natural to model systems as
collections of computational processes that execute concurrently. Just
as we can make our programs modular by organizing models in terms of
objects with separate local state, it is often appropriate to divide
computational models into parts that evolve separately and concurrently.
Even if the programs are to be executed on a sequential computer, the
practice of writing programs as if they were to be executed concurrently
forces the programmer to avoid inessential timing constraints and thus
makes programs more modular.
In addition to making programs more modular, concurrent computation can
provide a speed advantage over sequential computation. Sequential
computers execute only one operation at a time, so the amount of time it
takes to perform a task is proportional to the total number of
operations performed.[^162] However, if it is possible to decompose a
problem into pieces that are relatively independent and need to
communicate only rarely, it may be possible to allocate pieces to
separate computing processors, producing a speed advantage proportional
to the number of processors available.
Unfortunately, the complexities introduced by assignment become even
more problematic in the presence of concurrency. The fact of concurrent
execution, either because the world operates in parallel or because our
computers do, entails additional complexity in our understanding of
time.
### The Nature of Time in Concurrent Systems {#Section 3.4.1}
On the surface, time seems straightforward. It is an ordering imposed on
events.[^163] For any events $A$ and $B$, either $A$ occurs before $B$,
$A$ and $B$ are simultaneous, or $A$ occurs after $B$. For instance,
returning to the bank account example, suppose that Peter withdraws \$10
and Paul withdraws \$25 from a joint account that initially contains
\$100, leaving \$65 in the account. Depending on the order of the two
withdrawals, the sequence of balances in the account is either
$\,\$100 \to \$90 \to \$65\,$ or $\,\$100 \to \$75
\to \$65\,$. In a computer implementation of the banking system, this
changing sequence of balances could be modeled by successive assignments
to a variable `balance`.
In complex situations, however, such a view can be problematic. Suppose
that Peter and Paul, and other people besides, are accessing the same
bank account through a network of banking machines distributed all over
the world. The actual sequence of balances in the account will depend
critically on the detailed timing of the accesses and the details of the
communication among the machines.
This indeterminacy in the order of events can pose serious problems in
the design of concurrent systems. For instance, suppose that the
withdrawals made by Peter and Paul are implemented as two separate
processes sharing a common variable `balance`, each process specified by
the procedure given in [Section 3.1.1](#Section 3.1.1):
::: scheme
(define (withdraw amount)
  (if (>= balance amount)
      (begin (set! balance (- balance amount))
             balance)
      "Insufficient funds"))
:::
If the two processes operate independently, then Peter might test the
balance and attempt to withdraw a legitimate amount. However, Paul might
withdraw some funds in between the time that Peter checks the balance
and the time Peter completes the withdrawal, thus invalidating Peter's
test.
Things can be worse still. Consider the expression
::: scheme
(set! balance (- balance amount))
:::
executed as part of each withdrawal process. This consists of three
steps: (1) accessing the value of the `balance` variable; (2) computing
the new balance; (3) setting `balance` to this new value. If Peter and
Paul's withdrawals execute this statement concurrently, then the two
withdrawals might interleave the order in which they access `balance`
and set it to the new value.
[]{#Figure 3.29 label="Figure 3.29"}
![image](fig/chap3/Fig3.29b.pdf){width="109mm"}
> **Figure 3.29:** Timing diagram showing how interleaving the order of
> events in two banking withdrawals can lead to an incorrect final
> balance.
The timing diagram in [Figure 3.29](#Figure 3.29) depicts an order of
events where `balance` starts at 100, Peter withdraws 10, Paul withdraws
25, and yet the final value of `balance` is 75. As shown in the diagram,
the reason for this anomaly is that Paul's assignment of 75 to `balance`
is made under the assumption that the value of `balance` to be
decremented is 100. That assumption, however, became invalid when Peter
changed `balance` to 90. This is a catastrophic failure for the banking
system, because the total amount of money in the system is not
conserved. Before the transactions, the total amount of money was \$100.
Afterwards, Peter has \$10, Paul has \$25, and the bank has \$75.[^164]
The general phenomenon illustrated here is that several processes may
share a common state variable. What makes this complicated is that more
than one process may be trying to manipulate the shared state at the
same time. For the bank account example, during each transaction, each
customer should be able to act as if the other customers did not exist.
When a customer changes the balance in a way that depends on the
balance, he must be able to assume that, just before the moment of
change, the balance is still what he thought it was.
#### Correct behavior of concurrent programs {#correct-behavior-of-concurrent-programs .unnumbered}
The above example typifies the subtle bugs that can creep into
concurrent programs. The root of this complexity lies in the assignments
to variables that are shared among the different processes. We already
know that we must be careful in writing programs that use `set!`,
because the results of a computation depend on the order in which the
assignments occur.[^165] With concurrent processes we must be especially
careful about assignments, because we may not be able to control the
order of the assignments made by the different processes. If several
such changes might be made concurrently (as with two depositors
accessing a joint account) we need some way to ensure that our system
behaves correctly. For example, in the case of withdrawals from a joint
bank account, we must ensure that money is conserved. To make concurrent
programs behave correctly, we may have to place some restrictions on
concurrent execution.
One possible restriction on concurrency would stipulate that no two
operations that change any shared state variables can occur at the same
time. This is an extremely stringent requirement. For distributed
banking, it would require the system designer to ensure that only one
transaction could proceed at a time. This would be both inefficient and
overly conservative. [Figure 3.30](#Figure 3.30) shows Peter and Paul
sharing a bank account, where Paul has a private account as well. The
diagram illustrates two withdrawals from the shared account (one by
Peter and one by Paul) and a deposit to Paul's private account.[^166]
The two withdrawals from the shared account must not be concurrent
(since both access and update the same account), and Paul's deposit and
withdrawal must not be concurrent (since both access and update the
amount in Paul's wallet). But there should be no problem permitting
Paul's deposit to his private account to proceed concurrently with
Peter's withdrawal from the shared account.
[]{#Figure 3.30 label="Figure 3.30"}
![image](fig/chap3/Fig3.30b.pdf){width="94mm"}
> **Figure 3.30:** Concurrent deposits and withdrawals from a joint
> account in Bank1 and a private account in Bank2.
A less stringent restriction on concurrency would ensure that a
concurrent system produces the same result as if the processes had run
sequentially in some order. There are two important aspects to this
requirement. First, it does not require the processes to actually run
sequentially, but only to produce results that are the same *as if* they
had run sequentially. For the example in [Figure 3.30](#Figure 3.30),
the designer of the bank account system can safely allow Paul's deposit
and Peter's withdrawal to happen concurrently, because the net result
will be the same as if the two operations had happened sequentially.
Second, there may be more than one possible "correct" result produced by
a concurrent program, because we require only that the result be the
same as for *some* sequential order. For example, suppose that Peter and
Paul's joint account starts out with \$100, and Peter deposits \$40
while Paul concurrently withdraws half the money in the account. Then
sequential execution could result in the account balance being either
\$70 or \$90 (see [Exercise 3.38](#Exercise 3.38)).[^167]
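As a quick check of those two figures:
$$(100 + 40)/2 = 70 \qquad\text{or}\qquad 100/2 + 40 = 90.$$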
There are still weaker requirements for correct execution of concurrent
programs. A program for simulating diffusion (say, the flow of heat in
an object) might consist of a large number of processes, each one
representing a small volume of space, that update their values
concurrently. Each process repeatedly changes its value to the average
of its own value and its neighbors' values. This algorithm converges to
the right answer independent of the order in which the operations are
done; there is no need for any restrictions on concurrent use of the
shared values.
> **[]{#Exercise 3.38 label="Exercise 3.38"}Exercise 3.38:** Suppose
> that Peter, Paul, and Mary share a joint bank account that initially
> contains \$100. Concurrently, Peter deposits \$10, Paul withdraws
> \$20, and Mary withdraws half the money in the account, by executing
> the following commands:
>
> ::: scheme
> Peter: (set! balance (+ balance 10))
> Paul: (set! balance (- balance 20))
> Mary: (set! balance (- balance (/ balance 2)))
> :::
>
> a. List all the different possible values for `balance` after these
> three transactions have been completed, assuming that the banking
> system forces the three processes to run sequentially in some
> order.
>
> b. What are some other values that could be produced if the system
> allows the processes to be interleaved? Draw timing diagrams like
> the one in [Figure 3.29](#Figure 3.29) to explain how these values
> can occur.
### Mechanisms for Controlling Concurrency {#Section 3.4.2}
We've seen that the difficulty in dealing with concurrent processes is
rooted in the need to consider the interleaving of the order of events
in the different processes. For example, suppose we have two processes,
one with three ordered events $(a, b, c)$ and one with three ordered
events $(x, y, z)$. If the two processes run concurrently, with no
constraints on how their execution is interleaved, then there are 20
different possible orderings for the events that are consistent with the
individual orderings for the two processes:
(a,b,c,x,y,z) (a,x,b,y,c,z) (x,a,b,c,y,z) (x,a,y,z,b,c) (a,b,x,c,y,z)
(a,x,b,y,z,c) (x,a,b,y,c,z) (x,y,a,b,c,z) (a,b,x,y,c,z) (a,x,y,b,c,z)
(x,a,b,y,z,c) (x,y,a,b,z,c) (a,b,x,y,z,c) (a,x,y,b,z,c) (x,a,y,b,c,z)
(x,y,a,z,b,c) (a,x,b,c,y,z) (a,x,y,z,b,c) (x,a,y,b,z,c) (x,y,z,a,b,c)
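The count of 20 is simply the number of ways to merge two three-event
sequences while preserving each one's internal order:
$$\binom{3 + 3}{3} = \frac{6!}{3!\,3!} = 20.$$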
As programmers designing this system, we would have to consider the
effects of each of these 20 orderings and check that each behavior is
acceptable. Such an approach rapidly becomes unwieldy as the numbers of
processes and events increase.
A more practical approach to the design of concurrent systems is to
devise general mechanisms that allow us to constrain the interleaving of
concurrent processes so that we can be sure that the program behavior is
correct. Many mechanisms have been developed for this purpose. In this
section, we describe one of them, the *serializer*.
#### Serializing access to shared state {#serializing-access-to-shared-state .unnumbered}
Serialization implements the following idea: Processes will execute
concurrently, but there will be certain collections of procedures that
cannot be executed concurrently. More precisely, serialization creates
distinguished sets of procedures such that only one execution of a
procedure in each serialized set is permitted to happen at a time. If
some procedure in the set is being executed, then a process that
attempts to execute any procedure in the set will be forced to wait
until the first execution has finished.
We can use serialization to control access to shared variables. For
example, if we want to update a shared variable based on the previous
value of that variable, we put the access to the previous value of the
variable and the assignment of the new value to the variable in the same
procedure. We then ensure that no other procedure that assigns to the
variable can run concurrently with this procedure by serializing all of
these procedures with the same serializer. This guarantees that the
value of the variable cannot be changed between an access and the
corresponding assignment.
#### Serializers in Scheme {#serializers-in-scheme .unnumbered}
To make the above mechanism more concrete, suppose that we have extended
Scheme to include a procedure called `parallel/execute`:
::: scheme
(parallel-execute
$\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *p* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize k}}\rangle$ )
:::
Each $\langle p \rangle$ must be a procedure of no arguments.
`parallel/execute` creates a separate process for each
$\langle p \rangle$, which applies $\langle p \rangle$ (to no
arguments). These processes all run concurrently.[^168]
As an example of how this is used, consider
::: scheme
(define x 10)
(parallel-execute (lambda () (set! x (* x x)))
                  (lambda () (set! x (+ x 1))))
:::
This creates two concurrent processes---$P_1$, which sets `x` to `x`
times `x`, and $P_2$, which increments `x`. After execution is complete,
`x` will be left with one of five possible values, depending on the
interleaving of the events of $P_1$ and $P_2$:
::: scheme
101: $P_1$ sets `x` to 100 and then $P_2$ increments `x` to 101.
121: $P_2$ increments `x` to 11 and then $P_1$ sets `x` to `x` `*` `x`.
110: $P_2$ changes `x` from 10 to 11 between the two times that $P_1$ accesses the value of `x` during the evaluation of `(* x x)`.
11: $P_2$ accesses `x`, then $P_1$ sets `x` to 100, then $P_2$ sets `x`.
100: $P_1$ accesses `x` (twice), then $P_2$ sets `x` to 11, then $P_1$ sets `x`.
:::
We can constrain the concurrency by using serialized procedures, which
are created by *serializers*. Serializers are constructed by
`make/serializer`, whose implementation is given below. A serializer
takes a procedure as argument and returns a serialized procedure that
behaves like the original procedure. All calls to a given serializer
return serialized procedures in the same set.
Thus, in contrast to the example above, executing
::: scheme
(define x 10)
(define s (make-serializer))
(parallel-execute (s (lambda () (set! x (* x x))))
                  (s (lambda () (set! x (+ x 1)))))
:::
can produce only two possible values for `x`, 101 or 121. The other
possibilities are eliminated, because the execution of $P_1$ and $P_2$
cannot be interleaved.
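The two remaining outcomes correspond to the two possible sequential
orders of the serialized procedures:
$$10 \cdot 10 + 1 = 101 \qquad\text{or}\qquad (10 + 1) \cdot (10 + 1) = 121.$$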
Here is a version of the `make/account` procedure from [Section
3.1.1](#Section 3.1.1), where the deposits and withdrawals have been
serialized:
::: scheme
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((protected (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) (protected withdraw))
            ((eq? m 'deposit) (protected deposit))
            ((eq? m 'balance) balance)
            (else (error "Unknown request: MAKE-ACCOUNT" m))))
    dispatch))
:::
With this implementation, two processes cannot be withdrawing from or
depositing into a single account concurrently. This eliminates the
source of the error illustrated in [Figure 3.29](#Figure 3.29), where
Peter changes the account balance between the times when Paul accesses
the balance to compute the new value and when Paul actually performs the
assignment. On the other hand, each account has its own serializer, so
that deposits and withdrawals for different accounts can proceed
concurrently.
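As a minimal sketch of the guarantee this provides (assuming the
`parallel/execute` primitive described above; the name `acc` is chosen
only for illustration):

::: scheme
(define acc (make-account 100))
(parallel-execute (lambda () ((acc 'withdraw) 10))
                  (lambda () ((acc 'withdraw) 25)))
; the two serialized withdrawals cannot interleave, so once both
; processes finish the balance is 65 regardless of which ran first
(acc 'balance)
*65*
:::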
> **[]{#Exercise 3.39 label="Exercise 3.39"}Exercise 3.39:** Which of
> the five possibilities in the parallel execution shown above remain if
> we instead serialize execution as follows:
>
> ::: scheme
> (define x 10)
> (define s (make-serializer))
> (parallel-execute
>   (lambda () (set! x ((s (lambda () (* x x))))))
>   (s (lambda () (set! x (+ x 1)))))
> :::
> **[]{#Exercise 3.40 label="Exercise 3.40"}Exercise 3.40:** Give all
> possible values of `x` that can result from executing
>
> ::: scheme
> (define x 10)
> (parallel-execute (lambda () (set! x (* x x)))
>                   (lambda () (set! x (* x x x))))
> :::
>
> Which of these possibilities remain if we instead use serialized
> procedures:
>
> ::: scheme
> (define x 10)
> (define s (make-serializer))
> (parallel-execute (s (lambda () (set! x (* x x))))
>                   (s (lambda () (set! x (* x x x)))))
> :::
> **[]{#Exercise 3.41 label="Exercise 3.41"}Exercise 3.41:** Ben
> Bitdiddle worries that it would be better to implement the bank
> account as follows (where the commented line has been changed):
>
> ::: scheme
> (define (make-account balance)
>   (define (withdraw amount)
>     (if (>= balance amount)
>         (begin (set! balance (- balance amount))
>                balance)
>         "Insufficient funds"))
>   (define (deposit amount)
>     (set! balance (+ balance amount))
>     balance)
>   (let ((protected (make-serializer)))
>     (define (dispatch m)
>       (cond ((eq? m 'withdraw) (protected withdraw))
>             ((eq? m 'deposit) (protected deposit))
>             ((eq? m 'balance)
>              ((protected (lambda () balance))))  ; serialized
>             (else (error "Unknown request: MAKE-ACCOUNT" m))))
>     dispatch))
> :::
>
> because allowing unserialized access to the bank balance can result in
> anomalous behavior. Do you agree? Is there any scenario that
> demonstrates Ben's concern?
> **[]{#Exercise 3.42 label="Exercise 3.42"}Exercise 3.42:** Ben
> Bitdiddle suggests that it's a waste of time to create a new
> serialized procedure in response to every `withdraw` and `deposit`
> message. He says that `make/account` could be changed so that the
> calls to `protected` are done outside the `dispatch` procedure. That
> is, an account would return the same serialized procedure (which was
> created at the same time as the account) each time it is asked for a
> withdrawal procedure.
>
> ::: scheme
> (define (make-account balance)
>   (define (withdraw amount)
>     (if (>= balance amount)
>         (begin (set! balance (- balance amount))
>                balance)
>         "Insufficient funds"))
>   (define (deposit amount)
>     (set! balance (+ balance amount))
>     balance)
>   (let ((protected (make-serializer)))
>     (let ((protected-withdraw (protected withdraw))
>           (protected-deposit (protected deposit)))
>       (define (dispatch m)
>         (cond ((eq? m 'withdraw) protected-withdraw)
>               ((eq? m 'deposit) protected-deposit)
>               ((eq? m 'balance) balance)
>               (else (error "Unknown request: MAKE-ACCOUNT" m))))
>       dispatch)))
> :::
>
> Is this a safe change to make? In particular, is there any difference
> in what concurrency is allowed by these two versions of
> `make/account`?
#### Complexity of using multiple shared resources {#complexity-of-using-multiple-shared-resources .unnumbered}
Serializers provide a powerful abstraction that helps isolate the
complexities of concurrent programs so that they can be dealt with
carefully and (hopefully) correctly. However, while using serializers is
relatively straightforward when there is only a single shared resource
(such as a single bank account), concurrent programming can be
treacherously difficult when there are multiple shared resources.
To illustrate one of the difficulties that can arise, suppose we wish to
swap the balances in two bank accounts. We access each account to find
the balance, compute the difference between the balances, withdraw this
difference from one account, and deposit it in the other account. We
could implement this as follows:[^169]
::: scheme
(define (exchange account1 account2)
  (let ((difference (- (account1 'balance)
                       (account2 'balance))))
    ((account1 'withdraw) difference)
    ((account2 'deposit) difference)))
:::
This procedure works well when only a single process is trying to do the
exchange. Suppose, however, that Peter and Paul both have access to
accounts $a_1$, $a_2$, and $a_3$, and that Peter exchanges $a_1$ and
$a_2$ while Paul concurrently exchanges $a_1$ and $a_3$. Even with
account deposits and withdrawals serialized for individual accounts (as
in the `make/account` procedure shown above in this section),
`exchange` can still produce incorrect results. For example, Peter
might compute the difference in the balances for $a_1$ and $a_2$, but
then Paul might change the balance in $a_1$ before Peter is able to
complete the exchange.[^170] For correct behavior, we must arrange for
the `exchange`
procedure to lock out any other concurrent accesses to the accounts
during the entire time of the exchange.
One way we can accomplish this is by using both accounts' serializers to
serialize the entire `exchange` procedure. To do this, we will arrange
for access to an account's serializer. Note that we are deliberately
breaking the modularity of the bank-account object by exposing the
serializer. The following version of `make/account` is identical to the
original version given in [Section 3.1.1](#Section 3.1.1), except that a
serializer is provided to protect the balance variable, and the
serializer is exported via message passing:
::: scheme
(define (make-account-and-serializer balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((balance-serializer (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) withdraw)
            ((eq? m 'deposit) deposit)
            ((eq? m 'balance) balance)
            ((eq? m 'serializer) balance-serializer)
            (else (error "Unknown request: MAKE-ACCOUNT" m))))
    dispatch))
:::
We can use this to do serialized deposits and withdrawals. However,
unlike our earlier serialized account, it is now the responsibility of
each user of bank-account objects to explicitly manage the
serialization, for example as follows:[^171]
::: scheme
(define (deposit account amount)
  (let ((s (account 'serializer))
        (d (account 'deposit)))
    ((s d) amount)))
:::
Exporting the serializer in this way gives us enough flexibility to
implement a serialized exchange program. We simply serialize the
original `exchange` procedure with the serializers for both accounts:
::: scheme
(define (serialized-exchange account1 account2)
  (let ((serializer1 (account1 'serializer))
        (serializer2 (account2 'serializer)))
    ((serializer1 (serializer2 exchange))
     account1 account2)))
:::
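For example, a sketch of a single exchange using the procedures just
defined (the account names and balances are illustrative; the commented
results follow from the difference-based `exchange`):

::: scheme
(define a1 (make-account-and-serializer 100))
(define a2 (make-account-and-serializer 30))
(serialized-exchange a1 a2)   ; difference is 70: withdraw 70 from a1, deposit 70 in a2
(a1 'balance)
*30*
(a2 'balance)
*100*
:::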
> **[]{#Exercise 3.43 label="Exercise 3.43"}Exercise 3.43:** Suppose
> that the balances in three accounts start out as \$10, \$20, and \$30,
> and that multiple processes run, exchanging the balances in the
> accounts. Argue that if the processes are run sequentially, after any
> number of concurrent exchanges, the account balances should be \$10,
> \$20, and \$30 in some order. Draw a timing diagram like the one in
> [Figure 3.29](#Figure 3.29) to show how this condition can be violated
> if the exchanges are implemented using the first version of the
> account-exchange program in this section. On the other hand, argue
> that even with this `exchange` program, the sum of the balances in the
> accounts will be preserved. Draw a timing diagram to show how even
> this condition would be violated if we did not serialize the
> transactions on individual accounts.
> **[]{#Exercise 3.44 label="Exercise 3.44"}Exercise 3.44:** Consider
> the problem of transferring an amount from one account to another. Ben
> Bitdiddle claims that this can be accomplished with the following
> procedure, even if there are multiple people concurrently transferring
> money among multiple accounts, using any account mechanism that
> serializes deposit and withdrawal transactions, for example, the
> version of `make/account` in the text above.
>
> ::: scheme
> (define (transfer from-account to-account amount)
>   ((from-account 'withdraw) amount)
>   ((to-account 'deposit) amount))
> :::
>
> Louis Reasoner claims that there is a problem here, and that we need
> to use a more sophisticated method, such as the one required for
> dealing with the exchange problem. Is Louis right? If not, what is the
> essential difference between the transfer problem and the exchange
> problem? (You should assume that the balance in `from/account` is at
> least `amount`.)
> **[]{#Exercise 3.45 label="Exercise 3.45"}Exercise 3.45:** Louis
> Reasoner thinks our bank-account system is unnecessarily complex and
> error-prone now that deposits and withdrawals aren't automatically
> serialized. He suggests that `make/account/and/serializer` should have
> exported the serializer (for use by such procedures as
> `serialized/exchange`) in addition to (rather than instead of) using
> it to serialize accounts and deposits as `make/account` did. He
> proposes to redefine accounts as follows:
>
> ::: smallscheme
> (define (make-account-and-serializer balance)
>   (define (withdraw amount)
>     (if (>= balance amount)
>         (begin (set! balance (- balance amount))
>                balance)
>         "Insufficient funds"))
>   (define (deposit amount)
>     (set! balance (+ balance amount))
>     balance)
>   (let ((balance-serializer (make-serializer)))
>     (define (dispatch m)
>       (cond ((eq? m 'withdraw) (balance-serializer withdraw))
>             ((eq? m 'deposit) (balance-serializer deposit))
>             ((eq? m 'balance) balance)
>             ((eq? m 'serializer) balance-serializer)
>             (else (error "Unknown request: MAKE-ACCOUNT" m))))
>     dispatch))
> :::
>
> Then deposits are handled as with the original `make/account`:
>
> ::: scheme
> (define (deposit account amount) ((account 'deposit) amount))
> :::
>
> Explain what is wrong with Louis's reasoning. In particular, consider
> what happens when `serialized/exchange` is called.
#### Implementing serializers {#implementing-serializers .unnumbered}
We implement serializers in terms of a more primitive synchronization
mechanism called a *mutex*. A mutex is an object that supports two
operations---the mutex can be *acquired*, and the mutex can be
*released*. Once a mutex has been acquired, no other acquire operations
on that mutex may proceed until the mutex is released.[^172] In our
implementation, each serializer has an associated mutex. Given a
procedure `p`, the serializer returns a procedure that acquires the
mutex, runs `p`, and then releases the mutex. This ensures that only one
of the procedures produced by the serializer can be running at once,
which is precisely the serialization property that we need to guarantee.
::: scheme
(define (make-serializer)
  (let ((mutex (make-mutex)))
    (lambda (p)
      (define (serialized-p . args)
        (mutex 'acquire)
        (let ((val (apply p args)))
          (mutex 'release)
          val))
      serialized-p)))
:::
The mutex is a mutable object (here we'll use a one-element list, which
we'll refer to as a *cell*) that can hold the value true or false. When
the value is false, the mutex is available to be acquired. When the
value is true, the mutex is unavailable, and any process that attempts
to acquire the mutex must wait.
Our mutex constructor `make/mutex` begins by initializing the cell
contents to false. To acquire the mutex, we test the cell. If the mutex
is available, we set the cell contents to true and proceed. Otherwise,
we wait in a loop, attempting to acquire over and over again, until we
find that the mutex is available.[^173] To release the mutex, we set the
cell contents to false.
::: scheme
(define (make-mutex)
  (let ((cell (list false)))
    (define (the-mutex m)
      (cond ((eq? m 'acquire)
             (if (test-and-set! cell)
                 (the-mutex 'acquire)))  ; retry
            ((eq? m 'release) (clear! cell))))
    the-mutex))

(define (clear! cell) (set-car! cell false))
:::
`test/and/set!` tests the cell and returns the result of the test. In
addition, if the test was false, `test/and/set!` sets the cell contents
to true before returning false. We can express this behavior as the
following procedure:
::: scheme
(define (test-and-set! cell)
  (if (car cell)
      true
      (begin (set-car! cell true)
             false)))
:::
However, this implementation of `test/and/set!` does not suffice as it
stands. There is a crucial subtlety here, which is the essential place
where concurrency control enters the system: The `test/and/set!`
operation must be performed *atomically*. That is, we must guarantee
that, once a process has tested the cell and found it to be false, the
cell contents will actually be set to true before any other process can
test the cell. If we do not make this guarantee, then the mutex can fail
in a way similar to the bank-account failure in [Figure
3.29](#Figure 3.29). (See [Exercise 3.46](#Exercise 3.46).)
The actual implementation of `test/and/set!` depends on the details of
how our system runs concurrent processes. For example, we might be
executing concurrent processes on a sequential processor using a
time-slicing mechanism that cycles through the processes, permitting
each process to run for a short time before interrupting it and moving
on to the next process. In that case, `test/and/set!` can work by
disabling time slicing during the testing and setting.[^174]
Alternatively, multiprocessing computers provide instructions that
support atomic operations directly in hardware.[^175]
> **[]{#Exercise 3.46 label="Exercise 3.46"}Exercise 3.46:** Suppose
> that we implement `test/and/set!` using an ordinary procedure as shown
> in the text, without attempting to make the operation atomic. Draw a
> timing diagram like the one in [Figure 3.29](#Figure 3.29) to
> demonstrate how the mutex implementation can fail by allowing two
> processes to acquire the mutex at the same time.
> **[]{#Exercise 3.47 label="Exercise 3.47"}Exercise 3.47:** A semaphore
> (of size $n$) is a generalization of a mutex. Like a mutex, a
> semaphore supports acquire and release operations, but it is more
> general in that up to $n$ processes can acquire it concurrently.
> Additional processes that attempt to acquire the semaphore must wait
> for release operations. Give implementations of semaphores
>
> a. in terms of mutexes
>
> b. in terms of atomic `test/and/set!` operations.
#### Deadlock {#deadlock .unnumbered}
Now that we have seen how to implement serializers, we can see that
account exchanging still has a problem, even with the
`serialized/exchange` procedure above. Imagine that Peter attempts to
exchange $a_1$ with $a_2$ while Paul concurrently attempts to exchange
$a_2$ with $a_1$. Suppose that Peter's process reaches the point where
it has entered a serialized procedure protecting $a_1$ and, just after
that, Paul's process enters a serialized procedure protecting $a_2$.
Now Peter cannot proceed (to enter a serialized procedure protecting
$a_2$) until Paul exits the serialized procedure protecting $a_2$.
Similarly, Paul cannot proceed until Peter exits the serialized
procedure protecting $a_1$.
Each process is stalled forever, waiting for the other. This situation
is called a *deadlock*. Deadlock is always a danger in systems that
provide concurrent access to multiple shared resources.
One way to avoid the deadlock in this situation is to give each account
a unique identification number and rewrite `serialized/exchange` so that
a process will always attempt to enter a procedure protecting the
lowest-numbered account first. Although this method works well for the
exchange problem, there are other situations that require more
sophisticated deadlock-avoidance techniques, or where deadlock cannot be
avoided at all. (See [Exercise 3.48](#Exercise 3.48) and [Exercise
3.49](#Exercise 3.49).)[^176]
> **[]{#Exercise 3.48 label="Exercise 3.48"}Exercise 3.48:** Explain in
> detail why the deadlock-avoidance method described above, (i.e., the
> accounts are numbered, and each process attempts to acquire the
> smaller-numbered account first) avoids deadlock in the exchange
> problem. Rewrite `serialized/exchange` to incorporate this idea. (You
> will also need to modify `make/account` so that each account is
> created with a number, which can be accessed by sending an appropriate
> message.)
> **[]{#Exercise 3.49 label="Exercise 3.49"}Exercise 3.49:** Give a
> scenario where the deadlock-avoidance mechanism described above does
> not work. (Hint: In the exchange problem, each process knows in
> advance which accounts it will need to get access to. Consider a
> situation where a process must get access to some shared resources
> before it can know which additional shared resources it will require.)
#### Concurrency, time, and communication {#concurrency-time-and-communication .unnumbered}
We've seen how programming concurrent systems requires controlling the
ordering of events when different processes access shared state, and
we've seen how to achieve this control through judicious use of
serializers. But the problems of concurrency lie deeper than this,
because, from a fundamental point of view, it's not always clear what is
meant by "shared state."
Mechanisms such as `test/and/set!` require processes to examine a global
shared flag at arbitrary times. This is problematic and inefficient to
implement in modern high-speed processors, where due to optimization
techniques such as pipelining and cached memory, the contents of memory
may not be in a consistent state at every instant. In contemporary
multiprocessing systems, therefore, the serializer paradigm is being
supplanted by new approaches to concurrency control.[^177]
The problematic aspects of shared state also arise in large, distributed
systems. For instance, imagine a distributed banking system where
individual branch banks maintain local values for bank balances and
periodically compare these with values maintained by other branches. In
such a system the value of "the account balance" would be undetermined,
except right after synchronization. If Peter deposits money in an
account he holds jointly with Paul, when should we say that the account
balance has changed---when the balance in the local branch changes, or
not until after the synchronization? And if Paul accesses the account
from a different branch, what are the reasonable constraints to place on
the banking system such that the behavior is "correct"? The only thing
that might matter for correctness is the behavior observed by Peter and
Paul individually and the "state" of the account immediately after
synchronization. Questions about the "real" account balance or the order
of events between synchronizations may be irrelevant or
meaningless.[^178]
The basic phenomenon here is that synchronizing different processes,
establishing shared state, or imposing an order on events requires
communication among the processes. In essence, any notion of time in
concurrency control must be intimately tied to communication.[^179] It
is intriguing that a similar connection between time and communication
also arises in the Theory of Relativity, where the speed of light (the
fastest signal that can be used to synchronize events) is a fundamental
constant relating time and space. The complexities we encounter in
dealing with time and state in our computational models may in fact
mirror a fundamental complexity of the physical universe.
## Streams {#Section 3.5}
We've gained a good understanding of assignment as a tool in modeling,
as well as an appreciation of the complex problems that assignment
raises. It is time to ask whether we could have gone about things in a
different way, so as to avoid some of these problems. In this section,
we explore an alternative approach to modeling state, based on data
structures called *streams*. As we shall see, streams can mitigate some
of the complexity of modeling state.
Let's step back and review where this complexity comes from. In an
attempt to model real-world phenomena, we made some apparently
reasonable decisions: We modeled real-world objects with local state by
computational objects with local variables. We identified time variation
in the real world with time variation in the computer. We implemented
the time variation of the states of the model objects in the computer
with assignments to the local variables of the model objects.
Is there another approach? Can we avoid identifying time in the computer
with time in the modeled world? Must we make the model change with time
in order to model phenomena in a changing world? Think about the issue
in terms of mathematical functions. We can describe the time-varying
behavior of a quantity $x$ as a function of time $x(t)$. If we
concentrate on $x$ instant by instant, we think of it as a changing
quantity. Yet if we concentrate on the entire time history of values, we
do not emphasize change---the function itself does not change.[^180]
If time is measured in discrete steps, then we can model a time function
as a (possibly infinite) sequence. In this section, we will see how to
model change in terms of sequences that represent the time histories of
the systems being modeled. To accomplish this, we introduce new data
structures called *streams*. From an abstract point of view, a stream is
simply a sequence. However, we will find that the straightforward
implementation of streams as lists (as in [Section
2.2.1](#Section 2.2.1)) doesn't fully reveal the power of stream
processing. As an alternative, we introduce the technique of *delayed
evaluation*, which enables us to represent very large (even infinite)
sequences as streams.
Stream processing lets us model systems that have state without ever
using assignment or mutable data. This has important implications, both
theoretical and practical, because we can build models that avoid the
drawbacks inherent in introducing assignment. On the other hand, the
stream framework raises difficulties of its own, and the question of
which modeling technique leads to more modular and more easily
maintained systems remains open.
### Streams Are Delayed Lists {#Section 3.5.1}
As we saw in [Section 2.2.3](#Section 2.2.3), sequences can serve as
standard interfaces for combining program modules. We formulated
powerful abstractions for manipulating sequences, such as `map`,
`filter`, and `accumulate`, that capture a wide variety of operations in
a manner that is both succinct and elegant.
Unfortunately, if we represent sequences as lists, this elegance is
bought at the price of severe inefficiency with respect to both the time
and space required by our computations. When we represent manipulations
on sequences as transformations of lists, our programs must construct
and copy data structures (which may be huge) at every step of a process.
To see why this is true, let us compare two programs for computing the
sum of all the prime numbers in an interval. The first program is
written in standard iterative style:[^181]
::: scheme
(define (sum-primes a b)
  (define (iter count accum)
    (cond ((> count b) accum)
          ((prime? count)
           (iter (+ count 1) (+ count accum)))
          (else (iter (+ count 1) accum))))
  (iter a 0))
:::
The second program performs the same computation using the sequence
operations of [Section 2.2.3](#Section 2.2.3):
::: scheme
(define (sum-primes a b)
  (accumulate +
              0
              (filter prime?
                      (enumerate-interval a b))))
:::
In carrying out the computation, the first program needs to store only
the sum being accumulated. In contrast, the filter in the second program
cannot do any testing until `enumerate/interval` has constructed a
complete list of the numbers in the interval. The filter generates
another list, which in turn is passed to `accumulate` before being
collapsed to form a sum. Such large intermediate storage is not needed
by the first program, which we can think of as enumerating the interval
incrementally, adding each prime to the sum as it is generated.
The inefficiency in using lists becomes painfully apparent if we use the
sequence paradigm to compute the second prime in the interval from
10,000 to 1,000,000 by evaluating the expression
::: scheme
(car (cdr (filter prime? (enumerate-interval 10000 1000000))))
:::
This expression does find the second prime, but the computational
overhead is outrageous. We construct a list of almost a million
integers, filter this list by testing each element for primality, and
then ignore almost all of the result. In a more traditional programming
style, we would interleave the enumeration and the filtering, and stop
when we reached the second prime.
Streams are a clever idea that allows one to use sequence manipulations
without incurring the costs of manipulating sequences as lists. With
streams we can achieve the best of both worlds: We can formulate
programs elegantly as sequence manipulations, while attaining the
efficiency of incremental computation. The basic idea is to arrange to
construct a stream only partially, and to pass the partial construction
to the program that consumes the stream. If the consumer attempts to
access a part of the stream that has not yet been constructed, the
stream will automatically construct just enough more of itself to
produce the required part, thus preserving the illusion that the entire
stream exists. In other words, although we will write programs as if we
were processing complete sequences, we design our stream implementation
to automatically and transparently interleave the construction of the
stream with its use.
On the surface, streams are just lists with different names for the
procedures that manipulate them. There is a constructor, `cons/stream`,
and two selectors, `stream/car` and `stream/cdr`, which satisfy the
constraints
::: scheme
(stream-car (cons-stream x y)) = x
(stream-cdr (cons-stream x y)) = y
:::
There is a distinguishable object, `the/empty/stream`, which cannot be
the result of any `cons/stream` operation, and which can be identified
with the predicate `stream/null?`.[^182] Thus we can make and use
streams, in just the same way as we can make and use lists, to represent
aggregate data arranged in a sequence. In particular, we can build
stream analogs of the list operations from [Chapter 2](#Chapter 2), such
as `list/ref`, `map`, and `for/each`:[^183]
::: scheme
(define (stream-ref s n)
  (if (= n 0)
      (stream-car s)
      (stream-ref (stream-cdr s) (- n 1))))

(define (stream-map proc s)
  (if (stream-null? s)
      the-empty-stream
      (cons-stream (proc (stream-car s))
                   (stream-map proc (stream-cdr s)))))

(define (stream-for-each proc s)
  (if (stream-null? s)
      'done
      (begin (proc (stream-car s))
             (stream-for-each proc (stream-cdr s)))))
:::
`stream/for/each` is useful for viewing streams:
::: scheme
(define (display-stream s)
  (stream-for-each display-line s))

(define (display-line x)
  (newline)
  (display x))
:::
To make the stream implementation automatically and transparently
interleave the construction of a stream with its use, we will arrange
for the `cdr` of a stream to be evaluated when it is accessed by the
`stream/cdr` procedure rather than when the stream is constructed by
`cons/stream`. This implementation choice is reminiscent of our
discussion of rational numbers in [Section 2.1.2](#Section 2.1.2), where
we saw that we can choose to implement rational numbers so that the
reduction of numerator and denominator to lowest terms is performed
either at construction time or at selection time. The two
rational-number implementations produce the same data abstraction, but
the choice has an effect on efficiency. There is a similar relationship
between streams and ordinary lists. As a data abstraction, streams are
the same as lists. The difference is the time at which the elements are
evaluated. With ordinary lists, both the `car` and the `cdr` are
evaluated at construction time. With streams, the `cdr` is evaluated at
selection time.
Our implementation of streams will be based on a special form called
`delay`. Evaluating `(delay `$\langle$*`exp`*$\rangle$`)` does not
evaluate the expression $\langle$*exp*$\kern0.08em\rangle$, but rather
returns a so-called *delayed object*, which we can think of as a
"promise" to evaluate $\langle$*exp*$\kern0.08em\rangle$ at some future
time. As a companion to `delay`, there is a procedure called `force`
that takes a delayed object as argument and performs the evaluation---in
effect, forcing the `delay` to fulfill its promise. We will see below
how `delay` and `force` can be implemented, but first let us use these
to construct streams.
`cons/stream` is a special form defined so that
::: scheme
(cons-stream
$\color{SchemeDark}\langle$ *a* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *b* $\color{SchemeDark}\rangle$ )
:::
is equivalent to
::: scheme
(cons $\color{SchemeDark}\langle$ *a* $\color{SchemeDark}\rangle$
(delay $\color{SchemeDark}\langle$ *b* $\color{SchemeDark}\rangle$ ))
:::
What this means is that we will construct streams using pairs. However,
rather than placing the value of the rest of the stream into the `cdr`
of the pair we will put there a promise to compute the rest if it is
ever requested. `stream/car` and `stream/cdr` can now be defined as
procedures:
::: scheme
(define (stream-car stream) (car stream))

(define (stream-cdr stream) (force (cdr stream)))
:::
`stream/car` selects the `car` of the pair; `stream/cdr` selects the
`cdr` of the pair and evaluates the delayed expression found there to
obtain the rest of the stream.[^184]
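As a small illustration of this behavior (assuming the usual `error` primitive, which signals an error when called), we might build a stream whose delayed portion would blow up if it were ever evaluated; taking only the `car` never forces it:

::: scheme
(define s (cons-stream 1 (error "rest never needed")))
(stream-car s) ; 1 -- no error is signaled: the cdr is still an unforced promise
:::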
#### The stream implementation in action {#the-stream-implementation-in-action .unnumbered}
To see how this implementation behaves, let us analyze the "outrageous"
prime computation we saw above, reformulated in terms of streams:
::: scheme
(stream-car
 (stream-cdr
  (stream-filter prime?
                 (stream-enumerate-interval 10000 1000000))))
:::
We will see that it does indeed work efficiently.
We begin by calling `stream/enumerate/interval` with the arguments
10,000 and 1,000,000. `stream/enumerate/interval` is the stream analog
of `enumerate/interval` ([Section 2.2.3](#Section 2.2.3)):
::: scheme
(define (stream-enumerate-interval low high)
  (if (\> low high)
      the-empty-stream
      (cons-stream
       low
       (stream-enumerate-interval (+ low 1) high))))
:::
and thus the result returned by `stream/enumerate/interval`, formed by
the `cons/stream`, is[^185]
::: scheme
(cons 10000 (delay (stream-enumerate-interval 10001 1000000)))
:::
That is, `stream/enumerate/interval` returns a stream represented as a
pair whose `car` is 10,000 and whose `cdr` is a promise to enumerate
more of the interval if so requested. This stream is now filtered for
primes, using the stream analog of the `filter` procedure ([Section
2.2.3](#Section 2.2.3)):
::: scheme
(define (stream-filter pred stream)
  (cond ((stream-null? stream) the-empty-stream)
        ((pred (stream-car stream))
         (cons-stream (stream-car stream)
                      (stream-filter pred (stream-cdr stream))))
        (else (stream-filter pred (stream-cdr stream)))))
:::
`stream/filter` tests the `stream/car` of the stream (the `car` of the
pair, which is 10,000). Since this is not prime, `stream/filter`
examines the `stream/cdr` of its input stream. The call to `stream/cdr`
forces evaluation of the delayed `stream/enumerate/interval`, which now
returns
::: scheme
(cons 10001 (delay (stream-enumerate-interval 10002 1000000)))
:::
`stream/filter` now looks at the `stream/car` of this stream, 10,001,
sees that this is not prime either, forces another `stream/cdr`, and so
on, until `stream/enumerate/interval` yields the prime 10,007, whereupon
`stream/filter`, according to its definition, returns
::: scheme
(cons-stream (stream-car stream)
             (stream-filter pred (stream-cdr stream)))
:::
which in this case is
::: scheme
(cons 10007
      (delay (stream-filter
              prime?
              (cons 10008
                    (delay (stream-enumerate-interval
                            10009 1000000))))))
:::
This result is now passed to `stream/cdr` in our original expression.
This forces the delayed `stream/filter`, which in turn keeps forcing the
delayed `stream/enumerate/interval` until it finds the next prime, which
is 10,009. Finally, the result passed to `stream/car` in our original
expression is
::: scheme
(cons 10009
      (delay (stream-filter
              prime?
              (cons 10010
                    (delay (stream-enumerate-interval
                            10011 1000000))))))
:::
`stream/car` returns 10,009, and the computation is complete. Only as
many integers were tested for primality as were necessary to find the
second prime, and the interval was enumerated only as far as was
necessary to feed the prime filter.
In general, we can think of delayed evaluation as "demand-driven"
programming, whereby each stage in the stream process is activated only
enough to satisfy the next stage. What we have done is to decouple the
actual order of events in the computation from the apparent structure of
our procedures. We write procedures as if the streams existed "all at
once" when, in reality, the computation is performed incrementally, as
in traditional programming styles.
#### Implementing `delay` and `force` {#implementing-delay-and-force .unnumbered}
Although `delay` and `force` may seem like mysterious operations, their
implementation is really quite straightforward. `delay` must package an
expression so that it can be evaluated later on demand, and we can
accomplish this simply by treating the expression as the body of a
procedure. `delay` can be a special form such that
::: scheme
(delay
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}\rangle$ )
:::
is syntactic sugar for
::: scheme
(lambda ()
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}\rangle$ )
:::
`force` simply calls the procedure (of no arguments) produced by
`delay`, so we can implement `force` as a procedure:
::: scheme
(define (force delayed-object) (delayed-object))
:::
This implementation suffices for `delay` and `force` to work as
advertised, but there is an important optimization that we can include.
In many applications, we end up forcing the same delayed object many
times. This can lead to serious inefficiency in recursive programs
involving streams. (See [Exercise 3.57](#Exercise 3.57).) The solution
is to build delayed objects so that the first time they are forced, they
store the value that is computed. Subsequent forcings will simply return
the stored value without repeating the computation. In other words, we
implement `delay` as a special-purpose memoized procedure similar to the
one described in [Exercise 3.27](#Exercise 3.27). One way to accomplish
this is to use the following procedure, which takes as argument a
procedure (of no arguments) and returns a memoized version of the
procedure. The first time the memoized procedure is run, it saves the
computed result. On subsequent evaluations, it simply returns the
result.
::: scheme
(define (memo-proc proc)
  (let ((already-run? false) (result false))
    (lambda ()
      (if (not already-run?)
          (begin (set! result (proc))
                 (set! already-run? true)
                 result)
          result))))
:::
`delay` is then defined so that `(delay `$\langle$*`exp`*$\rangle$`)` is
equivalent to
::: scheme
(memo-proc (lambda ()
$\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}\rangle$ ))
:::
and `force` is as defined previously.[^186]
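As a small illustration of the effect of this memoization, the body of a delayed expression should now run at most once, no matter how many times the promise is forced; for instance, one might expect

::: scheme
(define p (delay (begin (display "computing") 42)))
(force p) ; prints "computing" and returns 42
(force p) ; returns the stored 42 without printing again
:::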
> **[]{#Exercise 3.50 label="Exercise 3.50"}Exercise 3.50:** Complete
> the following definition, which generalizes `stream/map` to allow
> procedures that take multiple arguments, analogous to `map` in
> [Section 2.2.1](#Section 2.2.1), [Footnote 12](#Footnote 12).
>
> ::: scheme
> (define (stream-map proc . argstreams)
>   (if ( $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ (car argstreams))
>       the-empty-stream
>       ( $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
>        (apply proc (map $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ argstreams))
>        (apply stream-map
>               (cons proc (map $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ argstreams))))))
> :::
> **[]{#Exercise 3.51 label="Exercise 3.51"}Exercise 3.51:** In order to
> take a closer look at delayed evaluation, we will use the following
> procedure, which simply returns its argument after printing it:
>
> ::: scheme
> (define (show x) (display-line x) x)
> :::
>
> What does the interpreter print in response to evaluating each
> expression in the following sequence?[^187]
>
> ::: scheme
> (define x
>   (stream-map show
>               (stream-enumerate-interval 0 10)))
> (stream-ref x 5)
> (stream-ref x 7)
> :::
> **[]{#Exercise 3.52 label="Exercise 3.52"}Exercise 3.52:** Consider
> the sequence of expressions
>
> ::: scheme
> (define sum 0)
> (define (accum x) (set! sum (+ x sum)) sum)
> (define seq
>   (stream-map accum
>               (stream-enumerate-interval 1 20)))
> (define y (stream-filter even? seq))
> (define z
>   (stream-filter (lambda (x) (= (remainder x 5) 0))
>                  seq))
> (stream-ref y 7)
> (display-stream z)
> :::
>
> What is the value of `sum` after each of the above expressions is
> evaluated? What is the printed response to evaluating the `stream/ref`
> and `display/stream` expressions? Would these responses differ if we
> had implemented `(delay `$\langle$*`exp`*$\rangle$`)` simply as
> `(lambda () `$\langle$*`exp`*$\rangle$`)` without using the
> optimization provided by `memo/proc`? Explain.
### Infinite Streams {#Section 3.5.2}
We have seen how to support the illusion of manipulating streams as
complete entities even though, in actuality, we compute only as much of
the stream as we need to access. We can exploit this technique to
represent sequences efficiently as streams, even if the sequences are
very long. What is more striking, we can use streams to represent
sequences that are infinitely long. For instance, consider the following
definition of the stream of positive integers:
::: scheme
(define (integers-starting-from n)
  (cons-stream n (integers-starting-from (+ n 1))))

(define integers (integers-starting-from 1))
:::
This makes sense because `integers` will be a pair whose `car` is 1 and
whose `cdr` is a promise to produce the integers beginning with 2. This
is an infinitely long stream, but in any given time we can examine only
a finite portion of it. Thus, our programs will never know that the
entire infinite stream is not there.
Using `integers` we can define other infinite streams, such as the
stream of integers that are not divisible by 7:
::: scheme
(define (divisible? x y) (= (remainder x y) 0))

(define no-sevens
  (stream-filter (lambda (x) (not (divisible? x 7)))
                 integers))
:::
Then we can find integers not divisible by 7 simply by accessing
elements of this stream:
::: scheme
(stream-ref no-sevens 100) *117*
:::
In analogy with `integers`, we can define the infinite stream of
Fibonacci numbers:
::: scheme
(define (fibgen a b)
  (cons-stream a (fibgen b (+ a b))))

(define fibs (fibgen 0 1))
:::
`fibs` is a pair whose `car` is 0 and whose `cdr` is a promise to
evaluate `(fibgen 1 1)`. When we evaluate this delayed `(fibgen 1 1)`,
it will produce a pair whose `car` is 1 and whose `cdr` is a promise to
evaluate `(fibgen 1 2)`, and so on.
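For instance, one might pick out an individual Fibonacci number from this infinite stream:

::: scheme
(stream-ref fibs 6) ; 8, since fibs begins 0, 1, 1, 2, 3, 5, 8, ...
:::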
For a look at a more exciting infinite stream, we can generalize the
`no/sevens` example to construct the infinite stream of prime numbers,
using a method known as the *sieve of Eratosthenes*.[^188] We start with
the integers beginning with 2, which is the first prime. To get the rest
of the primes, we start by filtering the multiples of 2 from the rest of
the integers. This leaves a stream beginning with 3, which is the next
prime. Now we filter the multiples of 3 from the rest of this stream.
This leaves a stream beginning with 5, which is the next prime, and so
on. In other words, we construct the primes by a sieving process,
described as follows: To sieve a stream `S`, form a stream whose first
element is the first element of `S` and the rest of which is obtained by
filtering all multiples of the first element of `S` out of the rest of
`S` and sieving the result. This process is readily described in terms
of stream operations:
::: scheme
(define (sieve stream)
  (cons-stream
   (stream-car stream)
   (sieve (stream-filter
           (lambda (x)
             (not (divisible? x (stream-car stream))))
           (stream-cdr stream)))))

(define primes (sieve (integers-starting-from 2)))
:::
Now to find a particular prime we need only ask for it:
::: scheme
(stream-ref primes 50) *233*
:::
It is interesting to contemplate the signal-processing system set up by
`sieve`, shown in the "Henderson diagram" in [Figure
3.31](#Figure 3.31).[^189] The input stream feeds into an "un`cons`er"
that separates the first element of the stream from the rest of the
stream. The first element is used to construct a divisibility filter,
through which the rest is passed, and the output of the filter is fed to
another sieve box. Then the original first element is `cons`ed onto the
output of the internal sieve to form the output stream. Thus, not only
is the stream infinite, but the signal processor is also infinite,
because the sieve contains a sieve within it.
[]{#Figure 3.31 label="Figure 3.31"}
![image](fig/chap3/Fig3.31.pdf){width="111mm"}
> **Figure 3.31:** The prime sieve viewed as a signal-processing system.
#### Defining streams implicitly {#defining-streams-implicitly .unnumbered}
The `integers` and `fibs` streams above were defined by specifying
"generating" procedures that explicitly compute the stream elements one
by one. An alternative way to specify streams is to take advantage of
delayed evaluation to define streams implicitly. For example, the
following expression defines the stream `ones` to be an infinite stream
of ones:
::: scheme
(define ones (cons-stream 1 ones))
:::
This works much like the definition of a recursive procedure: `ones` is
a pair whose `car` is 1 and whose `cdr` is a promise to evaluate `ones`.
Evaluating the `cdr` gives us again a 1 and a promise to evaluate
`ones`, and so on.
We can do more interesting things by manipulating streams with
operations such as `add/streams`, which produces the elementwise sum of
two given streams:[^190]
::: scheme
(define (add-streams s1 s2) (stream-map + s1 s2))
:::
Now we can define the integers as follows:
::: scheme
(define integers (cons-stream 1 (add-streams ones integers)))
:::
This defines `integers` to be a stream whose first element is 1 and the
rest of which is the sum of `ones` and `integers`. Thus, the second
element of `integers` is 1 plus the first element of `integers`, or 2;
the third element of `integers` is 1 plus the second element of
`integers`, or 3; and so on. This definition works because, at any
point, enough of the `integers` stream has been generated so that we can
feed it back into the definition to produce the next integer.
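As an illustrative sketch of how the first few elements unfold:

::: scheme
(stream-car integers)                            ; 1
(stream-car (stream-cdr integers))               ; 1 + 1 = 2
(stream-car (stream-cdr (stream-cdr integers)))  ; 1 + 2 = 3
:::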
We can define the Fibonacci numbers in the same style:
::: scheme
(define fibs
  (cons-stream 0
               (cons-stream 1
                            (add-streams (stream-cdr fibs)
                                         fibs))))
:::
This definition says that `fibs` is a stream beginning with 0 and 1,
such that the rest of the stream can be generated by adding `fibs` to
itself shifted by one place:
::: scheme
    1 1 2 3 5 8 13 21 $\dots$ = `(stream/cdr fibs)`
    0 1 1 2 3 5 8 13 $\dots$ = `fibs`
0 1 1 2 3 5 8 13 21 34 $\dots$ = `fibs`
:::
`scale/stream` is another useful procedure in formulating such stream
definitions. This multiplies each item in a stream by a given constant:
::: scheme
(define (scale-stream stream factor)
  (stream-map (lambda (x) (\* x factor))
              stream))
:::
For example,
::: scheme
(define double (cons-stream 1 (scale-stream double 2)))
:::
produces the stream of powers of 2: 1, 2, 4, 8, 16, 32, $\dots$.
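For instance, one might check:

::: scheme
(stream-ref double 10) ; 1024
:::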
An alternate definition of the stream of primes can be given by starting
with the integers and filtering them by testing for primality. We will
need the first prime, 2, to get started:
::: scheme
(define primes
  (cons-stream 2
               (stream-filter prime?
                              (integers-starting-from 3))))
:::
This definition is not so straightforward as it appears, because we will
test whether a number $n$ is prime by checking whether $n$ is divisible
by a prime (not by just any integer) less than or equal to $\sqrt{n}$:
::: scheme
(define (prime? n)
  (define (iter ps)
    (cond ((\> (square (stream-car ps)) n) true)
          ((divisible? n (stream-car ps)) false)
          (else (iter (stream-cdr ps)))))
  (iter primes))
:::
This is a recursive definition, since `primes` is defined in terms of
the `prime?` predicate, which itself uses the `primes` stream. The
reason this procedure works is that, at any point, enough of the
`primes` stream has been generated to test the primality of the numbers
we need to check next. That is, for every $n$ we test for primality,
either $n$ is not prime (in which case there is a prime already
generated that divides it) or $n$ is prime (in which case there is a
prime already generated---i.e., a prime less than $n$---that is greater
than $\sqrt{n}$).[^191]
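As with the sieve-based stream, we can simply ask this stream for a particular prime; for example:

::: scheme
(stream-ref primes 4) ; 11
:::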
> **[]{#Exercise 3.53 label="Exercise 3.53"}Exercise 3.53:** Without
> running the program, describe the elements of the stream defined by
>
> ::: scheme
> (define s (cons-stream 1 (add-streams s s)))
> :::
> **[]{#Exercise 3.54 label="Exercise 3.54"}Exercise 3.54:** Define a
> procedure `mul/streams`, analogous to `add/streams`, that produces the
> elementwise product of its two input streams. Use this together with
> the stream of `integers` to complete the following definition of the
> stream whose $n^{\mathrm{th}}$ element (counting from 0) is $n + 1$
> factorial:
>
> ::: scheme
> (define factorials (cons-stream 1 (mul-streams
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> :::
> **[]{#Exercise 3.55 label="Exercise 3.55"}Exercise 3.55:** Define a
> procedure `partial/sums` that takes as argument a stream $S$ and
> returns the stream whose elements are $S_0$, $S_0 + S_1$,
> $S_0 + S_1 + S_2, \dots$. For example, `(partial/sums integers)`
> should be the stream 1, 3, 6, 10, 15, $\dots$.
> **[]{#Exercise 3.56 label="Exercise 3.56"}Exercise 3.56:** A famous
> problem, first raised by R. Hamming, is to enumerate, in ascending
> order with no repetitions, all positive integers with no prime factors
> other than 2, 3, or 5. One obvious way to do this is to simply test
> each integer in turn to see whether it has any factors other than 2,
> 3, and 5. But this is very inefficient, since, as the integers get
> larger, fewer and fewer of them fit the requirement. As an
> alternative, let us call the required stream of numbers `S` and notice
> the following facts about it.
>
> - `S` begins with 1.
>
> - The elements of `(scale/stream S 2)` are also elements of `S`.
>
> - The same is true for `(scale/stream S 3)` and
>   `(scale/stream S 5)`.
>
> - These are all the elements of `S`.
>
> Now all we have to do is combine elements from these sources. For this
> we define a procedure `merge` that combines two ordered streams into
> one ordered result stream, eliminating repetitions:
>
> ::: scheme
> (define (merge s1 s2)
>   (cond ((stream-null? s1) s2)
>         ((stream-null? s2) s1)
>         (else
>          (let ((s1car (stream-car s1))
>                (s2car (stream-car s2)))
>            (cond ((\< s1car s2car)
>                   (cons-stream s1car (merge (stream-cdr s1) s2)))
>                  ((\> s1car s2car)
>                   (cons-stream s2car (merge s1 (stream-cdr s2))))
>                  (else
>                   (cons-stream s1car
>                                (merge (stream-cdr s1)
>                                       (stream-cdr s2)))))))))
> :::
>
> Then the required stream may be constructed with `merge`, as follows:
>
> ::: scheme
> (define S (cons-stream 1 (merge
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> :::
>
> Fill in the missing expressions in the places marked
> $\langle$??$\kern0.08em\rangle$ above.
> **[]{#Exercise 3.57 label="Exercise 3.57"}Exercise 3.57:** How many
> additions are performed when we compute the $n^{\mathrm{th}}$
> Fibonacci number using the definition of `fibs` based on the
> `add/streams` procedure? Show that the number of additions would be
> exponentially greater if we had implemented
> `(delay `$\langle$*`exp`*$\rangle$`)` simply as
> `(lambda () `$\langle$*`exp`*$\rangle$`)`, without using the
> optimization provided by the `memo/proc` procedure described in
> [Section 3.5.1](#Section 3.5.1).[^192]
> **[]{#Exercise 3.58 label="Exercise 3.58"}Exercise 3.58:** Give an
> interpretation of the stream computed by the following procedure:
>
> ::: scheme
> (define (expand num den radix)
>   (cons-stream
>    (quotient (\* num radix) den)
>    (expand (remainder (\* num radix) den) den radix)))
> :::
>
> (`quotient` is a primitive that returns the integer quotient of two
> integers.) What are the successive elements produced by
> `(expand 1 7 10)`? What is produced by `(expand 3 8 10)`?
> **[]{#Exercise 3.59 label="Exercise 3.59"}Exercise 3.59:** In [Section
> 2.5.3](#Section 2.5.3) we saw how to implement a polynomial arithmetic
> system representing polynomials as lists of terms. In a similar way,
> we can work with *power series*, such as
>
> $$e^x = 1 + x + \displaystyle\frac{x^2}{2} + \displaystyle\frac{x^3}{3 \cdot 2} + \displaystyle\frac{x^4}{4 \cdot 3 \cdot 2} + \dots,$$
>
> $$\cos x = 1 - \displaystyle\frac{x^2}{2} + \displaystyle\frac{x^4}{4 \cdot 3 \cdot 2} - \dots,$$
>
> $$\sin x = x - \displaystyle\frac{x^3}{3 \cdot 2} + \displaystyle\frac{x^5}{5 \cdot 4 \cdot 3 \cdot 2} - \dots$$
>
> represented as infinite streams. We will represent the series $a_0 +
> a_1 x + a_2 x^2 + a_3 x^3 + \dots$ as the stream whose elements are
> the coefficients $a_0$, $a_1$, $a_2$, $a_3$, $\dots$.
>
> a. The integral of the series
> $a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots$ is the series
>
> $$c + a_0 x + {1\over2} a_1 x^2 + {1\over3} a_2 x^3 + {1\over4} a_3 x^4 + \dots,$$
>
> where $c$ is any constant. Define a procedure `integrate/series`
> that takes as input a stream $a_0$, $a_1$, $a_2$, $\dots$
> representing a power series and returns the stream $a_0$,
> ${1\over2}a_1$, ${1\over3}a_2$, $\dots$ of coefficients of the
> non-constant terms of the integral of the series. (Since the
> result has no constant term, it doesn't represent a power series;
> when we use `integrate/series`, we will `cons` on the appropriate
> constant.)
>
> b. The function $x \mapsto e^x$ is its own derivative. This implies
> that $e^x$ and the integral of $e^x$ are the same series, except
> for the constant term, which is $e^0 = 1$. Accordingly, we can
> generate the series for $e^x$ as
>
> ::: scheme
> (define exp-series (cons-stream 1 (integrate-series exp-series)))
> :::
>
> Show how to generate the series for sine and cosine, starting from
> the facts that the derivative of sine is cosine and the derivative
> of cosine is the negative of sine:
>
> ::: scheme
> (define cosine-series (cons-stream 1
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))
> (define sine-series (cons-stream 0
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ ))
> :::
> **[]{#Exercise 3.60 label="Exercise 3.60"}Exercise 3.60:** With power
> series represented as streams of coefficients as in [Exercise
> 3.59](#Exercise 3.59), adding series is implemented by `add/streams`.
> Complete the definition of the following procedure for multiplying
> series:
>
> ::: scheme
> (define (mul-series s1 s2) (cons-stream
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> (add-streams
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> :::
>
> You can test your procedure by verifying that
> $\sin^2\!x + \cos^2\!x = 1$, using the series from [Exercise
> 3.59](#Exercise 3.59).
> **[]{#Exercise 3.61 label="Exercise 3.61"}Exercise 3.61:** Let $S$ be
> a power series ([Exercise 3.59](#Exercise 3.59)) whose constant term
> is 1. Suppose we want to find the power series $1 / S$, that is, the
> series $X$ such that $SX = 1$. Write $S = 1 + S_R$ where $S_R$ is the
> part of $S$ after the constant term. Then we can solve for $X$ as
> follows:
>
> $$\begin{array}{r@{{}={}}l}
> S \cdot X & 1, \\
> (1 + S_R) \cdot X & 1, \\
> X + S_R \cdot X & 1, \\
> X & 1 - S_R \cdot X.
> \end{array}$$
>
> In other words, $X$ is the power series whose constant term is 1 and
> whose higher-order terms are given by the negative of $S_R$ times $X$.
> Use this idea to write a procedure `invert/unit/series` that computes
> $1 / S$ for a power series $S$ with constant term 1. You will need to
> use `mul/series` from [Exercise 3.60](#Exercise 3.60).
> **[]{#Exercise 3.62 label="Exercise 3.62"}Exercise 3.62:** Use the
> results of [Exercise 3.60](#Exercise 3.60) and [Exercise
> 3.61](#Exercise 3.61) to define a procedure `div/series` that divides
> two power series. `div/series` should work for any two series,
> provided that the denominator series begins with a nonzero constant
> term. (If the denominator has a zero constant term, then `div/series`
> should signal an error.) Show how to use `div/series` together with
> the result of [Exercise 3.59](#Exercise 3.59) to generate the power
> series for tangent.
### Exploiting the Stream Paradigm {#Section 3.5.3}
Streams with delayed evaluation can be a powerful modeling tool,
providing many of the benefits of local state and assignment. Moreover,
they avoid some of the theoretical tangles that accompany the
introduction of assignment into a programming language.
The stream approach can be illuminating because it allows us to build
systems with different module boundaries than systems organized around
assignment to state variables. For example, we can think of an entire
time series (or signal) as a focus of interest, rather than the values
of the state variables at individual moments. This makes it convenient
to combine and compare components of state from different moments.
#### Formulating iterations as stream processes {#formulating-iterations-as-stream-processes .unnumbered}
In [Section 1.2.1](#Section 1.2.1), we introduced iterative processes,
which proceed by updating state variables. We know now that we can
represent state as a "timeless" stream of values rather than as a set of
variables to be updated. Let's adopt this perspective in revisiting the
square-root procedure from [Section 1.1.7](#Section 1.1.7). Recall that
the idea is to generate a sequence of better and better guesses for the
square root of $x$ by applying over and over again the procedure that
improves guesses:
::: scheme
(define (sqrt-improve guess x) (average guess (/ x guess)))
:::
In our original `sqrt` procedure, we made these guesses be the
successive values of a state variable. Instead we can generate the
infinite stream of guesses, starting with an initial guess of 1:[^193]
::: scheme
(define (sqrt-stream x)
  (define guesses
    (cons-stream 1.0
                 (stream-map (lambda (guess) (sqrt-improve guess x))
                             guesses)))
  guesses)

(display-stream (sqrt-stream 2))
*1.*
*1.5*
*1.4166666666666665*
*1.4142156862745097*
*1.4142135623746899*
$\dots$
:::
We can generate more and more terms of the stream to get better and
better guesses. If we like, we can write a procedure that keeps
generating terms until the answer is good enough. (See [Exercise
3.64](#Exercise 3.64).)
Another iteration that we can treat in the same way is to generate an
approximation to $\pi$, based upon the alternating series that we saw in
[Section 1.3.1](#Section 1.3.1):
$${\pi\over4} = 1 - {1\over3} + {1\over5} - {1\over7} + \dots.$$
We first generate the stream of summands of the series (the reciprocals
of the odd integers, with alternating signs). Then we take the stream of
sums of more and more terms (using the `partial/sums` procedure of
[Exercise 3.55](#Exercise 3.55)) and scale the result by 4:
::: scheme
(define (pi-summands n)
  (cons-stream (/ 1.0 n)
               (stream-map - (pi-summands (+ n 2)))))

(define pi-stream
  (scale-stream (partial-sums (pi-summands 1)) 4))

(display-stream pi-stream)
*4.*
*2.666666666666667*
*3.466666666666667*
*2.8952380952380956*
*3.3396825396825403*
*2.9760461760461765*
*3.2837384837384844*
*3.017071817071818*
$\dots$
:::
This gives us a stream of better and better approximations to $\pi$,
although the approximations converge rather slowly. Eight terms of the
sequence bound the value of $\pi$ between 3.284 and 3.017.
So far, our use of the stream of states approach is not much different
from updating state variables. But streams give us an opportunity to do
some interesting tricks. For example, we can transform a stream with a
*sequence accelerator* that converts a sequence of approximations to a
new sequence that converges to the same value as the original, only
faster.
One such accelerator, due to the eighteenth-century Swiss mathematician
Leonhard Euler, works well with sequences that are partial sums of
alternating series (series of terms with alternating signs). In Euler's
technique, if $S_n$ is the $n^{\mathrm{th}}$ term of the original sum
sequence, then the accelerated sequence has terms
$$S_{n+1} - {(S_{n+1} - S_n)^2 \over S_{n-1} - 2S_n + S_{n+1}}\,.$$
Thus, if the original sequence is represented as a stream of values, the
transformed sequence is given by
::: scheme
(define (euler-transform s)
  (let ((s0 (stream-ref s 0))  [; $S_{n-1}$]{.roman}
        (s1 (stream-ref s 1))  [; $S_n$]{.roman}
        (s2 (stream-ref s 2))) [; $S_{n+1}$]{.roman}
    (cons-stream (- s2 (/ (square (- s2 s1))
                          (+ s0 (\* -2 s1) s2)))
                 (euler-transform (stream-cdr s)))))
:::
We can demonstrate Euler acceleration with our sequence of
approximations to $\pi$:
::: scheme
(display-stream (euler-transform pi-stream))
*3.166666666666667*
*3.1333333333333337*
*3.1452380952380956*
*3.13968253968254*
*3.1427128427128435*
*3.1408813408813416*
*3.142071817071818*
*3.1412548236077655*
$\dots$
:::
Even better, we can accelerate the accelerated sequence, and recursively
accelerate that, and so on. Namely, we create a stream of streams (a
structure we'll call a *tableau*) in which each stream is the transform
of the preceding one:
::: scheme
(define (make-tableau transform s)
  (cons-stream s
               (make-tableau transform (transform s))))
:::
The tableau has the form
$$\vbox{
\offinterlineskip
\halign{
\strut \hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil \cr
$ s_{00} $ & $ s_{01} $ & $ s_{02} $ & $ s_{03} $ & $ s_{04} $ & $ \dots $ \cr
& $ s_{10} $ & $ s_{11} $ & $ s_{12} $ & $ s_{13} $ & $ \dots $ \cr
& & $ s_{20} $ & $ s_{21} $ & $ s_{22} $ & $ \dots $ \cr
& & & $ \dots $ & & \cr }
}$$
Finally, we form a sequence by taking the first term in each row of the
tableau:
::: scheme
(define (accelerated-sequence transform s)
  (stream-map stream-car (make-tableau transform s)))
:::
We can demonstrate this kind of "super-acceleration" of the $\pi$
sequence:
::: scheme
(display-stream
 (accelerated-sequence euler-transform pi-stream))
*4.*
*3.166666666666667*
*3.142105263157895*
*3.141599357319005*
*3.1415927140337785*
*3.1415926539752927*
*3.1415926535911765*
*3.141592653589778*
$\dots$
:::
The result is impressive. Taking eight terms of the sequence yields the
correct value of $\pi$ to 14 decimal places. If we had used only the
original $\pi$ sequence, we would need to compute on the order of
$10^{13}$ terms (i.e., expanding the series far enough so that the
individual terms are less than $10^{-13}$) to get that much accuracy!
We could have implemented these acceleration techniques without using
streams. But the stream formulation is particularly elegant and
convenient because the entire sequence of states is available to us as a
data structure that can be manipulated with a uniform set of operations.
> **[]{#Exercise 3.63 label="Exercise 3.63"}Exercise 3.63:** Louis
> Reasoner asks why the `sqrt/stream` procedure was not written in the
> following more straightforward way, without the local variable
> `guesses`:
>
> ::: scheme
> (define (sqrt-stream x)
>   (cons-stream 1.0
>                (stream-map (lambda (guess) (sqrt-improve guess x))
>                            (sqrt-stream x))))
> :::
>
> Alyssa P. Hacker replies that this version of the procedure is
> considerably less efficient because it performs redundant computation.
> Explain Alyssa's answer. Would the two versions still differ in
> efficiency if our implementation of `delay` used only
> `(lambda () `$\langle$*`exp`*$\rangle$`)` without using the
> optimization provided by `memo/proc` ([Section
> 3.5.1](#Section 3.5.1))?
> **[]{#Exercise 3.64 label="Exercise 3.64"}Exercise 3.64:** Write a
> procedure `stream/limit` that takes as arguments a stream and a number
> (the tolerance). It should examine the stream until it finds two
> successive elements that differ in absolute value by less than the
> tolerance, and return the second of the two elements. Using this, we
> could compute square roots up to a given tolerance by
>
> ::: scheme
> (define (sqrt x tolerance) (stream-limit (sqrt-stream x) tolerance))
> :::
> **[]{#Exercise 3.65 label="Exercise 3.65"}Exercise 3.65:** Use the
> series
>
> $$\ln 2 = 1 - {1\over2} + {1\over3} - {1\over4} + \dots$$
>
> to compute three sequences of approximations to the natural logarithm
> of 2, in the same way we did above for $\pi$. How rapidly do these
> sequences converge?
#### Infinite streams of pairs {#infinite-streams-of-pairs .unnumbered}
In [Section 2.2.3](#Section 2.2.3), we saw how the sequence paradigm
handles traditional nested loops as processes defined on sequences of
pairs. If we generalize this technique to infinite streams, then we can
write programs that are not easily represented as loops, because the
"looping" must range over an infinite set.
For example, suppose we want to generalize the `prime/sum/pairs`
procedure of [Section 2.2.3](#Section 2.2.3) to produce the stream of
pairs of *all* integers $(i, j)$ with $i \le j$ such that $i + j$ is
prime. If `int/pairs` is the sequence of all pairs of integers $(i, j)$
with $i \le j$, then our required stream is simply[^194]
::: scheme
(stream-filter
 (lambda (pair) (prime? (+ (car pair) (cadr pair))))
 int-pairs)
:::
Our problem, then, is to produce the stream `int/pairs`. More generally,
suppose we have two streams $S = (S_i)$ and $T = (T_j)$, and imagine the
infinite rectangular array
$$\vbox{
\offinterlineskip
\halign{
\strut \hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil \cr
$ (S_0, T_0) $ & $ (S_0, T_1) $ & $ (S_0, T_2) $ & $ \dots $ \cr
$ (S_1, T_0) $ & $ (S_1, T_1) $ & $ (S_1, T_2) $ & $ \dots $ \cr
$ (S_2, T_0) $ & $ (S_2, T_1) $ & $ (S_2, T_2) $ & $ \dots $ \cr
$ \dots $ & & & \cr }
}$$
We wish to generate a stream that contains all the pairs in the array
that lie on or above the diagonal, i.e., the pairs
$$\vbox{
\offinterlineskip
\halign{
\strut \hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil \cr
$ (S_0, T_0) $ & $ (S_0, T_1) $ & $ (S_0, T_2) $ & $ \dots $ \cr
& $ (S_1, T_1) $ & $ (S_1, T_2) $ & $ \dots $ \cr
& & $ (S_2, T_2) $ & $ \dots $ \cr
& & & $ \dots $ \cr }
}$$
(If we take both $S$ and $T$ to be the stream of integers, then this
will be our desired stream `int/pairs`.)
Call the general stream of pairs `(pairs S T)`, and consider it to be
composed of three parts: the pair $(S_0, T_0)$, the rest of the pairs in
the first row, and the remaining pairs:[^195]
$$\vbox{
\offinterlineskip
\halign{
\strut \hfil \ #\ \hfil & \vrule
\hfil \ #\ \hfil &
\hfil \ #\ \hfil &
\hfil \ #\ \hfil \cr
$ (S_0, T_0) $ & $ (S_0, T_1) $ & $ (S_0, T_2) $ & $ \dots $ \cr
\noalign{\hrule}
& $ (S_1, T_1) $ & $ (S_1, T_2) $ & $ \dots $ \cr
& & $ (S_2, T_2) $ & $ \dots $ \cr
& & & $ \dots $ \cr }
}$$
Observe that the third piece in this decomposition (pairs that are not
in the first row) is (recursively) the pairs formed from
`(stream/cdr S)` and `(stream/cdr T)`. Also note that the second piece
(the rest of the first row) is
::: scheme
(stream-map (lambda (x) (list (stream-car s) x)) (stream-cdr t))
:::
Thus we can form our stream of pairs as follows:
::: scheme
(define (pairs s t)
  (cons-stream
   (list (stream-car s) (stream-car t))
   ( $\color{SchemeDark}\langle$ *combine-in-some-way* $\color{SchemeDark}\rangle$
    (stream-map (lambda (x) (list (stream-car s) x))
                (stream-cdr t))
    (pairs (stream-cdr s) (stream-cdr t)))))
:::
In order to complete the procedure, we must choose some way to combine
the two inner streams. One idea is to use the stream analog of the
`append` procedure from [Section 2.2.1](#Section 2.2.1):
::: scheme
(define (stream-append s1 s2)
  (if (stream-null? s1)
      s2
      (cons-stream (stream-car s1)
                   (stream-append (stream-cdr s1) s2))))
:::
This is unsuitable for infinite streams, however, because it takes all
the elements from the first stream before incorporating the second
stream. In particular, if we try to generate all pairs of positive
integers using
::: scheme
(pairs integers integers)
:::
our stream of results will first try to run through all pairs with the
first integer equal to 1, and hence will never produce pairs with any
other value of the first integer.
To handle infinite streams, we need to devise an order of combination
that ensures that every element will eventually be reached if we let our
program run long enough. An elegant way to accomplish this is with the
following `interleave` procedure:[^196]
::: scheme
(define (interleave s1 s2)
  (if (stream-null? s1)
      s2
      (cons-stream (stream-car s1)
                   (interleave s2 (stream-cdr s1)))))
:::
Since `interleave` takes elements alternately from the two streams,
every element of the second stream will eventually find its way into the
interleaved stream, even if the first stream is infinite.
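For instance, interleaving the `integers` and `fibs` streams of [Section 3.5.2](#Section 3.5.2) would begin 1, 0, 2, 1, 3, 1, 4, $\dots$:

::: scheme
(stream-ref (interleave integers fibs) 6) ; 4
:::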
We can thus generate the required stream of pairs as
::: scheme
(define (pairs s t)
  (cons-stream
   (list (stream-car s) (stream-car t))
   (interleave
    (stream-map (lambda (x) (list (stream-car s) x))
                (stream-cdr t))
    (pairs (stream-cdr s) (stream-cdr t)))))
:::
> **[]{#Exercise 3.66 label="Exercise 3.66"}Exercise 3.66:** Examine the
> stream `(pairs integers integers)`. Can you make any general comments
> about the order in which the pairs are placed into the stream? For
> example, approximately how many pairs precede the pair (1, 100)? the
> pair (99, 100)? the pair (100, 100)? (If you can make precise
> mathematical statements here, all the better. But feel free to give
> more qualitative answers if you find yourself getting bogged down.)
> **[]{#Exercise 3.67 label="Exercise 3.67"}Exercise 3.67:** Modify the
> `pairs` procedure so that `(pairs integers integers)` will produce the
> stream of *all* pairs of integers $(i, j)$ (without the condition
> $i \le j$). Hint: You will need to mix in an additional stream.
> **[]{#Exercise 3.68 label="Exercise 3.68"}Exercise 3.68:** Louis
> Reasoner thinks that building a stream of pairs from three parts is
> unnecessarily complicated. Instead of separating the pair $(S_0, T_0)$
> from the rest of the pairs in the first row, he proposes to work with
> the whole first row, as follows:
>
> ::: scheme
> (define (pairs s t)
>   (interleave
>    (stream-map (lambda (x) (list (stream-car s) x))
>                t)
>    (pairs (stream-cdr s) (stream-cdr t))))
> :::
>
> Does this work? Consider what happens if we evaluate
> `(pairs integers integers)` using Louis's definition of `pairs`.
> **[]{#Exercise 3.69 label="Exercise 3.69"}Exercise 3.69:** Write a
> procedure `triples` that takes three infinite streams, $S$, $T$, and
> $U$, and produces the stream of triples $(S_i, T_j, U_k)$ such that
> $i \le j \le k$. Use `triples` to generate the stream of all
> Pythagorean triples of positive integers, i.e., the triples
> $(i, j, k)$ such that $i \le j$ and $i^2 + j^2 = k^2$.
> **[]{#Exercise 3.70 label="Exercise 3.70"}Exercise 3.70:** It would be
> nice to be able to generate streams in which the pairs appear in some
> useful order, rather than in the order that results from an *ad hoc*
> interleaving process. We can use a technique similar to the `merge`
> procedure of [Exercise 3.56](#Exercise 3.56), if we define a way to
> say that one pair of integers is "less than" another. One way to do
> this is to define a "weighting function" $W(i, j)$ and stipulate that
> $(i_1, j_1)$ is less than $(i_2, j_2)$ if $W(i_1, j_1) < W(i_2, j_2)$.
> Write a procedure `merge/weighted` that is like `merge`, except that
> `merge/weighted` takes an additional argument `weight`, which is a
> procedure that computes the weight of a pair, and is used to determine
> the order in which elements should appear in the resulting merged
> stream.[^197] Using this, generalize `pairs` to a procedure
> `weighted/pairs` that takes two streams, together with a procedure
> that computes a weighting function, and generates the stream of pairs,
> ordered according to weight. Use your procedure to generate
>
> a. the stream of all pairs of positive integers $(i, j)$ with
> $i \le j$ ordered according to the sum $i + j$,
>
> b. the stream of all pairs of positive integers $(i, j)$ with
> $i \le j$, where neither $i$ nor $j$ is divisible by 2, 3, or 5,
> and the pairs are ordered according to the sum
> $2i + 3\!j + 5i\!j$.
> **[]{#Exercise 3.71 label="Exercise 3.71"}Exercise 3.71:** Numbers
> that can be expressed as the sum of two cubes in more than one way are
> sometimes called *Ramanujan numbers*, in honor of the mathematician
> Srinivasa Ramanujan.[^198] Ordered streams of pairs provide an elegant
> solution to the problem of computing these numbers. To find a number
> that can be written as the sum of two cubes in two different ways, we
> need only generate the stream of pairs of integers $(i, j)$ weighted
> according to the sum $i^3 + j^3$ (see [Exercise
> 3.70](#Exercise 3.70)), then search the stream for two consecutive
> pairs with the same weight. Write a procedure to generate the
> Ramanujan numbers. The first such number is 1,729. What are the next
> five?
> **[]{#Exercise 3.72 label="Exercise 3.72"}Exercise 3.72:** In a
> similar way to [Exercise 3.71](#Exercise 3.71) generate a stream of
> all numbers that can be written as the sum of two squares in three
> different ways (showing how they can be so written).
#### Streams as signals {#streams-as-signals .unnumbered}
We began our discussion of streams by describing them as computational
analogs of the "signals" in signal-processing systems. In fact, we can
use streams to model signal-processing systems in a very direct way,
representing the values of a signal at successive time intervals as
consecutive elements of a stream. For instance, we can implement an
*integrator* or *summer* that, for an input stream $x = (x_i)$, an
initial value $C$, and a small increment $dt$, accumulates the sum
$$S_i = C + \sum_{j=1}^i x_{\kern-0.07em j} \kern0.1em dt$$
and returns the stream of values $S = (S_i)$. The following `integral`
procedure is reminiscent of the "implicit style" definition of the
stream of integers ([Section 3.5.2](#Section 3.5.2)):
::: scheme
(define (integral integrand initial-value dt)
  (define int
    (cons-stream initial-value
                 (add-streams (scale-stream integrand dt)
                              int)))
  int)
:::
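As a small numerical check (using the stream `ones` of [Section 3.5.2](#Section 3.5.2) as the integrand), the accumulated sums should grow by $dt$ at each step:

::: scheme
(stream-ref (integral ones 0 0.5) 4) ; 2. -- that is, 0 plus four increments of 1 * 0.5
:::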
[Figure 3.32](#Figure 3.32) is a picture of a signal-processing system
that corresponds to the `integral` procedure. The input stream is scaled
by $dt$ and passed through an adder, whose output is passed back through
the same adder. The self-reference in the definition of `int` is
reflected in the figure by the feedback loop that connects the output of
the adder to one of the inputs.
[]{#Figure 3.32 label="Figure 3.32"}
![image](fig/chap3/Fig3.32.pdf){width="102mm"}
> **Figure 3.32:** The `integral` procedure viewed as a
> signal-processing system.
> **[]{#Exercise 3.73 label="Exercise 3.73"}Exercise 3.73:** We can
> model electrical circuits using streams to represent the values of
> currents or voltages at a sequence of times. For instance, suppose we
> have an *RC circuit* consisting of a resistor of resistance $R$ and a
> capacitor of capacitance $C$ in series. The voltage response $v$ of
> the circuit to an injected current $i$ is determined by the formula in
> [Figure 3.33](#Figure 3.33), whose structure is shown by the
> accompanying signal-flow diagram.
>
> []{#Figure 3.33 label="Figure 3.33"}
>
> ![image](fig/chap3/Fig3.33.pdf){width="94mm"}
>
> **Figure 3.33:** An RC circuit and the associated signal-flow diagram.
>
> Write a procedure `RC` that models this circuit. `RC` should take as
> inputs the values of $R$, $C$, and $dt$ and should return a procedure
> that takes as inputs a stream representing the current $i$ and an
> initial value for the capacitor voltage $v_0$ and produces as output
> the stream of voltages $v$. For example, you should be able to use
> `RC` to model an RC circuit with $R$ = 5 ohms, $C$ = 1 farad, and a
> 0.5-second time step by evaluating `(define RC1 (RC 5 1 0.5))`. This
> defines `RC1` as a procedure that takes a stream representing the time
> sequence of currents and an initial capacitor voltage and produces the
> output stream of voltages.
> **[]{#Exercise 3.74 label="Exercise 3.74"}Exercise 3.74:** Alyssa P.
> Hacker is designing a system to process signals coming from physical
> sensors. One important feature she wishes to produce is a signal that
> describes the *zero crossings* of the input signal. That is, the
> resulting signal should be $+1$ whenever the input signal changes from
> negative to positive, $-1$ whenever the input signal changes from
> positive to negative, and 0 otherwise. (Assume that the sign of a 0
> input is positive.) For example, a typical input signal with its
> associated zero-crossing signal would be
>
> ::: scheme
> $\dots$ 1 2 1.5 1 0.5 -0.1 -2 -3 -2 -0.5 0.2 3 4 $\dots$
> $\dots$ 0 0 0 0 0 -1 0 0 0 0 1 0 0 $\dots$
> :::
>
> In Alyssa's system, the signal from the sensor is represented as a
> stream `sense/data` and the stream `zero/crossings` is the
> corresponding stream of zero crossings. Alyssa first writes a
> procedure `sign/change/detector` that takes two values as arguments
> and compares the signs of the values to produce an appropriate 0, 1,
> or -1. She then constructs her zero-crossing stream as follows:
>
> ::: scheme
> (define (make-zero-crossings input-stream last-value)
>   (cons-stream
>    (sign-change-detector (stream-car input-stream)
>                          last-value)
>    (make-zero-crossings (stream-cdr input-stream)
>                         (stream-car input-stream))))
> (define zero-crossings
>   (make-zero-crossings sense-data 0))
> :::
>
> Alyssa's boss, Eva Lu Ator, walks by and suggests that this program is
> approximately equivalent to the following one, which uses the
> generalized version of `stream/map` from [Exercise
> 3.50](#Exercise 3.50):
>
> ::: scheme
> (define zero-crossings (stream-map sign-change-detector sense-data
> $\color{SchemeDark}\langle$ *expression* $\color{SchemeDark}\rangle$ ))
> :::
>
> Complete the program by supplying the indicated
> $\langle$*expression*$\rangle$.
> **[]{#Exercise 3.75 label="Exercise 3.75"}Exercise 3.75:**
> Unfortunately, Alyssa's zero-crossing detector in [Exercise
> 3.74](#Exercise 3.74) proves to be insufficient, because the noisy
> signal from the sensor leads to spurious zero crossings. Lem E.
> Tweakit, a hardware specialist, suggests that Alyssa smooth the signal
> to filter out the noise before extracting the zero crossings. Alyssa
> takes his advice and decides to extract the zero crossings from the
> signal constructed by averaging each value of the sense data with the
> previous value. She explains the problem to her assistant, Louis
> Reasoner, who attempts to implement the idea, altering Alyssa's
> program as follows:
>
> ::: scheme
> (define (make-zero-crossings input-stream last-value)
>   (let ((avpt (/ (+ (stream-car input-stream)
>                     last-value)
>                  2)))
>     (cons-stream (sign-change-detector avpt last-value)
>                  (make-zero-crossings
>                   (stream-cdr input-stream) avpt))))
> :::
>
> This does not correctly implement Alyssa's plan. Find the bug that
> Louis has installed and fix it without changing the structure of the
> program. (Hint: You will need to increase the number of arguments to
> `make/zero/crossings`.)
> **[]{#Exercise 3.76 label="Exercise 3.76"}Exercise 3.76:** Eva Lu Ator
> has a criticism of Louis's approach in [Exercise
> 3.75](#Exercise 3.75). The program he wrote is not modular, because it
> intermixes the operation of smoothing with the zero-crossing
> extraction. For example, the extractor should not have to be changed
> if Alyssa finds a better way to condition her input signal. Help Louis
> by writing a procedure `smooth` that takes a stream as input and
> produces a stream in which each element is the average of two
> successive input stream elements. Then use `smooth` as a component to
> implement the zero-crossing detector in a more modular style.
### Streams and Delayed Evaluation {#Section 3.5.4}
The `integral` procedure at the end of the preceding section shows how
we can use streams to model signal-processing systems that contain
feedback loops. The feedback loop for the adder shown in [Figure
3.32](#Figure 3.32) is modeled by the fact that `integral`'s internal
stream `int` is defined in terms of itself:
::: scheme
(define int
  (cons-stream initial-value
               (add-streams (scale-stream integrand dt)
                            int)))
:::
The interpreter's ability to deal with such an implicit definition
depends on the `delay` that is incorporated into `cons/stream`. Without
this `delay`, the interpreter could not construct `int` before
evaluating both arguments to `cons/stream`, which would require that
`int` already be defined. In general, `delay` is crucial for using
streams to model signal-processing systems that contain loops. Without
`delay`, our models would have to be formulated so that the inputs to
any signal-processing component would be fully evaluated before the
output could be produced. This would outlaw loops.
[]{#Figure 3.34 label="Figure 3.34"}
![image](fig/chap3/Fig3.34.pdf){width="67mm"}
> **Figure 3.34:** An "analog computer circuit" that solves the equation
> $dy / dt = f(y)$.
Unfortunately, stream models of systems with loops may require uses of
`delay` beyond the "hidden" `delay` supplied by `cons/stream`. For
instance, [Figure 3.34](#Figure 3.34) shows a signal-processing system
for solving the differential equation $dy / dt = f(y)$ where $f$ is a
given function. The figure shows a mapping component, which applies $f$
to its input signal, linked in a feedback loop to an integrator in a
manner very similar to that of the analog computer circuits that are
actually used to solve such equations.
Assuming we are given an initial value $y_0$ for $y$, we could try to
model this system using the procedure
::: scheme
(define (solve f y0 dt)
  (define y (integral dy y0 dt))
  (define dy (stream-map f y))
  y)
:::
This procedure does not work, because in the first line of `solve` the
call to `integral` requires that the input `dy` be defined, which does
not happen until the second line of `solve`.
On the other hand, the intent of our definition does make sense, because
we can, in principle, begin to generate the `y` stream without knowing
`dy`. Indeed, `integral` and many other stream operations have
properties similar to those of `cons/stream`, in that we can generate
part of the answer given only partial information about the arguments.
For `integral`, the first element of the output stream is the specified
`initial/value`. Thus, we can generate the first element of the output
stream without evaluating the integrand `dy`. Once we know the first
element of `y`, the `stream/map` in the second line of `solve` can begin
working to generate the first element of `dy`, which will produce the
next element of `y`, and so on.
To take advantage of this idea, we will redefine `integral` to expect
the integrand stream to be a *delayed argument*. `integral` will `force`
the integrand to be evaluated only when it is required to generate more
than the first element of the output stream:
::: scheme
(define (integral delayed-integrand initial-value dt)
  (define int
    (cons-stream
     initial-value
     (let ((integrand (force delayed-integrand)))
       (add-streams (scale-stream integrand dt) int))))
  int)
:::
Now we can implement our `solve` procedure by delaying the evaluation of
`dy` in the definition of `y`:[^199]
::: scheme
(define (solve f y0 dt)
  (define y (integral (delay dy) y0 dt))
  (define dy (stream-map f y))
  y)
:::
In general, every caller of `integral` must now `delay` the integrand
argument. We can demonstrate that the `solve` procedure works by
approximating $e \approx 2.718$ by computing the value at $y = 1$ of the
solution to the differential equation $dy / dt = y$ with initial
condition $y(0) = 1$:
::: scheme
(stream-ref (solve (lambda (y) y) 1 0.001) 1000) *2.716924*
:::
> **[]{#Exercise 3.77 label="Exercise 3.77"}Exercise 3.77:** The
> `integral` procedure used above was analogous to the "implicit"
> definition of the infinite stream of integers in [Section
> 3.5.2](#Section 3.5.2). Alternatively, we can give a definition of
> `integral` that is more like `integers/starting/from` (also in
> [Section 3.5.2](#Section 3.5.2)):
>
> ::: smallscheme
> (define (integral integrand initial-value dt)
>   (cons-stream
>    initial-value
>    (if (stream-null? integrand)
>        the-empty-stream
>        (integral (stream-cdr integrand)
>                  (+ (\* dt (stream-car integrand))
>                     initial-value)
>                  dt))))
> :::
>
> When used in systems with loops, this procedure has the same problem
> as does our original version of `integral`. Modify the procedure so
> that it expects the `integrand` as a delayed argument and hence can be
> used in the `solve` procedure shown above.
[]{#Figure 3.35 label="Figure 3.35"}
![image](fig/chap3/Fig3.35a.pdf){width="91mm"}
> **Figure 3.35:** Signal-flow diagram for the solution to a
> second-order linear differential equation.
> **[]{#Exercise 3.78 label="Exercise 3.78"}Exercise 3.78:** Consider
> the problem of designing a signal-processing system to study the
> homogeneous second-order linear differential equation
>
> $${d^2\!y \over dt^2} - a {dy \over dt} - by = 0.$$
>
> The output stream, modeling $y$, is generated by a network that
> contains a loop. This is because the value of $d^2\!y / dt^2$ depends
> upon the values of $y$ and $dy / dt$ and both of these are determined
> by integrating $d^2\!y / dt^2$. The diagram we would like to encode is
> shown in [Figure 3.35](#Figure 3.35). Write a procedure `solve/2nd`
> that takes as arguments the constants $a$, $b$, and $dt$ and the
> initial values $y_0$ and $dy_0$ for $y$ and $dy / dt$ and generates
> the stream of successive values of $y$.
> **[]{#Exercise 3.79 label="Exercise 3.79"}Exercise 3.79:** Generalize
> the `solve/2nd` procedure of [Exercise 3.78](#Exercise 3.78) so that
> it can be used to solve general second-order differential equations
> $d^2\!y / dt^2 =
> f(dy / dt, y)$.
[]{#Figure 3.36 label="Figure 3.36"}
![image](fig/chap3/Fig3.36.pdf){width="60mm"}
> **Figure 3.36:** A series RLC circuit.
> **[]{#Exercise 3.80 label="Exercise 3.80"}Exercise 3.80:** A *series
> RLC circuit* consists of a resistor, a capacitor, and an inductor
> connected in series, as shown in [Figure 3.36](#Figure 3.36). If $R$,
> $L$, and $C$ are the resistance, inductance, and capacitance, then the
> relations between voltage ($v$) and current ($i$) for the three
> components are described by the equations
>
> $$v_R = i_R R, \qquad\quad
> v_L = L {di_L \over dt}\,, \qquad\quad
> i_C = C {dv_C \over dt}\,,$$
>
> and the circuit connections dictate the relations
>
> $$i_R = i_L = -i_C\,, \qquad\quad
> v_C = v_L + v_R\,.$$
>
> Combining these equations shows that the state of the circuit
> (summarized by $v_C$, the voltage across the capacitor, and $i_L$, the
> current in the inductor) is described by the pair of differential
> equations
>
> $${dv_C \over dt} = -{i_L \over C}\,, \qquad\quad
> {di_L \over dt} = {1 \over L} v_C - {R \over L} i_L\,.$$
>
> The signal-flow diagram representing this system of differential
> equations is shown in [Figure 3.37](#Figure 3.37).
[]{#Figure 3.37 label="Figure 3.37"}
![image](fig/chap3/Fig3.37a.pdf){width="68mm"}
> **Figure 3.37:** A signal-flow diagram for the solution to a series
> RLC circuit.
> Write a procedure `RLC` that takes as arguments the parameters $R$,
> $L$, and $C$ of the circuit and the time increment $dt$. In a manner
> similar to that of the `RC` procedure of [Exercise
> 3.73](#Exercise 3.73), `RLC` should produce a procedure that takes the
> initial values of the state variables, $v_{C_0}$ and $i_{L_0}$, and
> produces a pair (using `cons`) of the streams of states $v_C$ and
> $i_L$. Using `RLC`, generate the pair of streams that models the
> behavior of a series RLC circuit with $R$ = 1 ohm, $C$ = 0.2 farad,
> $L$ = 1 henry, $dt$ = 0.1 second, and initial values $i_{L_0}$ = 0
> amps and $v_{C_0}$ = 10 volts.
#### Normal-order evaluation {#normal-order-evaluation .unnumbered}
The examples in this section illustrate how the explicit use of `delay`
and `force` provides great programming flexibility, but the same
examples also show how this can make our programs more complex. Our new
`integral` procedure, for instance, gives us the power to model systems
with loops, but we must now remember that `integral` should be called
with a delayed integrand, and every procedure that uses `integral` must
be aware of this. In effect, we have created two classes of procedures:
ordinary procedures and procedures that take delayed arguments. In
general, creating separate classes of procedures forces us to create
separate classes of higher-order procedures as well.[^200]
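As a minimal sketch (not from the text), consider what happens to a
higher-order procedure such as `stream/map` once delayed arguments enter
the picture: the ordinary version cannot accept a delayed stream, so a
parallel delayed-argument version must be written alongside it. The name
`stream-map-delayed` below is hypothetical.
::: scheme
; Hypothetical sketch: a delayed-argument analogue of stream-map.
; The ordinary stream-map expects its stream argument already
; evaluated; this variant must force the argument itself.
(define (stream-map-delayed proc delayed-s)
  (let ((s (force delayed-s)))
    (if (stream-null? s)
        the-empty-stream
        (cons-stream
         (proc (stream-car s))
         (stream-map-delayed proc (delay (stream-cdr s)))))))
:::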
One way to avoid the need for two different classes of procedures is to
make all procedures take delayed arguments. We could adopt a model of
evaluation in which all arguments to procedures are automatically
delayed and arguments are forced only when they are actually needed (for
example, when they are required by a primitive operation). This would
transform our language to use normal-order evaluation, which we first
described when we introduced the substitution model for evaluation in
[Section 1.1.5](#Section 1.1.5). Converting to normal-order evaluation
provides a uniform and elegant way to simplify the use of delayed
evaluation, and this would be a natural strategy to adopt if we were
concerned only with stream processing. In [Section 4.2](#Section 4.2),
after we have studied the evaluator, we will see how to transform our
language in just this way. Unfortunately, including delays in procedure
calls wreaks havoc with our ability to design programs that depend on
the order of events, such as programs that use assignment, mutate data,
or perform input or output. Even the single `delay` in `cons/stream` can
cause great confusion, as illustrated by [Exercise 3.51](#Exercise 3.51)
and [Exercise 3.52](#Exercise 3.52). As far as anyone knows, mutability
and delayed evaluation do not mix well in programming languages, and
devising ways to deal with both of these at once is an active area of
research.
### Modularity of Functional Programs and Modularity of Objects {#Section 3.5.5}
As we saw in [Section 3.1.2](#Section 3.1.2), one of the major benefits
of introducing assignment is that we can increase the modularity of our
systems by encapsulating, or "hiding," parts of the state of a large
system within local variables. Stream models can provide an equivalent
modularity without the use of assignment. As an illustration, we can
reimplement the Monte Carlo estimation of $\pi$, which we examined in
[Section 3.1.2](#Section 3.1.2), from a stream-processing point of view.
The key modularity issue was that we wished to hide the internal state
of a random-number generator from programs that used random numbers. We
began with a procedure `rand/update`, whose successive values furnished
our supply of random numbers, and used this to produce a random-number
generator:
::: scheme
(define rand
  (let ((x random-init))
    (lambda () (set! x (rand-update x)) x)))
:::
In the stream formulation there is no random-number generator *per se*,
just a stream of random numbers produced by successive calls to
`rand/update`:
::: scheme
(define random-numbers
  (cons-stream random-init
               (stream-map rand-update random-numbers)))
:::
We use this to construct the stream of outcomes of the Cesàro experiment
performed on consecutive pairs in the `random/numbers` stream:
::: scheme
(define cesaro-stream
  (map-successive-pairs
   (lambda (r1 r2) (= (gcd r1 r2) 1))
   random-numbers))

(define (map-successive-pairs f s)
  (cons-stream
   (f (stream-car s) (stream-car (stream-cdr s)))
   (map-successive-pairs f (stream-cdr (stream-cdr s)))))
:::
The `cesaro/stream` is now fed to a `monte/carlo` procedure, which
produces a stream of estimates of probabilities. The results are then
converted into a stream of estimates of $\pi$. This version of the
program doesn't need a parameter telling how many trials to perform.
Better estimates of $\pi$ (from performing more experiments) are
obtained by looking farther into the `pi` stream:
::: scheme
(define (monte-carlo experiment-stream passed failed)
  (define (next passed failed)
    (cons-stream
     (/ passed (+ passed failed))
     (monte-carlo
      (stream-cdr experiment-stream) passed failed)))
  (if (stream-car experiment-stream)
      (next (+ passed 1) failed)
      (next passed (+ failed 1))))

(define pi
  (stream-map (lambda (p) (sqrt (/ 6 p)))
              (monte-carlo cesaro-stream 0 0)))
:::
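For example (a sketch, not part of the text; the value obtained depends
on `rand/update` and `random/init`), an estimate based on roughly ten
thousand experiments is obtained by
::: scheme
(stream-ref pi 10000)
:::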
There is considerable modularity in this approach, because we still can
formulate a general `monte/carlo` procedure that can deal with arbitrary
experiments. Yet there is no assignment or local state.
> **[]{#Exercise 3.81 label="Exercise 3.81"}Exercise 3.81:** [Exercise
> 3.6](#Exercise 3.6) discussed generalizing the random-number generator
> to allow one to reset the random-number sequence so as to produce
> repeatable sequences of "random" numbers. Produce a stream formulation
> of this same generator that operates on an input stream of requests to
> `generate` a new random number or to `reset` the sequence to a
> specified value and that produces the desired stream of random
> numbers. Don't use assignment in your solution.
> **[]{#Exercise 3.82 label="Exercise 3.82"}Exercise 3.82:** Redo
> [Exercise 3.5](#Exercise 3.5) on Monte Carlo integration in terms of
> streams. The stream version of `estimate/integral` will not have an
> argument telling how many trials to perform. Instead, it will produce
> a stream of estimates based on successively more trials.
#### A functional-programming view of time {#a-functional-programming-view-of-time .unnumbered}
Let us now return to the issues of objects and state that were raised at
the beginning of this chapter and examine them in a new light. We
introduced assignment and mutable objects to provide a mechanism for
modular construction of programs that model systems with state. We
constructed computational objects with local state variables and used
assignment to modify these variables. We modeled the temporal behavior
of the objects in the world by the temporal behavior of the
corresponding computational objects.
Now we have seen that streams provide an alternative way to model
objects with local state. We can model a changing quantity, such as the
local state of some object, using a stream that represents the time
history of successive states. In essence, we represent time explicitly,
using streams, so that we decouple time in our simulated world from the
sequence of events that take place during evaluation. Indeed, because of
the presence of `delay` there may be little relation between simulated
time in the model and the order of events during the evaluation.
In order to contrast these two approaches to modeling, let us reconsider
the implementation of a "withdrawal processor" that monitors the balance
in a bank account. In [Section 3.1.3](#Section 3.1.3) we implemented a
simplified version of such a processor:
::: scheme
(define (make-simplified-withdraw balance)
  (lambda (amount)
    (set! balance (- balance amount))
    balance))
:::
Calls to `make/simplified/withdraw` produce computational objects, each
with a local state variable `balance` that is decremented by successive
calls to the object. The object takes an `amount` as an argument and
returns the new balance. We can imagine the user of a bank account
typing a sequence of inputs to such an object and observing the sequence
of returned values shown on a display screen.
Alternatively, we can model a withdrawal processor as a procedure that
takes as input a balance and a stream of amounts to withdraw and
produces the stream of successive balances in the account:
::: scheme
(define (stream-withdraw balance amount-stream)
  (cons-stream
   balance
   (stream-withdraw (- balance (stream-car amount-stream))
                    (stream-cdr amount-stream))))
:::
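As a small usage sketch (not in the text), feeding an initial balance of
100 and a finite stream of two withdrawals of 25 produces the successive
balances 100, 75, 50:
::: scheme
(define amounts
  (cons-stream 25 (cons-stream 25 the-empty-stream)))

(stream-ref (stream-withdraw 100 amounts) 2)
*50*
:::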
`stream/withdraw` implements a well-defined mathematical function whose
output is fully determined by its input. Suppose, however, that the
input `amount/stream` is the stream of successive values typed by the
user and that the resulting stream of balances is displayed. Then, from
the perspective of the user who is typing values and watching results,
the stream process has the same behavior as the object created by
`make/simplified/withdraw`. However, with the stream version, there is
no assignment, no local state variable, and consequently none of the
theoretical difficulties that we encountered in [Section
3.1.3](#Section 3.1.3). Yet the system has state!
This is really remarkable. Even though `stream/withdraw` implements a
well-defined mathematical function whose behavior does not change, the
user's perception here is one of interacting with a system that has a
changing state. One way to resolve this paradox is to realize that it is
the user's temporal existence that imposes state on the system. If the
user could step back from the interaction and think in terms of streams
of balances rather than individual transactions, the system would appear
stateless.[^201]
From the point of view of one part of a complex process, the other parts
appear to change with time. They have hidden time-varying local state.
If we wish to write programs that model this kind of natural
decomposition in our world (as we see it from our viewpoint as a part of
that world) with structures in our computer, we make computational
objects that are not functional---they must change with time. We model
state with local state variables, and we model the changes of state with
assignments to those variables. By doing this we make the time of
execution of a computation model time in the world that we are part of,
and thus we get "objects" in our computer.
Modeling with objects is powerful and intuitive, largely because this
matches the perception of interacting with a world of which we are part.
However, as we've seen repeatedly throughout this chapter, these models
raise thorny problems of constraining the order of events and of
synchronizing multiple processes. The possibility of avoiding these
problems has stimulated the development of *functional programming
languages*, which do not include any provision for assignment or mutable
data. In such a language, all procedures implement well-defined
mathematical functions of their arguments, whose behavior does not
change. The functional approach is extremely attractive for dealing with
concurrent systems.[^202]
On the other hand, if we look closely, we can see time-related problems
creeping into functional models as well. One particularly troublesome
area arises when we wish to design interactive systems, especially ones
that model interactions between independent entities. For instance,
consider once more the implementation of a banking system that permits
joint bank accounts. In a conventional system using assignment and
objects, we would model the fact that Peter and Paul share an account by
having both Peter and Paul send their transaction requests to the same
bank-account object, as we saw in [Section 3.1.3](#Section 3.1.3). From
the stream point of view, where there are no "objects" *per se*, we have
already indicated that a bank account can be modeled as a process that
operates on a stream of transaction requests to produce a stream of
responses. Accordingly, we could model the fact that Peter and Paul have
a joint bank account by merging Peter's stream of transaction requests
with Paul's stream of requests and feeding the result to the
bank-account stream process, as shown in [Figure 3.38](#Figure 3.38).
[]{#Figure 3.38 label="Figure 3.38"}
![image](fig/chap3/Fig3.38.pdf){width="88mm"}
> **Figure 3.38:** A joint bank account, modeled by merging two streams
> of transaction requests.
The trouble with this formulation is in the notion of *merge*. It will
not do to merge the two streams by simply taking alternately one request
from Peter and one request from Paul. Suppose Paul accesses the account
only very rarely. We could hardly force Peter to wait for Paul to access
the account before he could issue a second transaction. However such a
merge is implemented, it must interleave the two transaction streams in
some way that is constrained by "real time" as perceived by Peter and
Paul, in the sense that, if Peter and Paul meet, they can agree that
certain transactions were processed before the meeting, and other
transactions were processed after the meeting.[^203] This is precisely
the same constraint that we had to deal with in [Section
3.4.1](#Section 3.4.1), where we found the need to introduce explicit
synchronization to ensure a "correct" order of events in concurrent
processing of objects with state. Thus, in an attempt to support the
functional style, the need to merge inputs from different agents
reintroduces the same problems that the functional style was meant to
eliminate.
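To make the difficulty concrete, here is a sketch (not from the text) of
the naive strictly alternating merge rejected above; the name
`alternate` is hypothetical:
::: scheme
; Takes one request from s1, then one from s2, and so on.
; Peter would have to wait for Paul's next request before a
; second transaction of his own could be processed.
(define (alternate s1 s2)
  (cons-stream (stream-car s1)
               (alternate s2 (stream-cdr s1))))
:::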
We began this chapter with the goal of building computational models
whose structure matches our perception of the real world we are trying
to model. We can model the world as a collection of separate,
time-bound, interacting objects with state, or we can model the world as
a single, timeless, stateless unity. Each view has powerful advantages,
but neither view alone is completely satisfactory. A grand unification
has yet to emerge.[^204]
# Metalinguistic Abstraction {#Chapter 4}
> $\dots$ It's in words that the magic is---Abracadabra, Open Sesame,
> and the rest---but the magic words in one story aren't magical in the
> next. The real magic is to understand which words work, and when, and
> for what; the trick is to learn the trick.
>
> $\dots$ And those words are made from the letters of our alphabet: a
> couple-dozen squiggles we can draw with the pen. This is the key! And
> the treasure, too, if we can only get our hands on it! It's as if---as
> if the key to the treasure *is* the treasure!
>
> ---John Barth, *Chimera*
In our study of program design, we have seen
that expert programmers control the complexity of their designs with the
same general techniques used by designers of all complex systems. They
combine primitive elements to form compound objects, they abstract
compound objects to form higher-level building blocks, and they preserve
modularity by adopting appropriate large-scale views of system
structure. In illustrating these techniques, we have used Lisp as a
language for describing processes and for constructing computational
data objects and processes to model complex phenomena in the real world.
However, as we confront increasingly complex problems, we will find that
Lisp, or indeed any fixed programming language, is not sufficient for
our needs. We must constantly turn to new languages in order to express
our ideas more effectively. Establishing new languages is a powerful
strategy for controlling complexity in engineering design; we can often
enhance our ability to deal with a complex problem by adopting a new
language that enables us to describe (and hence to think about) the
problem in a different way, using primitives, means of combination, and
means of abstraction that are particularly well suited to the problem at
hand.[^205]
Programming is endowed with a multitude of languages. There are physical
languages, such as the machine languages for particular computers. These
languages are concerned with the representation of data and control in
terms of individual bits of storage and primitive machine instructions.
The machine-language programmer is concerned with using the given
hardware to erect systems and utilities for the efficient implementation
of resource-limited computations. High-level languages, erected on a
machine-language substrate, hide concerns about the representation of
data as collections of bits and the representation of programs as
sequences of primitive instructions. These languages have means of
combination and abstraction, such as procedure definition, that are
appropriate to the larger-scale organization of systems.
*Metalinguistic abstraction*---establishing new languages---plays an
important role in all branches of engineering design. It is particularly
important to computer programming, because in programming not only can
we formulate new languages but we can also implement these languages by
constructing evaluators. An *evaluator* (or *interpreter*) for a
programming language is a procedure that, when applied to an expression
of the language, performs the actions required to evaluate that
expression.
It is no exaggeration to regard this as the most fundamental idea in
programming:
> The evaluator, which determines the meaning of expressions in a
> programming language, is just another program.
To appreciate this point is to change our images of ourselves as
programmers. We come to see ourselves as designers of languages, rather
than only users of languages designed by others.
In fact, we can regard almost any program as the evaluator for some
language. For instance, the polynomial manipulation system of [Section
2.5.3](#Section 2.5.3) embodies the rules of polynomial arithmetic and
implements them in terms of operations on list-structured data. If we
augment this system with procedures to read and print polynomial
expressions, we have the core of a special-purpose language for dealing
with problems in symbolic mathematics. The digital-logic simulator of
[Section 3.3.4](#Section 3.3.4) and the constraint propagator of
[Section 3.3.5](#Section 3.3.5) are legitimate languages in their own
right, each with its own primitives, means of combination, and means of
abstraction. Seen from this perspective, the technology for coping with
large-scale computer systems merges with the technology for building new
computer languages, and computer science itself becomes no more (and no
less) than the discipline of constructing appropriate descriptive
languages.
We now embark on a tour of the technology by which languages are
established in terms of other languages. In this chapter we shall use
Lisp as a base, implementing evaluators as Lisp procedures. Lisp is
particularly well suited to this task, because of its ability to
represent and manipulate symbolic expressions. We will take the first
step in understanding how languages are implemented by building an
evaluator for Lisp itself. The language implemented by our evaluator
will be a subset of the Scheme dialect of Lisp that we use in this book.
Although the evaluator described in this chapter is written for a
particular dialect of Lisp, it contains the essential structure of an
evaluator for any expression-oriented language designed for writing
programs for a sequential machine. (In fact, most language processors
contain, deep within them, a little "Lisp" evaluator.) The evaluator has
been simplified for the purposes of illustration and discussion, and
some features have been left out that would be important to include in a
production-quality Lisp system. Nevertheless, this simple evaluator is
adequate to execute most of the programs in this book.[^206]
An important advantage of making the evaluator accessible as a Lisp
program is that we can implement alternative evaluation rules by
describing these as modifications to the evaluator program. One place
where we can use this power to good effect is to gain extra control over
the ways in which computational models embody the notion of time, which
was so central to the discussion in [Chapter 3](#Chapter 3). There, we
mitigated some of the complexities of state and assignment by using
streams to decouple the representation of time in the world from time in
the computer. Our stream programs, however, were sometimes cumbersome,
because they were constrained by the applicative-order evaluation of
Scheme. In [Section 4.2](#Section 4.2), we'll change the underlying
language to provide for a more elegant approach, by modifying the
evaluator to provide for *normal-order evaluation*.
[Section 4.3](#Section 4.3) implements a more ambitious linguistic
change, whereby expressions have many values, rather than just a single
value. In this language of *nondeterministic computing*, it is natural
to express processes that generate all possible values for expressions
and then search for those values that satisfy certain constraints. In
terms of models of computation and time, this is like having time branch
into a set of "possible futures" and then searching for appropriate time
lines. With our nondeterministic evaluator, keeping track of multiple
values and performing searches are handled automatically by the
underlying mechanism of the language.
In [Section 4.4](#Section 4.4) we implement a *logic-programming*
language in which knowledge is expressed in terms of relations, rather
than in terms of computations with inputs and outputs. Even though this
makes the language drastically different from Lisp, or indeed from any
conventional language, we will see that the logic-programming evaluator
shares the essential structure of the Lisp evaluator.
## The Metacircular Evaluator {#Section 4.1}
Our evaluator for Lisp will be implemented as a Lisp program. It may
seem circular to think about evaluating Lisp programs using an evaluator
that is itself implemented in Lisp. However, evaluation is a process, so
it is appropriate to describe the evaluation process using Lisp, which,
after all, is our tool for describing processes.[^207] An evaluator that
is written in the same language that it evaluates is said to be
*metacircular*.
The metacircular evaluator is essentially a Scheme formulation of the
environment model of evaluation described in [Section
3.2](#Section 3.2). Recall that the model has two basic parts:
1. To evaluate a combination (a compound expression other than a
special form), evaluate the subexpressions and then apply the value
of the operator subexpression to the values of the operand
subexpressions.
2. To apply a compound procedure to a set of arguments, evaluate the
body of the procedure in a new environment. To construct this
environment, extend the environment part of the procedure object by
a frame in which the formal parameters of the procedure are bound to
the arguments to which the procedure is applied.
These two rules describe the essence of the evaluation process, a basic
cycle in which expressions to be evaluated in environments are reduced
to procedures to be applied to arguments, which in turn are reduced to
new expressions to be evaluated in new environments, and so on, until we
get down to symbols, whose values are looked up in the environment, and
to primitive procedures, which are applied directly (see [Figure
4.1](#Figure 4.1)).[^208] This evaluation cycle will be embodied by the
interplay between the two critical procedures in the evaluator, `eval`
and `apply`, which are described in [Section 4.1.1](#Section 4.1.1) (see
[Figure 4.1](#Figure 4.1)).
The implementation of the evaluator will depend upon procedures that
define the *syntax* of the expressions to be evaluated. We will use data
abstraction to make the evaluator independent of the representation of
the language. For example, rather than committing to a choice that an
assignment is to be represented by a list beginning with the symbol
`set!`, we use an abstract predicate `assignment?` to test for an
assignment, and we use abstract selectors `assignment/variable` and
`assignment/value` to access the parts of an assignment. Implementation
of expressions will be described in detail in [Section
4.1.2](#Section 4.1.2). There are also operations, described in [Section
4.1.3](#Section 4.1.3), that specify the representation of procedures
and environments. For example, `make/procedure` constructs compound
procedures, `lookup/variable/value` accesses the values of variables,
and `apply/primitive/procedure` applies a primitive procedure to a given
list of arguments.
[]{#Figure 4.1 label="Figure 4.1"}
![image](fig/chap4/Fig4.1.pdf){width="100mm"}
> **Figure 4.1:** The `eval`-`apply` cycle exposes the essence of a
> computer language.
### The Core of the Evaluator {#Section 4.1.1}
The evaluation process can be described as the interplay between two
procedures: `eval` and `apply`.
#### Eval {#eval .unnumbered}
`eval` takes as arguments an expression and an environment. It
classifies the expression and directs its evaluation. `eval` is
structured as a case analysis of the syntactic type of the expression to
be evaluated. In order to keep the procedure general, we express the
determination of the type of an expression abstractly, making no
commitment to any particular representation for the various types of
expressions. Each type of expression has a predicate that tests for it
and an abstract means for selecting its parts. This *abstract syntax*
makes it easy to see how we can change the syntax of the language by
using the same evaluator, but with a different collection of syntax
procedures.
**Primitive expressions**
- For self-evaluating expressions, such as numbers, `eval` returns the
expression itself.
- `eval` must look up variables in the environment to find their
values.
**Special forms**
- For quoted expressions, `eval` returns the expression that was
quoted.
- An assignment to (or a definition of) a variable must recursively
call `eval` to compute the new value to be associated with the
variable. The environment must be modified to change (or create) the
binding of the variable.
- An `if` expression requires special processing of its parts, so as
to evaluate the consequent if the predicate is true, and otherwise
to evaluate the alternative.
- A `lambda` expression must be transformed into an applicable
procedure by packaging together the parameters and body specified by
the `lambda` expression with the environment of the evaluation.
- A `begin` expression requires evaluating its sequence of expressions
in the order in which they appear.
- A case analysis (`cond`) is transformed into a nest of `if`
expressions and then evaluated.
**Combinations**
- For a procedure application, `eval` must recursively evaluate the
operator part and the operands of the combination. The resulting
procedure and arguments are passed to `apply`, which handles the
actual procedure application.
Here is the definition of `eval`:
::: scheme
(define (eval exp env)
  (cond ((self-evaluating? exp) exp)
        ((variable? exp) (lookup-variable-value exp env))
        ((quoted? exp) (text-of-quotation exp))
        ((assignment? exp) (eval-assignment exp env))
        ((definition? exp) (eval-definition exp env))
        ((if? exp) (eval-if exp env))
        ((lambda? exp) (make-procedure (lambda-parameters exp)
                                       (lambda-body exp)
                                       env))
        ((begin? exp)
         (eval-sequence (begin-actions exp) env))
        ((cond? exp) (eval (cond->if exp) env))
        ((application? exp)
         (apply (eval (operator exp) env)
                (list-of-values (operands exp) env)))
        (else (error "Unknown expression type: EVAL" exp))))
:::
For clarity, `eval` has been implemented as a case analysis using
`cond`. The disadvantage of this is that our procedure handles only a
few distinguishable types of expressions, and no new ones can be defined
without editing the definition of `eval`. In most Lisp implementations,
dispatching on the type of an expression is done in a data-directed
style. This allows a user to add new types of expressions that `eval`
can distinguish, without modifying the definition of `eval` itself. (See
[Exercise 4.3](#Exercise 4.3).)
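As a rough sketch of that data-directed style (one possible shape for
[Exercise 4.3](#Exercise 4.3), assuming the `put` and `get` table
operations of [Section 2.4.3](#Section 2.4.3) are available; they are
not defined in this section), handlers for special forms could be
installed in a table indexed by the leading symbol of the expression:
::: scheme
; Install handlers for a few special forms (a sketch only).
(put 'eval 'quote (lambda (exp env) (text-of-quotation exp)))
(put 'eval 'if eval-if)
(put 'eval 'lambda
     (lambda (exp env)
       (make-procedure (lambda-parameters exp)
                       (lambda-body exp)
                       env)))

; Dispatch on the car of a compound expression; fall through to
; procedure application when no handler is installed.
(define (eval exp env)
  (cond ((self-evaluating? exp) exp)
        ((variable? exp) (lookup-variable-value exp env))
        ((and (pair? exp) (get 'eval (car exp)))
         ((get 'eval (car exp)) exp env))
        ((application? exp)
         (apply (eval (operator exp) env)
                (list-of-values (operands exp) env)))
        (else (error "Unknown expression type: EVAL" exp))))
:::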
#### Apply {#apply .unnumbered}
`apply` takes two arguments, a procedure and a list of arguments to
which the procedure should be applied. `apply` classifies procedures
into two kinds: It calls `apply/primitive/procedure` to apply
primitives; it applies compound procedures by sequentially evaluating
the expressions that make up the body of the procedure. The environment
for the evaluation of the body of a compound procedure is constructed by
extending the base environment carried by the procedure to include a
frame that binds the parameters of the procedure to the arguments to
which the procedure is to be applied. Here is the definition of `apply`:
::: scheme
(define (apply procedure arguments)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure procedure arguments))
        ((compound-procedure? procedure)
         (eval-sequence
          (procedure-body procedure)
          (extend-environment
           (procedure-parameters procedure)
           arguments
           (procedure-environment procedure))))
        (else (error "Unknown procedure type: APPLY" procedure))))
:::
#### Procedure arguments {#procedure-arguments .unnumbered}
When `eval` processes a procedure application, it uses `list/of/values`
to produce the list of arguments to which the procedure is to be
applied. `list/of/values` takes as an argument the operands of the
combination. It evaluates each operand and returns a list of the
corresponding values:[^209]
::: scheme
(define (list-of-values exps env)
  (if (no-operands? exps)
      '()
      (cons (eval (first-operand exps) env)
            (list-of-values (rest-operands exps) env))))
:::
#### Conditionals {#conditionals .unnumbered}
`eval/if` evaluates the predicate part of an `if` expression in the
given environment. If the result is true, `eval/if` evaluates the
consequent, otherwise it evaluates the alternative:
::: scheme
(define (eval-if exp env)
  (if (true? (eval (if-predicate exp) env))
      (eval (if-consequent exp) env)
      (eval (if-alternative exp) env)))
:::
The use of `true?` in `eval/if` highlights the issue of the connection
between an implemented language and an implementation language. The
`if/predicate` is evaluated in the language being implemented and thus
yields a value in that language. The interpreter predicate `true?`
translates that value into a value that can be tested by the `if` in the
implementation language: The metacircular representation of truth might
not be the same as that of the underlying Scheme.[^210]
#### Sequences {#sequences .unnumbered}
`eval/sequence` is used by `apply` to evaluate the sequence of
expressions in a procedure body and by `eval` to evaluate the sequence
of expressions in a `begin` expression. It takes as arguments a sequence
of expressions and an environment, and evaluates the expressions in the
order in which they occur. The value returned is the value of the final
expression.
::: scheme
(define (eval-sequence exps env)
  (cond ((last-exp? exps)
         (eval (first-exp exps) env))
        (else
         (eval (first-exp exps) env)
         (eval-sequence (rest-exps exps) env))))
:::
#### Assignments and definitions {#assignments-and-definitions .unnumbered}
The following procedure handles assignments to variables. It calls
`eval` to find the value to be assigned and transmits the variable and
the resulting value to `set/variable/value!` to be installed in the
designated environment.
::: scheme
(define (eval-assignment exp env)
  (set-variable-value! (assignment-variable exp)
                       (eval (assignment-value exp) env)
                       env)
  'ok)
:::
Definitions of variables are handled in a similar manner.[^211]
::: scheme
(define (eval-definition exp env)
  (define-variable! (definition-variable exp)
                    (eval (definition-value exp) env)
                    env)
  'ok)
:::
We have chosen here to return the symbol `ok` as the value of an
assignment or a definition.[^212]
> **[]{#Exercise 4.1 label="Exercise 4.1"}Exercise 4.1:** Notice that we
> cannot tell whether the metacircular evaluator evaluates operands from
> left to right or from right to left. Its evaluation order is inherited
> from the underlying Lisp: If the arguments to `cons` in
> `list/of/values` are evaluated from left to right, then
> `list/of/values` will evaluate operands from left to right; and if the
> arguments to `cons` are evaluated from right to left, then
> `list/of/values` will evaluate operands from right to left.
>
> Write a version of `list/of/values` that evaluates operands from left
> to right regardless of the order of evaluation in the underlying Lisp.
> Also write a version of `list/of/values` that evaluates operands from
> right to left.
### Representing Expressions {#Section 4.1.2}
The evaluator is reminiscent of the symbolic differentiation program
discussed in [Section 2.3.2](#Section 2.3.2). Both programs operate on
symbolic expressions. In both programs, the result of operating on a
compound expression is determined by operating recursively on the pieces
of the expression and combining the results in a way that depends on the
type of the expression. In both programs we used data abstraction to
decouple the general rules of operation from the details of how
expressions are represented. In the differentiation program this meant
that the same differentiation procedure could deal with algebraic
expressions in prefix form, in infix form, or in some other form. For
the evaluator, this means that the syntax of the language being
evaluated is determined solely by the procedures that classify and
extract pieces of expressions.
Here is the specification of the syntax of our language:
- The only self-evaluating items are numbers and strings:
::: scheme
(define (self-evaluating? exp) (cond ((number? exp) true) ((string?
exp) true) (else false)))
:::
- Variables are represented by symbols:
::: scheme
(define (variable? exp) (symbol? exp))
:::
- Quotations have the form
`(quote `$\langle$*`text/of/quotation`*$\rangle$`)`:[^213]
::: scheme
(define (quoted? exp) (tagged-list? exp 'quote)) (define
(text-of-quotation exp) (cadr exp))
:::
`quoted?` is defined in terms of the procedure `tagged/list?`, which
identifies lists beginning with a designated symbol:
::: scheme
(define (tagged-list? exp tag) (if (pair? exp) (eq? (car exp) tag)
false))
:::
- Assignments have the form
`(set! `$\langle$*`var`*$\rangle$` `$\langle$*`value`*$\rangle$`)`:
::: scheme
(define (assignment? exp) (tagged-list? exp 'set!)) (define
(assignment-variable exp) (cadr exp)) (define (assignment-value exp)
(caddr exp))
:::
- Definitions have the form
::: scheme
(define
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *value* $\color{SchemeDark}\rangle$ )
:::
or the form
::: scheme
(define
( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *parameter* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *parameter* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
:::
The latter form (standard procedure definition) is syntactic sugar
for
::: scheme
(define
$\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$
(lambda
( $\color{SchemeDark}\langle$ *parameter* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *parameter* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ ))
:::
The corresponding syntax procedures are the following:
::: scheme
(define (definition? exp) (tagged-list? exp 'define)) (define
(definition-variable exp) (if (symbol? (cadr exp)) (cadr exp) (caadr
exp))) (define (definition-value exp) (if (symbol? (cadr exp))
(caddr exp) (make-lambda (cdadr exp) [; formal parameters]{.roman}
(cddr exp)))) [; body]{.roman}
:::
- `lambda` expressions are lists that begin with the symbol `lambda`:
::: scheme
(define (lambda? exp) (tagged-list? exp 'lambda)) (define
(lambda-parameters exp) (cadr exp)) (define (lambda-body exp) (cddr
exp))
:::
We also provide a constructor for `lambda` expressions, which is
used by `definition/value`, above:
::: scheme
(define (make-lambda parameters body) (cons 'lambda (cons parameters
body)))
:::
- Conditionals begin with `if` and have a predicate, a consequent, and
an (optional) alternative. If the expression has no alternative
part, we provide `false` as the alternative.[^214]
::: scheme
(define (if? exp) (tagged-list? exp 'if)) (define (if-predicate exp)
(cadr exp)) (define (if-consequent exp) (caddr exp)) (define
(if-alternative exp) (if (not (null? (cdddr exp))) (cadddr exp)
'false))
:::
We also provide a constructor for `if` expressions, to be used by
`cond/>if` to transform `cond` expressions into `if` expressions:
::: scheme
(define (make-if predicate consequent alternative) (list 'if
predicate consequent alternative))
:::
- `begin` packages a sequence of expressions into a single expression.
We include syntax operations on `begin` expressions to extract the
actual sequence from the `begin` expression, as well as selectors
that return the first expression and the rest of the expressions in
the sequence.[^215]
::: scheme
(define (begin? exp) (tagged-list? exp 'begin)) (define
(begin-actions exp) (cdr exp)) (define (last-exp? seq) (null? (cdr
seq))) (define (first-exp seq) (car seq)) (define (rest-exps seq)
(cdr seq))
:::
We also include a constructor `sequence/>exp` (for use by
`cond/>if`) that transforms a sequence into a single expression,
using `begin` if necessary:
::: scheme
(define (sequence-\>exp seq) (cond ((null? seq) seq) ((last-exp?
seq) (first-exp seq)) (else (make-begin seq)))) (define (make-begin
seq) (cons 'begin seq))
:::
- A procedure application is any compound expression that is not one
of the above expression types. The `car` of the expression is the
operator, and the `cdr` is the list of operands:
::: scheme
(define (application? exp) (pair? exp)) (define (operator exp) (car
exp)) (define (operands exp) (cdr exp)) (define (no-operands? ops)
(null? ops)) (define (first-operand ops) (car ops)) (define
(rest-operands ops) (cdr ops))
:::
#### Derived expressions {#derived-expressions .unnumbered}
Some special forms in our language can be defined in terms of
expressions involving other special forms, rather than being implemented
directly. One example is `cond`, which can be implemented as a nest of
`if` expressions. For example, we can reduce the problem of evaluating
the expression
::: scheme
(cond ((> x 0) x)
      ((= x 0) (display 'zero) 0)
      (else (- x)))
:::
to the problem of evaluating the following expression involving `if` and
`begin` expressions:
::: scheme
(if (> x 0)
    x
    (if (= x 0)
        (begin (display 'zero) 0)
        (- x)))
:::
Implementing the evaluation of `cond` in this way simplifies the
evaluator because it reduces the number of special forms for which the
evaluation process must be explicitly specified.
We include syntax procedures that extract the parts of a `cond`
expression, and a procedure `cond/>if` that transforms `cond`
expressions into `if` expressions. A case analysis begins with `cond`
and has a list of predicate-action clauses. A clause is an `else` clause
if its predicate is the symbol `else`.[^216]
::: scheme
(define (cond? exp) (tagged-list? exp 'cond))
(define (cond-clauses exp) (cdr exp))
(define (cond-else-clause? clause)
  (eq? (cond-predicate clause) 'else))
(define (cond-predicate clause) (car clause))
(define (cond-actions clause) (cdr clause))
(define (cond->if exp) (expand-clauses (cond-clauses exp)))
(define (expand-clauses clauses)
  (if (null? clauses)
      'false   [; no `else` clause]{.roman}
      (let ((first (car clauses))
            (rest (cdr clauses)))
        (if (cond-else-clause? first)
            (if (null? rest)
                (sequence->exp (cond-actions first))
                (error "ELSE clause isn't last: COND->IF"
                       clauses))
            (make-if (cond-predicate first)
                     (sequence->exp (cond-actions first))
                     (expand-clauses rest))))))
:::
Expressions (such as `cond`) that we choose to implement as syntactic
transformations are called *derived expressions*. `let` expressions are
also derived expressions (see [Exercise 4.6](#Exercise 4.6)).[^217]
> **[]{#Exercise 4.2 label="Exercise 4.2"}Exercise 4.2:** Louis Reasoner
> plans to reorder the `cond` clauses in `eval` so that the clause for
> procedure applications appears before the clause for assignments. He
> argues that this will make the interpreter more efficient: Since
> programs usually contain more applications than assignments,
> definitions, and so on, his modified `eval` will usually check fewer
> clauses than the original `eval` before identifying the type of an
> expression.
>
> a. What is wrong with Louis's plan? (Hint: What will Louis's
> evaluator do with the expression `(define x 3)`?)
>
> b. Louis is upset that his plan didn't work. He is willing to go to
> any lengths to make his evaluator recognize procedure applications
> before it checks for most other kinds of expressions. Help him by
> changing the syntax of the evaluated language so that procedure
> applications start with `call`. For example, instead of
> `(factorial 3)` we will now have to write `(call factorial 3)` and
> instead of `(+ 1 2)` we will have to write `(call + 1 2)`.
> **[]{#Exercise 4.3 label="Exercise 4.3"}Exercise 4.3:** Rewrite `eval`
> so that the dispatch is done in data-directed style. Compare this with
> the data-directed differentiation procedure of [Exercise
> 2.73](#Exercise 2.73). (You may use the `car` of a compound expression
> as the type of the expression, as is appropriate for the syntax
> implemented in this section.)
> **[]{#Exercise 4.4 label="Exercise 4.4"}Exercise 4.4:** Recall the
> definitions of the special forms `and` and `or` from [Chapter
> 1](#Chapter 1):
>
> - `and`: The expressions are evaluated from left to right. If any
> expression evaluates to false, false is returned; any remaining
> expressions are not evaluated. If all the expressions evaluate to
> true values, the value of the last expression is returned. If
> there are no expressions then true is returned.
>
> - `or`: The expressions are evaluated from left to right. If any
> expression evaluates to a true value, that value is returned; any
> remaining expressions are not evaluated. If all expressions
> evaluate to false, or if there are no expressions, then false is
> returned.
>
> Install `and` and `or` as new special forms for the evaluator by
> defining appropriate syntax procedures and evaluation procedures
> `eval/and` and `eval/or`. Alternatively, show how to implement `and`
> and `or` as derived expressions.
> **[]{#Exercise 4.5 label="Exercise 4.5"}Exercise 4.5:** Scheme allows
> an additional syntax for `cond` clauses,
> `(`$\langle$*`test`*$\rangle$` => `$\langle$*`recipient`*$\rangle$`)`.
> If $\langle$*test*$\kern0.08em\rangle$ evaluates to a true value, then
> $\langle$*recipient*$\kern0.08em\rangle$ is evaluated. Its value must
> be a procedure of one argument; this procedure is then invoked on the
> value of the $\langle$*test*$\kern0.08em\rangle$, and the result is
> returned as the value of the `cond` expression. For example
>
> ::: scheme
> (cond ((assoc 'b '((a 1) (b 2))) =\> cadr) (else false))
> :::
>
> returns 2. Modify the handling of `cond` so that it supports this
> extended syntax.
> **[]{#Exercise 4.6 label="Exercise 4.6"}Exercise 4.6:** `let`
> expressions are derived expressions, because
>
> ::: scheme
> (let
> (( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
> $\dots$
> ( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ ))
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> is equivalent to
>
> ::: scheme
> ((lambda
> ( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
> $\dots$
> $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
> $\dots$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
> :::
>
> Implement a syntactic transformation `let/>combination` that reduces
> evaluating `let` expressions to evaluating combinations of the type
> shown above, and add the appropriate clause to `eval` to handle `let`
> expressions.
> **[]{#Exercise 4.7 label="Exercise 4.7"}Exercise 4.7:** `let*` is
> similar to `let`, except that the bindings of the `let*` variables are
> performed sequentially from left to right, and each binding is made in
> an environment in which all of the preceding bindings are visible. For
> example
>
> ::: scheme
> (let\* ((x 3) (y (+ x 2)) (z (+ x y 5))) (\* x z))
> :::
>
> returns 39. Explain how a `let*` expression can be rewritten as a set
> of nested `let` expressions, and write a procedure `let*/>nested/lets`
> that performs this transformation. If we have already implemented
> `let` ([Exercise 4.6](#Exercise 4.6)) and we want to extend the
> evaluator to handle `let*`, is it sufficient to add a clause to `eval`
> whose action is
>
> ::: scheme
> (eval (let\*-\>nested-lets exp) env)
> :::
>
> or must we explicitly expand `let*` in terms of non-derived
> expressions?
> **[]{#Exercise 4.8 label="Exercise 4.8"}Exercise 4.8:** "Named `let`"
> is a variant of `let` that has the form
>
> ::: scheme
> (let $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *bindings* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> The $\langle$*bindings*$\kern0.08em\rangle$ and
> $\langle$*body*$\kern0.08em\rangle$ are just as in ordinary `let`,
> except that $\langle$*var*$\kern0.08em\rangle$ is bound within
> $\langle$*body*$\kern0.08em\rangle$ to a procedure whose body is
> $\langle$*body*$\kern0.08em\rangle$ and whose parameters are the
> variables in the $\langle$*bindings*$\kern0.08em\rangle$. Thus, one
> can repeatedly execute the $\langle$*body*$\kern0.08em\rangle$ by
> invoking the procedure named $\langle$*var*$\kern0.08em\rangle$. For
> example, the iterative Fibonacci procedure ([Section
> 1.2.2](#Section 1.2.2)) can be rewritten using named `let` as follows:
>
> ::: scheme
> (define (fib n)
>   (let fib-iter ((a 1) (b 0) (count n))
>     (if (= count 0)
>         b
>         (fib-iter (+ a b) a (- count 1)))))
> :::
>
> Modify `let/>combination` of [Exercise 4.6](#Exercise 4.6) to also
> support named `let`.
> **[]{#Exercise 4.9 label="Exercise 4.9"}Exercise 4.9:** Many languages
> support a variety of iteration constructs, such as `do`, `for`,
> `while`, and `until`. In Scheme, iterative processes can be expressed
> in terms of ordinary procedure calls, so special iteration constructs
> provide no essential gain in computational power. On the other hand,
> such constructs are often convenient. Design some iteration
> constructs, give examples of their use, and show how to implement them
> as derived expressions.
> **[]{#Exercise 4.10 label="Exercise 4.10"}Exercise 4.10:** By using
> data abstraction, we were able to write an `eval` procedure that is
> independent of the particular syntax of the language to be evaluated.
> To illustrate this, design and implement a new syntax for Scheme by
> modifying the procedures in this section, without changing `eval` or
> `apply`.
### Evaluator Data Structures {#Section 4.1.3}
In addition to defining the external syntax of expressions, the
evaluator implementation must also define the data structures that the
evaluator manipulates internally, as part of the execution of a program,
such as the representation of procedures and environments and the
representation of true and false.
#### Testing of predicates {#testing-of-predicates .unnumbered}
For conditionals, we accept anything to be true that is not the explicit
`false` object.
::: scheme
(define (true? x) (not (eq? x false)))
(define (false? x) (eq? x false))
:::
#### Representing procedures {#representing-procedures .unnumbered}
To handle primitives, we assume that we have available the following
procedures:
- `(apply/primitive/procedure `$\langle$*`proc`*$\rangle$` `$\langle$*`args`*$\rangle$`)`
applies the given primitive procedure to the argument values in the
list $\langle$*args*$\kern0.08em\rangle$ and returns the result of
the application.
- `(primitive/procedure? `$\langle$*`proc`*$\rangle$`)`
tests whether $\langle$*proc*$\kern0.08em\rangle$ is a primitive
procedure.
These mechanisms for handling primitives are further described in
[Section 4.1.4](#Section 4.1.4).
Compound procedures are constructed from parameters, procedure bodies,
and environments using the constructor `make/procedure`:
::: scheme
(define (make-procedure parameters body env)
  (list 'procedure parameters body env))
(define (compound-procedure? p)
  (tagged-list? p 'procedure))
(define (procedure-parameters p) (cadr p))
(define (procedure-body p) (caddr p))
(define (procedure-environment p) (cadddr p))
:::
#### Operations on Environments {#operations-on-environments .unnumbered}
The evaluator needs operations for manipulating environments. As
explained in [Section 3.2](#Section 3.2), an environment is a sequence
of frames, where each frame is a table of bindings that associate
variables with their corresponding values. We use the following
operations for manipulating environments:
- `(lookup/variable/value `$\langle$*`var`*$\rangle$` `$\langle$*`env`*$\rangle$`)`
returns the value that is bound to the symbol
$\langle$*var*$\kern0.08em\rangle$ in the environment
$\langle$*env*$\kern0.08em\rangle$, or signals an error if the
variable is unbound.
- `(extend/environment `$\langle$*`variables`*$\rangle$` `$\langle$*`values`*$\rangle$` `$\langle$*`base/env`*$\rangle$`)`
returns a new environment, consisting of a new frame in which the
symbols in the list $\langle$*variables*$\kern0.08em\rangle$ are
bound to the corresponding elements in the list
$\langle$*values*$\kern0.08em\rangle$, where the enclosing
environment is the environment
$\langle$*base-env*$\kern0.08em\rangle$.
- `(define/variable! `$\langle$*`var`*$\rangle$` `$\langle$*`value`*$\rangle$` `$\langle$*`env`*$\rangle$`)`
adds to the first frame in the environment
$\langle$*env*$\kern0.08em\rangle$ a new binding that associates the
variable $\langle$*var*$\kern0.08em\rangle$ with the value
$\langle$*value*$\kern0.08em\rangle$.
- `(set/variable/value! `$\langle$*`var`*$\rangle$` `$\langle$*`value`*$\rangle$` `$\langle$*`env`*$\rangle$`)`
changes the binding of the variable
$\langle$*var*$\kern0.08em\rangle$ in the environment
$\langle$*env*$\kern0.08em\rangle$ so that the variable is now bound
to the value $\langle$*value*$\kern0.08em\rangle$, or signals an
error if the variable is unbound.
To implement these operations we represent an environment as a list of
frames. The enclosing environment of an environment is the `cdr` of the
list. The empty environment is simply the empty list.
::: scheme
(define (enclosing-environment env) (cdr env))
(define (first-frame env) (car env))
(define the-empty-environment '())
:::
Each frame of an environment is represented as a pair of lists: a list
of the variables bound in that frame and a list of the associated
values.[^218]
::: scheme
(define (make-frame variables values)
  (cons variables values))
(define (frame-variables frame) (car frame))
(define (frame-values frame) (cdr frame))
(define (add-binding-to-frame! var val frame)
  (set-car! frame (cons var (car frame)))
  (set-cdr! frame (cons val (cdr frame))))
:::
To extend an environment by a new frame that associates variables with
values, we make a frame consisting of the list of variables and the list
of values, and we adjoin this to the environment. We signal an error if
the number of variables does not match the number of values.
::: scheme
(define (extend-environment vars vals base-env)
  (if (= (length vars) (length vals))
      (cons (make-frame vars vals) base-env)
      (if (< (length vars) (length vals))
          (error "Too many arguments supplied" vars vals)
          (error "Too few arguments supplied" vars vals))))
:::
To look up a variable in an environment, we scan the list of variables
in the first frame. If we find the desired variable, we return the
corresponding element in the list of values. If we do not find the
variable in the current frame, we search the enclosing environment, and
so on. If we reach the empty environment, we signal an "unbound
variable" error.
::: scheme
(define (lookup-variable-value var env)
  (define (env-loop env)
    (define (scan vars vals)
      (cond ((null? vars)
             (env-loop (enclosing-environment env)))
            ((eq? var (car vars)) (car vals))
            (else (scan (cdr vars) (cdr vals)))))
    (if (eq? env the-empty-environment)
        (error "Unbound variable" var)
        (let ((frame (first-frame env)))
          (scan (frame-variables frame)
                (frame-values frame)))))
  (env-loop env))
:::
To set a variable to a new value in a specified environment, we scan for
the variable, just as in `lookup/variable/value`, and change the
corresponding value when we find it.
::: scheme
(define (set-variable-value! var val env)
  (define (env-loop env)
    (define (scan vars vals)
      (cond ((null? vars)
             (env-loop (enclosing-environment env)))
            ((eq? var (car vars)) (set-car! vals val))
            (else (scan (cdr vars) (cdr vals)))))
    (if (eq? env the-empty-environment)
        (error "Unbound variable: SET!" var)
        (let ((frame (first-frame env)))
          (scan (frame-variables frame)
                (frame-values frame)))))
  (env-loop env))
:::
To define a variable, we search the first frame for a binding for the
variable, and change the binding if it exists (just as in
`set/variable/value!`). If no such binding exists, we adjoin one to the
first frame.
::: scheme
(define (define-variable! var val env)
  (let ((frame (first-frame env)))
    (define (scan vars vals)
      (cond ((null? vars)
             (add-binding-to-frame! var val frame))
            ((eq? var (car vars)) (set-car! vals val))
            (else (scan (cdr vars) (cdr vals)))))
    (scan (frame-variables frame) (frame-values frame))))
:::
The method described here is only one of many plausible ways to
represent environments. Since we used data abstraction to isolate the
rest of the evaluator from the detailed choice of representation, we
could change the environment representation if we wanted to. (See
[Exercise 4.11](#Exercise 4.11).) In a production-quality Lisp system,
the speed of the evaluator's environment operations---especially that of
variable lookup---has a major impact on the performance of the system.
The representation described here, although conceptually simple, is not
efficient and would not ordinarily be used in a production system.[^219]
> **[]{#Exercise 4.11 label="Exercise 4.11"}Exercise 4.11:** Instead of
> representing a frame as a pair of lists, we can represent a frame as a
> list of bindings, where each binding is a name-value pair. Rewrite the
> environment operations to use this alternative representation.
> **[]{#Exercise 4.12 label="Exercise 4.12"}Exercise 4.12:** The
> procedures `set-variable-value!`, `define-variable!` and
> `lookup-variable-value` can be expressed in terms of more abstract
> procedures for traversing the environment structure. Define
> abstractions that capture the common patterns and redefine the three
> procedures in terms of these abstractions.
> **[]{#Exercise 4.13 label="Exercise 4.13"}Exercise 4.13:** Scheme
> allows us to create new bindings for variables by means of `define`,
> but provides no way to get rid of bindings. Implement for the
> evaluator a special form `make-unbound!` that removes the binding of a
> given symbol from the environment in which the `make-unbound!`
> expression is evaluated. This problem is not completely specified. For
> example, should we remove only the binding in the first frame of the
> environment? Complete the specification and justify any choices you
> make.
### Running the Evaluator as a Program {#Section 4.1.4}
Given the evaluator, we have in our hands a description (expressed in
Lisp) of the process by which Lisp expressions are evaluated. One
advantage of expressing the evaluator as a program is that we can run
the program. This gives us, running within Lisp, a working model of how
Lisp itself evaluates expressions. This can serve as a framework for
experimenting with evaluation rules, as we shall do later in this
chapter.
Our evaluator program reduces expressions ultimately to the application
of primitive procedures. Therefore, all that we need to run the
evaluator is to create a mechanism that calls on the underlying Lisp
system to model the application of primitive procedures.
There must be a binding for each primitive procedure name, so that when
`eval` evaluates the operator of an application of a primitive, it will
find an object to pass to `apply`. We thus set up a global environment
that associates unique objects with the names of the primitive
procedures that can appear in the expressions we will be evaluating. The
global environment also includes bindings for the symbols `true` and
`false`, so that they can be used as variables in expressions to be
evaluated.
::: scheme
(define (setup-environment)
  (let ((initial-env
         (extend-environment (primitive-procedure-names)
                             (primitive-procedure-objects)
                             the-empty-environment)))
    (define-variable! 'true true initial-env)
    (define-variable! 'false false initial-env)
    initial-env))
(define the-global-environment (setup-environment))
:::
It does not matter how we represent the primitive procedure objects, so
long as `apply` can identify and apply them by using the procedures
`primitive-procedure?` and `apply-primitive-procedure`. We have chosen
to represent a primitive procedure as a list beginning with the symbol
`primitive` and containing a procedure in the underlying Lisp that
implements that primitive.
::: scheme
(define (primitive-procedure? proc) (tagged-list? proc 'primitive))
(define (primitive-implementation proc) (cadr proc))
:::
`setup-environment` will get the primitive names and implementation
procedures from a list:[^220]
::: scheme
(define primitive-procedures
  (list (list 'car car)
        (list 'cdr cdr)
        (list 'cons cons)
        (list 'null? null?)
        $\color{SchemeDark}\langle$ *more primitives* $\color{SchemeDark}\rangle$ ))
(define (primitive-procedure-names)
  (map car primitive-procedures))
(define (primitive-procedure-objects)
  (map (lambda (proc) (list 'primitive (cadr proc)))
       primitive-procedures))
:::
To apply a primitive procedure, we simply apply the implementation
procedure to the arguments, using the underlying Lisp system:[^221]
::: scheme
(define (apply-primitive-procedure proc args)
(apply-in-underlying-scheme (primitive-implementation proc) args))
:::
For convenience in running the metacircular evaluator, we provide a
*driver loop* that models the read-eval-print loop of the underlying
Lisp system. It prints a *prompt*, reads an input expression, evaluates
this expression in the global environment, and prints the result. We
precede each printed result by an *output prompt* so as to distinguish
the value of the expression from other output that may be printed.[^222]
::: scheme
(define input-prompt ";;; M-Eval input:")
(define output-prompt ";;; M-Eval value:")
(define (driver-loop)
  (prompt-for-input input-prompt)
  (let ((input (read)))
    (let ((output (eval input the-global-environment)))
      (announce-output output-prompt)
      (user-print output)))
  (driver-loop))
(define (prompt-for-input string)
  (newline) (newline) (display string) (newline))
(define (announce-output string)
  (newline) (display string) (newline))
:::
We use a special printing procedure, `user-print`, to avoid printing the
environment part of a compound procedure, which may be a very long list
(or may even contain cycles).
::: scheme
(define (user-print object)
  (if (compound-procedure? object)
      (display (list 'compound-procedure
                     (procedure-parameters object)
                     (procedure-body object)
                     '<procedure-env>))
      (display object)))
:::
Now all we need to do to run the evaluator is to initialize the global
environment and start the driver loop. Here is a sample interaction:
::: scheme
(define the-global-environment (setup-environment))
(driver-loop)
*;;; M-Eval input:*
(define (append x y)
  (if (null? x) y (cons (car x) (append (cdr x) y))))
*;;; M-Eval value:*
*ok*
*;;; M-Eval input:*
(append '(a b c) '(d e f))
*;;; M-Eval value:*
*(a b c d e f)*
:::
> **[]{#Exercise 4.14 label="Exercise 4.14"}Exercise 4.14:** Eva Lu Ator
> and Louis Reasoner are each experimenting with the metacircular
> evaluator. Eva types in the definition of `map`, and runs some test
> programs that use it. They work fine. Louis, in contrast, has
> installed the system version of `map` as a primitive for the
> metacircular evaluator. When he tries it, things go terribly wrong.
> Explain why Louis's `map` fails even though Eva's works.
### Data as Programs {#Section 4.1.5}
In thinking about a Lisp program that evaluates Lisp expressions, an
analogy might be helpful. One operational view of the meaning of a
program is that a program is a description of an abstract (perhaps
infinitely large) machine. For example, consider the familiar program to
compute factorials:
::: scheme
(define (factorial n) (if (= n 1) 1 (* (factorial (- n 1)) n)))
:::
We may regard this program as the description of a machine containing
parts that decrement, multiply, and test for equality, together with a
two-position switch and another factorial machine. (The factorial
machine is infinite because it contains another factorial machine within
it.) [Figure 4.2](#Figure 4.2) is a flow diagram for the factorial
machine, showing how the parts are wired together.
In a similar way, we can regard the evaluator as a very special machine
that takes as input a description of a machine. Given this input, the
evaluator configures itself to emulate the machine described. For
example, if we feed our evaluator the definition of `factorial`, as
shown in [Figure 4.3](#Figure 4.3), the evaluator will be able to
compute factorials.
[]{#Figure 4.2 label="Figure 4.2"}
![image](fig/chap4/Fig4.2.pdf){width="84mm"}
> **Figure 4.2:** The factorial program, viewed as an abstract machine.
From this perspective, our evaluator is seen to be a *universal
machine*. It mimics other machines when these are described as Lisp
programs.[^223] This is striking. Try to imagine an analogous evaluator
for electrical circuits. This would be a circuit that takes as input a
signal encoding the plans for some other circuit, such as a filter.
Given this input, the circuit evaluator would then behave like a filter
with the same description. Such a universal electrical circuit is almost
unimaginably complex. It is remarkable that the program evaluator is a
rather simple program.[^224]
[]{#Figure 4.3 label="Figure 4.3"}
![image](fig/chap4/Fig4.3.pdf){width="69mm"}
**Figure 4.3:** The evaluator emulating a factorial machine.
Another striking aspect of the evaluator is that it acts as a bridge
between the data objects that are manipulated by our programming
language and the programming language itself. Imagine that the evaluator
program (implemented in Lisp) is running, and that a user is typing
expressions to the evaluator and observing the results. From the
perspective of the user, an input expression such as `(* x x)` is an
expression in the programming language, which the evaluator should
execute. From the perspective of the evaluator, however, the expression
is simply a list (in this case, a list of three symbols: `*`, `x`, and
`x`) that is to be manipulated according to a well-defined set of rules.
That the user's programs are the evaluator's data need not be a source
of confusion. In fact, it is sometimes convenient to ignore this
distinction, and to give the user the ability to explicitly evaluate a
data object as a Lisp expression, by making `eval` available for use in
programs. Many Lisp dialects provide a primitive `eval` procedure that
takes as arguments an expression and an environment and evaluates the
expression relative to the environment.[^225] Thus,
::: scheme
(eval '(* 5 5) user-initial-environment)
:::
and
::: scheme
(eval (cons '* (list 5 5)) user-initial-environment)
:::
will both return 25.[^226]
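One way to experiment with this idea in the metacircular evaluator itself is to install its own `eval` as a primitive. The following is only a sketch under assumptions, not part of the evaluator in the text: it adds a hypothetical one-argument `eval` primitive that always evaluates in `the-global-environment` (unlike the two-argument `eval` described above).

::: scheme
;; Hypothetical addition to primitive-procedures (an assumption, not the
;; text's code): expose the metacircular eval to the programs it
;; evaluates.  This version takes only an expression and always uses
;; the-global-environment.
(define primitive-procedures
  (cons (list 'eval
              (lambda (exp) (eval exp the-global-environment)))
        primitive-procedures))
:::

With this entry installed before the global environment is built, typing `(eval '(car '(a b c)))` at the driver loop would behave like typing `(car '(a b c))` directly.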
> **[]{#Exercise 4.15 label="Exercise 4.15"}Exercise 4.15:** Given a
> one-argument procedure `p` and an object `a`, `p` is said to "halt" on
> `a` if evaluating the expression `(p a)` returns a value (as opposed
> to terminating with an error message or running forever). Show that it
> is impossible to write a procedure `halts?` that correctly determines
> whether `p` halts on `a` for any procedure `p` and object `a`. Use the
> following reasoning: If you had such a procedure `halts?`, you could
> implement the following program:
>
> ::: scheme
> (define (run-forever) (run-forever))
> (define (try p)
>   (if (halts? p p) (run-forever) 'halted))
> :::
>
> Now consider evaluating the expression `(try try)` and show that any
> possible outcome (either halting or running forever) violates the
> intended behavior of `halts?`.[^227]
### Internal Definitions {#Section 4.1.6}
Our environment model of evaluation and our metacircular evaluator
execute definitions in sequence, extending the environment frame one
definition at a time. This is particularly convenient for interactive
program development, in which the programmer needs to freely mix the
application of procedures with the definition of new procedures.
However, if we think carefully about the internal definitions used to
implement block structure (introduced in [Section
1.1.8](#Section 1.1.8)), we will find that name-by-name extension of the
environment may not be the best way to define local variables.
Consider a procedure with internal definitions, such as
::: scheme
(define (f x)
  (define (even? n) (if (= n 0) true (odd? (- n 1))))
  (define (odd? n) (if (= n 0) false (even? (- n 1))))
  $\color{SchemeDark}\langle$ *rest of body of `f`* $\color{SchemeDark}\rangle$ )
:::
Our intention here is that the name `odd?` in the body of the procedure
`even?` should refer to the procedure `odd?` that is defined after
`even?`. The scope of the name `odd?` is the entire body of `f`, not
just the portion of the body of `f` starting at the point where the
`define` for `odd?` occurs. Indeed, when we consider that `odd?` is
itself defined in terms of `even?`---so that `even?` and `odd?` are
mutually recursive procedures---we see that the only satisfactory
interpretation of the two `define`s is to regard them as if the names
`even?` and `odd?` were being added to the environment simultaneously.
More generally, in block structure, the scope of a local name is the
entire procedure body in which the `define` is evaluated.
As it happens, our interpreter will evaluate calls to `f` correctly, but
for an "accidental" reason: Since the definitions of the internal
procedures come first, no calls to these procedures will be evaluated
until all of them have been defined. Hence, `odd?` will have been
defined by the time `even?` is executed. In fact, our sequential
evaluation mechanism will give the same result as a mechanism that
directly implements simultaneous definition for any procedure in which
the internal definitions come first in a body and evaluation of the
value expressions for the defined variables doesn't actually use any of
the defined variables. (For an example of a procedure that doesn't obey
these restrictions, so that sequential definition isn't equivalent to
simultaneous definition, see [Exercise 4.19](#Exercise 4.19).)[^228]
There is, however, a simple way to treat definitions so that internally
defined names have truly simultaneous scope---just create all local
variables that will be in the current environment before evaluating any
of the value expressions. One way to do this is by a syntax
transformation on `lambda` expressions. Before evaluating the body of a
`lambda` expression, we "scan out" and eliminate all the internal
definitions in the body. The internally defined variables will be
created with a `let` and then set to their values by assignment. For
example, the procedure
::: scheme
(lambda
$\color{SchemeDark}\langle$ *vars* $\color{SchemeDark}\rangle$
(define u
$\color{SchemeDark}\langle$ *e1* $\color{SchemeDark}\rangle$ )
(define v
$\color{SchemeDark}\langle$ *e2* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *e3* $\color{SchemeDark}\rangle$ )
:::
would be transformed into
::: scheme
(lambda
$\color{SchemeDark}\langle$ *vars* $\color{SchemeDark}\rangle$ (let
((u '\*unassigned\*) (v '\*unassigned\*)) (set! u
$\color{SchemeDark}\langle$ *e1* $\color{SchemeDark}\rangle$ ) (set!
v $\color{SchemeDark}\langle$ *e2* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *e3* $\color{SchemeDark}\rangle$ ))
:::
where `*unassigned*` is a special symbol that causes looking up a
variable to signal an error if an attempt is made to use the value of
the not-yet-assigned variable.
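As a rough sketch of that mechanism (one possible approach; making the actual change to the evaluator is the subject of [Exercise 4.16](#Exercise 4.16)), the variable-lookup code could pass every value it is about to return through a check such as the following. The helper name is hypothetical, not part of the text's code.

::: scheme
;; Hypothetical helper: reject values that are still the *unassigned*
;; marker introduced by the scanning-out transformation.
(define (check-assigned var val)
  (if (eq? val '*unassigned*)
      (error "Unassigned variable" var)
      val))
:::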
An alternative strategy for scanning out internal definitions is shown
in [Exercise 4.18](#Exercise 4.18). Unlike the transformation shown
above, this enforces the restriction that the defined variables' values
can be evaluated without using any of the variables' values.[^229]
> **[]{#Exercise 4.16 label="Exercise 4.16"}Exercise 4.16:** In this
> exercise we implement the method just described for interpreting
> internal definitions. We assume that the evaluator supports `let` (see
> [Exercise 4.6](#Exercise 4.6)).
>
> a. Change `lookup-variable-value` ([Section 4.1.3](#Section 4.1.3))
> to signal an error if the value it finds is the symbol
> `*unassigned*`.
>
> b. Write a procedure `scan-out-defines` that takes a procedure body
> and returns an equivalent one that has no internal definitions, by
> making the transformation described above.
>
> c. Install `scan-out-defines` in the interpreter, either in
>    `make-procedure` or in `procedure-body` (see [Section
>    4.1.3](#Section 4.1.3)). Which place is better? Why?
> **[]{#Exercise 4.17 label="Exercise 4.17"}Exercise 4.17:** Draw
> diagrams of the environment in effect when evaluating the expression
> $\langle$*e3*$\kern0.1em\rangle$ in the procedure in the text,
> comparing how this will be structured when definitions are interpreted
> sequentially with how it will be structured if definitions are scanned
> out as described. Why is there an extra frame in the transformed
> program? Explain why this difference in environment structure can
> never make a difference in the behavior of a correct program. Design a
> way to make the interpreter implement the "simultaneous" scope rule
> for internal definitions without constructing the extra frame.
> **[]{#Exercise 4.18 label="Exercise 4.18"}Exercise 4.18:** Consider an
> alternative strategy for scanning out definitions that translates the
> example in the text to
>
> ::: scheme
> (lambda
> $\color{SchemeDark}\langle$ *vars* $\color{SchemeDark}\rangle$
> (let ((u '\*unassigned\*) (v '\*unassigned\*)) (let ((a
> $\color{SchemeDark}\langle$ *e1* $\color{SchemeDark}\rangle$ ) (b
> $\color{SchemeDark}\langle$ *e2* $\color{SchemeDark}\rangle$ ))
> (set! u a) (set! v b))
> $\color{SchemeDark}\langle$ *e3* $\color{SchemeDark}\rangle$ ))
> :::
>
> Here `a` and `b` are meant to represent new variable names, created by
> the interpreter, that do not appear in the user's program. Consider
> the `solve` procedure from [Section 3.5.4](#Section 3.5.4):
>
> ::: scheme
> (define (solve f y0 dt) (define y (integral (delay dy) y0 dt)) (define
> dy (stream-map f y)) y)
> :::
>
> Will this procedure work if internal definitions are scanned out as
> shown in this exercise? What if they are scanned out as shown in the
> text? Explain.
> **[]{#Exercise 4.19 label="Exercise 4.19"}Exercise 4.19:** Ben
> Bitdiddle, Alyssa P. Hacker, and Eva Lu Ator are arguing about the
> desired result of evaluating the expression
>
> ::: scheme
> (let ((a 1)) (define (f x) (define b (+ a x)) (define a 5) (+ a b)) (f
> 10))
> :::
>
> Ben asserts that the result should be obtained using the sequential
> rule for `define`: `b` is defined to be 11, then `a` is defined to be
> 5, so the result is 16. Alyssa objects that mutual recursion requires
> the simultaneous scope rule for internal procedure definitions, and
> that it is unreasonable to treat procedure names differently from
> other names. Thus, she argues for the mechanism implemented in
> [Exercise 4.16](#Exercise 4.16). This would lead to `a` being
> unassigned at the time that the value for `b` is to be computed.
> Hence, in Alyssa's view the procedure should produce an error. Eva has
> a third opinion. She says that if the definitions of `a` and `b` are
> truly meant to be simultaneous, then the value 5 for `a` should be
> used in evaluating `b`. Hence, in Eva's view `a` should be 5, `b`
> should be 15, and the result should be 20. Which (if any) of these
> viewpoints do you support? Can you devise a way to implement internal
> definitions so that they behave as Eva prefers?[^230]
> **[]{#Exercise 4.20 label="Exercise 4.20"}Exercise 4.20:** Because
> internal definitions look sequential but are actually simultaneous,
> some people prefer to avoid them entirely, and use the special form
> `letrec` instead. `letrec` looks like `let`, so it is not surprising
> that the variables it binds are bound simultaneously and have the same
> scope as each other. The sample procedure `f` above can be written
> without internal definitions, but with exactly the same meaning, as
>
> ::: scheme
> (define (f x) (letrec ((even? (lambda (n) (if (= n 0) true (odd? (- n
> 1))))) (odd? (lambda (n) (if (= n 0) false (even? (- n 1))))))
> $\color{SchemeDark}\langle$ *rest of body of
> `f`* $\color{SchemeDark}\rangle$ ))
> :::
>
> `letrec` expressions, which have the form
>
> ::: scheme
> (letrec
> (( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
> $\dots$
> ( $\color{SchemeDark}\langle$ *var* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$
> $\color{SchemeDark}\langle$ *exp* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ ))
> $\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ )
> :::
>
> are a variation on `let` in which the expressions
> $\langle$*exp*$_k\rangle$ that provide the initial values for the
> variables $\langle$*var*$_k\rangle$ are evaluated in an environment
> that includes all the `letrec` bindings. This permits recursion in the
> bindings, such as the mutual recursion of `even?` and `odd?` in the
> example above, or the evaluation of 10 factorial with
>
> ::: scheme
> (letrec ((fact (lambda (n) (if (= n 1) 1 (* n (fact (- n 1)))))))
> (fact 10))
> :::
>
> a. Implement `letrec` as a derived expression, by transforming a
> `letrec` expression into a `let` expression as shown in the text
> above or in [Exercise 4.18](#Exercise 4.18). That is, the `letrec`
> variables should be created with a `let` and then be assigned
> their values with `set!`.
>
> b. Louis Reasoner is confused by all this fuss about internal
> definitions. The way he sees it, if you don't like to use `define`
> inside a procedure, you can just use `let`. Illustrate what is
> loose about his reasoning by drawing an environment diagram that
> shows the environment in which the $\langle$*rest of body of
> `f`*$\kern0.08em\rangle$ is evaluated during evaluation of the
> expression `(f 5)`, with `f` defined as in this exercise. Draw an
> environment diagram for the same evaluation, but with `let` in
> place of `letrec` in the definition of `f`.
> **[]{#Exercise 4.21 label="Exercise 4.21"}Exercise 4.21:** Amazingly,
> Louis's intuition in [Exercise 4.20](#Exercise 4.20) is correct. It is
> indeed possible to specify recursive procedures without using `letrec`
> (or even `define`), although the method for accomplishing this is much
> more subtle than Louis imagined. The following expression computes 10
> factorial by applying a recursive factorial procedure:[^231]
>
> ::: scheme
> ((lambda (n)
>    ((lambda (fact) (fact fact n))
>     (lambda (ft k)
>       (if (= k 1) 1 (* k (ft ft (- k 1)))))))
>  10)
> :::
>
> a. Check (by evaluating the expression) that this really does compute
> factorials. Devise an analogous expression for computing Fibonacci
> numbers.
>
> b. Consider the following procedure, which includes mutually
> recursive internal definitions:
>
> ::: scheme
> (define (f x) (define (even? n) (if (= n 0) true (odd? (- n 1))))
> (define (odd? n) (if (= n 0) false (even? (- n 1)))) (even? x))
> :::
>
> Fill in the missing expressions to complete an alternative
> definition of `f`, which uses neither internal definitions nor
> `letrec`:
>
> ::: scheme
> (define (f x) ((lambda (even? odd?) (even? even? odd? x)) (lambda
> (ev? od? n) (if (= n 0) true (od?
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))
> (lambda (ev? od? n) (if (= n 0) false (ev?
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$ )))))
> :::
### Separating Syntactic Analysis from Execution {#Section 4.1.7}
The evaluator implemented above is simple, but it is very inefficient,
because the syntactic analysis of expressions is interleaved with their
execution. Thus if a program is executed many times, its syntax is
analyzed many times. Consider, for example, evaluating `(factorial 4)`
using the following definition of `factorial`:
::: scheme
(define (factorial n) (if (= n 1) 1 (* (factorial (- n 1)) n)))
:::
Each time `factorial` is called, the evaluator must determine that the
body is an `if` expression and extract the predicate. Only then can it
evaluate the predicate and dispatch on its value. Each time it evaluates
the expression `(* (factorial (- n 1)) n)`, or the subexpressions
`(factorial (- n 1))` and `(- n 1)`, the evaluator must perform the case
analysis in `eval` to determine that the expression is an application,
and must extract its operator and operands. This analysis is expensive.
Performing it repeatedly is wasteful.
We can transform the evaluator to be significantly more efficient by
arranging things so that syntactic analysis is performed only
once.[^232] We split `eval`, which takes an expression and an
environment, into two parts. The procedure `analyze` takes only the
expression. It performs the syntactic analysis and returns a new
procedure, the *execution procedure*, that encapsulates the work to be
done in executing the analyzed expression. The execution procedure takes
an environment as its argument and completes the evaluation. This saves
work because `analyze` will be called only once on an expression, while
the execution procedure may be called many times.
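In sketch form, the payoff looks like this (a hypothetical usage, assuming the analyzing evaluator defined below has been loaded and that `factorial` has already been defined in `the-global-environment`):

::: scheme
;; Analyze once, then execute the resulting procedure many times.
(define execute-fact (analyze '(factorial 4)))  ; syntactic analysis happens here, once
(execute-fact the-global-environment)           ; pure execution
(execute-fact the-global-environment)           ; re-execution, with no re-analysis
:::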
With the separation into analysis and execution, `eval` now becomes
::: scheme
(define (eval exp env) ((analyze exp) env))
:::
The result of calling `analyze` is the execution procedure to be applied
to the environment. The `analyze` procedure is the same case analysis as
performed by the original `eval` of [Section 4.1.1](#Section 4.1.1),
except that the procedures to which we dispatch perform only analysis,
not full evaluation:
::: scheme
(define (analyze exp)
  (cond ((self-evaluating? exp) (analyze-self-evaluating exp))
        ((quoted? exp) (analyze-quoted exp))
        ((variable? exp) (analyze-variable exp))
        ((assignment? exp) (analyze-assignment exp))
        ((definition? exp) (analyze-definition exp))
        ((if? exp) (analyze-if exp))
        ((lambda? exp) (analyze-lambda exp))
        ((begin? exp) (analyze-sequence (begin-actions exp)))
        ((cond? exp) (analyze (cond->if exp)))
        ((application? exp) (analyze-application exp))
        (else (error "Unknown expression type: ANALYZE" exp))))
:::
Here is the simplest syntactic analysis procedure, which handles
self-evaluating expressions. It returns an execution procedure that
ignores its environment argument and just returns the expression:
::: scheme
(define (analyze-self-evaluating exp) (lambda (env) exp))
:::
For a quoted expression, we can gain a little efficiency by extracting
the text of the quotation only once, in the analysis phase, rather than
in the execution phase.
::: scheme
(define (analyze-quoted exp)
  (let ((qval (text-of-quotation exp)))
    (lambda (env) qval)))
:::
Looking up a variable value must still be done in the execution phase,
since this depends upon knowing the environment.[^233]
::: scheme
(define (analyze-variable exp)
  (lambda (env) (lookup-variable-value exp env)))
:::
`analyze-assignment` also must defer actually setting the variable until
the execution, when the environment has been supplied. However, the fact
that the `assignment-value` expression can be analyzed (recursively)
during analysis is a major gain in efficiency, because the
`assignment-value` expression will now be analyzed only once. The same
holds true for definitions.
::: scheme
(define (analyze-assignment exp)
  (let ((var (assignment-variable exp))
        (vproc (analyze (assignment-value exp))))
    (lambda (env)
      (set-variable-value! var (vproc env) env)
      'ok)))
(define (analyze-definition exp)
  (let ((var (definition-variable exp))
        (vproc (analyze (definition-value exp))))
    (lambda (env)
      (define-variable! var (vproc env) env)
      'ok)))
:::
For `if` expressions, we extract and analyze the predicate, consequent,
and alternative at analysis time.
::: scheme
(define (analyze-if exp)
  (let ((pproc (analyze (if-predicate exp)))
        (cproc (analyze (if-consequent exp)))
        (aproc (analyze (if-alternative exp))))
    (lambda (env)
      (if (true? (pproc env)) (cproc env) (aproc env)))))
:::
Analyzing a `lambda` expression also achieves a major gain in
efficiency: We analyze the `lambda` body only once, even though
procedures resulting from evaluation of the `lambda` may be applied many
times.
::: scheme
(define (analyze-lambda exp)
  (let ((vars (lambda-parameters exp))
        (bproc (analyze-sequence (lambda-body exp))))
    (lambda (env) (make-procedure vars bproc env))))
:::
Analysis of a sequence of expressions (as in a `begin` or the body of a
`lambda` expression) is more involved.[^234] Each expression in the
sequence is analyzed, yielding an execution procedure. These execution
procedures are combined to produce an execution procedure that takes an
environment as argument and sequentially calls each individual execution
procedure with the environment as argument.
::: scheme
(define (analyze-sequence exps)
  (define (sequentially proc1 proc2)
    (lambda (env) (proc1 env) (proc2 env)))
  (define (loop first-proc rest-procs)
    (if (null? rest-procs)
        first-proc
        (loop (sequentially first-proc (car rest-procs))
              (cdr rest-procs))))
  (let ((procs (map analyze exps)))
    (if (null? procs) (error "Empty sequence: ANALYZE"))
    (loop (car procs) (cdr procs))))
:::
To analyze an application, we analyze the operator and operands and
construct an execution procedure that calls the operator execution
procedure (to obtain the actual procedure to be applied) and the operand
execution procedures (to obtain the actual arguments). We then pass
these to `execute-application`, which is the analog of `apply` in
[Section 4.1.1](#Section 4.1.1). `execute-application` differs from
`apply` in that the procedure body for a compound procedure has already
been analyzed, so there is no need to do further analysis. Instead, we
just call the execution procedure for the body on the extended
environment.
::: scheme
(define (analyze-application exp)
  (let ((fproc (analyze (operator exp)))
        (aprocs (map analyze (operands exp))))
    (lambda (env)
      (execute-application
       (fproc env)
       (map (lambda (aproc) (aproc env)) aprocs)))))
(define (execute-application proc args)
  (cond ((primitive-procedure? proc)
         (apply-primitive-procedure proc args))
        ((compound-procedure? proc)
         ((procedure-body proc)
          (extend-environment (procedure-parameters proc)
                              args
                              (procedure-environment proc))))
        (else (error "Unknown procedure type: EXECUTE-APPLICATION" proc))))
:::
Our new evaluator uses the same data structures, syntax procedures, and
run-time support procedures as in sections [Section
4.1.2](#Section 4.1.2), [Section 4.1.3](#Section 4.1.3), and [Section
4.1.4](#Section 4.1.4).
> **[]{#Exercise 4.22 label="Exercise 4.22"}Exercise 4.22:** Extend the
> evaluator in this section to support the special form `let`. (See
> [Exercise 4.6](#Exercise 4.6).)
> **[]{#Exercise 4.23 label="Exercise 4.23"}Exercise 4.23:** Alyssa P.
> Hacker doesn't understand why `analyze-sequence` needs to be so
> complicated. All the other analysis procedures are straightforward
> transformations of the corresponding evaluation procedures (or `eval`
> clauses) in [Section 4.1.1](#Section 4.1.1). She expected
> `analyze-sequence` to look like this:
>
> ::: scheme
> (define (analyze-sequence exps)
>   (define (execute-sequence procs env)
>     (cond ((null? (cdr procs)) ((car procs) env))
>           (else ((car procs) env)
>                 (execute-sequence (cdr procs) env))))
>   (let ((procs (map analyze exps)))
>     (if (null? procs) (error "Empty sequence: ANALYZE"))
>     (lambda (env) (execute-sequence procs env))))
> :::
>
> Eva Lu Ator explains to Alyssa that the version in the text does more
> of the work of evaluating a sequence at analysis time. Alyssa's
> sequence-execution procedure, rather than having the calls to the
> individual execution procedures built in, loops through the procedures
> in order to call them: In effect, although the individual expressions
> in the sequence have been analyzed, the sequence itself has not been.
>
> Compare the two versions of `analyze-sequence`. For example, consider
> the common case (typical of procedure bodies) where the sequence has
> just one expression. What work will the execution procedure produced
> by Alyssa's program do? What about the execution procedure produced by
> the program in the text above? How do the two versions compare for a
> sequence with two expressions?
> **[]{#Exercise 4.24 label="Exercise 4.24"}Exercise 4.24:** Design and
> carry out some experiments to compare the speed of the original
> metacircular evaluator with the version in this section. Use your
> results to estimate the fraction of time that is spent in analysis
> versus execution for various procedures.
## Variations on a Scheme --- Lazy Evaluation {#Section 4.2}
Now that we have an evaluator expressed as a Lisp program, we can
experiment with alternative choices in language design simply by
modifying the evaluator. Indeed, new languages are often invented by
first writing an evaluator that embeds the new language within an
existing high-level language. For example, if we wish to discuss some
aspect of a proposed modification to Lisp with another member of the
Lisp community, we can supply an evaluator that embodies the change. The
recipient can then experiment with the new evaluator and send back
comments as further modifications. Not only does the high-level
implementation base make it easier to test and debug the evaluator; in
addition, the embedding enables the designer to snarf [^235] features
from the underlying language, just as our embedded Lisp evaluator uses
primitives and control structure from the underlying Lisp. Only later
(if ever) need the designer go to the trouble of building a complete
implementation in a low-level language or in hardware. In this section
and the next we explore some variations on Scheme that provide
significant additional expressive power.
### Normal Order and Applicative Order {#Section 4.2.1}
In [Section 1.1](#Section 1.1), where we began our discussion of models
of evaluation, we noted that Scheme is an *applicative-order* language,
namely, that all the arguments to Scheme procedures are evaluated when
the procedure is applied. In contrast, *normal-order* languages delay
evaluation of procedure arguments until the actual argument values are
needed. Delaying evaluation of procedure arguments until the last
possible moment (e.g., until they are required by a primitive operation)
is called *lazy evaluation*.[^236] Consider the procedure
::: scheme
(define (try a b) (if (= a 0) 1 b))
:::
Evaluating `(try 0 (/ 1 0))` generates an error in Scheme. With lazy
evaluation, there would be no error. Evaluating the expression would
return 1, because the argument `(/ 1 0)` would never be evaluated.
An example that exploits lazy evaluation is the definition of a
procedure `unless`
::: scheme
(define (unless condition usual-value exceptional-value)
  (if condition exceptional-value usual-value))
:::
that can be used in expressions such as
::: scheme
(unless (= b 0)
        (/ a b)
        (begin (display "exception: returning 0") 0))
:::
This won't work in an applicative-order language because both the usual
value and the exceptional value will be evaluated before `unless` is
called (compare [Exercise 1.6](#Exercise 1.6)). An advantage of lazy
evaluation is that some procedures, such as `unless`, can do useful
computation even if evaluation of some of their arguments would produce
errors or would not terminate.
If the body of a procedure is entered before an argument has been
evaluated we say that the procedure is *non-strict* in that argument. If
the argument is evaluated before the body of the procedure is entered we
say that the procedure is *strict* in that argument.[^237] In a purely
applicative-order language, all procedures are strict in each argument.
In a purely normal-order language, all compound procedures are
non-strict in each argument, and primitive procedures may be either
strict or non-strict. There are also languages (see [Exercise
4.31](#Exercise 4.31)) that give programmers detailed control over the
strictness of the procedures they define.
A striking example of a procedure that can usefully be made non-strict
is `cons` (or, in general, almost any constructor for data structures).
One can do useful computation, combining elements to form data
structures and operating on the resulting data structures, even if the
values of the elements are not known. It makes perfect sense, for
instance, to compute the length of a list without knowing the values of
the individual elements in the list. We will exploit this idea in
[Section 4.2.3](#Section 4.2.3) to implement the streams of [Chapter
3](#Chapter 3) as lists formed of non-strict `cons` pairs.
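For a concrete (if forward-looking) illustration, here is the kind of interaction one could try once the lazy evaluator of [Section 4.2.2](#Section 4.2.2) and the procedural pairs of [Section 4.2.3](#Section 4.2.3) are in place. This is only a sketch, assuming `+`, `/`, and `null?` are installed as primitives:

::: scheme
;; length never examines the elements, so the divisions are never forced.
(define (length items)
  (if (null? items)
      0
      (+ 1 (length (cdr items)))))
(length (cons (/ 1 0) (cons (/ 2 0) '())))   ; would return 2
:::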
> **[]{#Exercise 4.25 label="Exercise 4.25"}Exercise 4.25:** Suppose
> that (in ordinary applicative-order Scheme) we define `unless` as
> shown above and then define `factorial` in terms of `unless` as
>
> ::: scheme
> (define (factorial n) (unless (= n 1) (* n (factorial (- n 1))) 1))
> :::
>
> What happens if we attempt to evaluate `(factorial 5)`? Will our
> definitions work in a normal-order language?
> **[]{#Exercise 4.26 label="Exercise 4.26"}Exercise 4.26:** Ben
> Bitdiddle and Alyssa P. Hacker disagree over the importance of lazy
> evaluation for implementing things such as `unless`. Ben points out
> that it's possible to implement `unless` in applicative order as a
> special form. Alyssa counters that, if one did that, `unless` would be
> merely syntax, not a procedure that could be used in conjunction with
> higher-order procedures. Fill in the details on both sides of the
> argument. Show how to implement `unless` as a derived expression (like
> `cond` or `let`), and give an example of a situation where it might be
> useful to have `unless` available as a procedure, rather than as a
> special form.
### An Interpreter with Lazy Evaluation {#Section 4.2.2}
In this section we will implement a normal-order language that is the
same as Scheme except that compound procedures are non-strict in each
argument. Primitive procedures will still be strict. It is not difficult
to modify the evaluator of [Section 4.1.1](#Section 4.1.1) so that the
language it interprets behaves this way. Almost all the required changes
center around procedure application.
The basic idea is that, when applying a procedure, the interpreter must
determine which arguments are to be evaluated and which are to be
delayed. The delayed arguments are not evaluated; instead, they are
transformed into objects called *thunks*.[^238] The thunk must contain
the information required to produce the value of the argument when it is
needed, as if it had been evaluated at the time of the application.
Thus, the thunk must contain the argument expression and the environment
in which the procedure application is being evaluated.
The process of evaluating the expression in a thunk is called
*forcing*.[^239] In general, a thunk will be forced only when its value
is needed: when it is passed to a primitive procedure that will use the
value of the thunk; when it is the value of a predicate of a
conditional; and when it is the value of an operator that is about to be
applied as a procedure. One design choice we have available is whether
or not to *memoize* thunks, as we did with delayed objects in [Section
3.5.1](#Section 3.5.1). With memoization, the first time a thunk is
forced, it stores the value that is computed. Subsequent forcings simply
return the stored value without repeating the computation. We'll make
our interpreter memoize, because this is more efficient for many
applications. There are tricky considerations here, however.[^240]
#### Modifying the evaluator {#modifying-the-evaluator .unnumbered}
The main difference between the lazy evaluator and the one in [Section
4.1](#Section 4.1) is in the handling of procedure applications in
`eval` and `apply`.
The `application?` clause of `eval` becomes
::: scheme
((application? exp)
 (apply (actual-value (operator exp) env)
        (operands exp)
        env))
:::
This is almost the same as the `application?` clause of `eval` in
[Section 4.1.1](#Section 4.1.1). For lazy evaluation, however, we call
`apply` with the operand expressions, rather than the arguments produced
by evaluating them. Since we will need the environment to construct
thunks if the arguments are to be delayed, we must pass this as well. We
still evaluate the operator, because `apply` needs the actual procedure
to be applied in order to dispatch on its type (primitive versus
compound) and apply it.
Whenever we need the actual value of an expression, we use
::: scheme
(define (actual-value exp env) (force-it (eval exp env)))
:::
instead of just `eval`, so that if the expression's value is a thunk, it
will be forced.
Our new version of `apply` is also almost the same as the version in
[Section 4.1.1](#Section 4.1.1). The difference is that `eval` has
passed in unevaluated operand expressions: For primitive procedures
(which are strict), we evaluate all the arguments before applying the
primitive; for compound procedures (which are non-strict) we delay all
the arguments before applying the procedure.
::: scheme
(define (apply procedure arguments env)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure
          procedure
          (list-of-arg-values arguments env)))  [; changed]{.roman}
        ((compound-procedure? procedure)
         (eval-sequence
          (procedure-body procedure)
          (extend-environment
           (procedure-parameters procedure)
           (list-of-delayed-args arguments env)  [; changed]{.roman}
           (procedure-environment procedure))))
        (else (error "Unknown procedure type: APPLY" procedure))))
:::
The procedures that process the arguments are just like `list-of-values`
from [Section 4.1.1](#Section 4.1.1), except that `list-of-delayed-args`
delays the arguments instead of evaluating them, and
`list-of-arg-values` uses `actual-value` instead of `eval`:
::: scheme
(define (list-of-arg-values exps env)
  (if (no-operands? exps)
      '()
      (cons (actual-value (first-operand exps) env)
            (list-of-arg-values (rest-operands exps) env))))
(define (list-of-delayed-args exps env)
  (if (no-operands? exps)
      '()
      (cons (delay-it (first-operand exps) env)
            (list-of-delayed-args (rest-operands exps) env))))
:::
The other place we must change the evaluator is in the handling of `if`,
where we must use `actual-value` instead of `eval` to get the value of
the predicate expression before testing whether it is true or false:
::: scheme
(define (eval-if exp env)
  (if (true? (actual-value (if-predicate exp) env))
      (eval (if-consequent exp) env)
      (eval (if-alternative exp) env)))
:::
Finally, we must change the `driver-loop` procedure ([Section
4.1.4](#Section 4.1.4)) to use `actual-value` instead of `eval`, so that
if a delayed value is propagated back to the read-eval-print loop, it
will be forced before being printed. We also change the prompts to
indicate that this is the lazy evaluator:
::: scheme
(define input-prompt ";;; L-Eval input:")
(define output-prompt ";;; L-Eval value:")
(define (driver-loop)
  (prompt-for-input input-prompt)
  (let ((input (read)))
    (let ((output (actual-value input the-global-environment)))
      (announce-output output-prompt)
      (user-print output)))
  (driver-loop))
:::
With these changes made, we can start the evaluator and test it. The
successful evaluation of the `try` expression discussed in [Section
4.2.1](#Section 4.2.1) indicates that the interpreter is performing lazy
evaluation:
::: scheme
(define the-global-environment (setup-environment))
(driver-loop)
*;;; L-Eval input:*
(define (try a b) (if (= a 0) 1 b))
*;;; L-Eval value:*
*ok*
*;;; L-Eval input:*
(try 0 (/ 1 0))
*;;; L-Eval value:*
*1*
:::
#### Representing thunks {#representing-thunks .unnumbered}
Our evaluator must arrange to create thunks when procedures are applied
to arguments and to force these thunks later. A thunk must package an
expression together with the environment, so that the argument can be
produced later. To force the thunk, we simply extract the expression and
environment from the thunk and evaluate the expression in the
environment. We use `actual-value` rather than `eval` so that in case
the value of the expression is itself a thunk, we will force that, and
so on, until we reach something that is not a thunk:
::: scheme
(define (force-it obj)
  (if (thunk? obj)
      (actual-value (thunk-exp obj) (thunk-env obj))
      obj))
:::
One easy way to package an expression with an environment is to make a
list containing the expression and the environment. Thus, we create a
thunk as follows:
::: scheme
(define (delay-it exp env) (list 'thunk exp env))
(define (thunk? obj) (tagged-list? obj 'thunk))
(define (thunk-exp thunk) (cadr thunk))
(define (thunk-env thunk) (caddr thunk))
:::
Actually, what we want for our interpreter is not quite this, but rather
thunks that have been memoized. When a thunk is forced, we will turn it
into an evaluated thunk by replacing the stored expression with its
value and changing the `thunk` tag so that it can be recognized as
already evaluated.[^241]
::: scheme
(define (evaluated-thunk? obj) (tagged-list? obj 'evaluated-thunk))
(define (thunk-value evaluated-thunk) (cadr evaluated-thunk))
(define (force-it obj)
  (cond ((thunk? obj)
         (let ((result (actual-value (thunk-exp obj) (thunk-env obj))))
           (set-car! obj 'evaluated-thunk)
           (set-car! (cdr obj) result)  [; replace `exp` with its value]{.roman}
           (set-cdr! (cdr obj) '())     [; forget unneeded `env`]{.roman}
           result))
        ((evaluated-thunk? obj) (thunk-value obj))
        (else obj)))
:::
Notice that the same `delay-it` procedure works both with and without
memoization.
> **[]{#Exercise 4.27 label="Exercise 4.27"}Exercise 4.27:** Suppose we
> type in the following definitions to the lazy evaluator:
>
> ::: scheme
> (define count 0) (define (id x) (set! count (+ count 1)) x)
> :::
>
> Give the missing values in the following sequence of interactions, and
> explain your answers.[^242]
>
> ::: scheme
> (define w (id (id 10))) *;;; L-Eval input:* count *;;; L-Eval
> value:*
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> *;;; L-Eval input:* w *;;; L-Eval value:*
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> *;;; L-Eval input:* count *;;; L-Eval value:*
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> :::
> **[]{#Exercise 4.28 label="Exercise 4.28"}Exercise 4.28:** `eval` uses
> `actual-value` rather than `eval` to evaluate the operator before
> passing it to `apply`, in order to force the value of the operator.
> Give an example that demonstrates the need for this forcing.
>
> **[]{#Exercise 4.29 label="Exercise 4.29"}Exercise 4.29:** Exhibit a
> program that you would expect to run much more slowly without
> memoization than with memoization. Also, consider the following
> interaction, where the `id` procedure is defined as in [Exercise
> 4.27](#Exercise 4.27) and `count` starts at 0:
>
> ::: scheme
> (define (square x) (* x x))
> *;;; L-Eval input:*
> (square (id 10))
> *;;; L-Eval value:*
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> *;;; L-Eval input:*
> count
> *;;; L-Eval value:*
> $\color{SchemeDark}\langle$ *response* $\color{SchemeDark}\rangle$
> :::
>
> Give the responses both when the evaluator memoizes and when it does
> not.
> **[]{#Exercise 4.30 label="Exercise 4.30"}Exercise 4.30:** Cy D. Fect,
> a reformed C programmer, is worried that some side effects may never
> take place, because the lazy evaluator doesn't force the expressions
> in a sequence. Since the value of an expression in a sequence other
> than the last one is not used (the expression is there only for its
> effect, such as assigning to a variable or printing), there can be no
> subsequent use of this value (e.g., as an argument to a primitive
> procedure) that will cause it to be forced. Cy thus thinks that when
> evaluating sequences, we must force all expressions in the sequence
> except the final one. He proposes to modify `eval-sequence` from
> [Section 4.1.1](#Section 4.1.1) to use `actual-value` rather than
> `eval`:
>
> ::: scheme
> (define (eval-sequence exps env)
>   (cond ((last-exp? exps) (eval (first-exp exps) env))
>         (else (actual-value (first-exp exps) env)
>               (eval-sequence (rest-exps exps) env))))
> :::
>
> a. Ben Bitdiddle thinks Cy is wrong. He shows Cy the `for-each`
> procedure described in [Exercise 2.23](#Exercise 2.23), which
> gives an important example of a sequence with side effects:
>
> ::: scheme
> (define (for-each proc items) (if (null? items) 'done (begin (proc
> (car items)) (for-each proc (cdr items)))))
> :::
>
> He claims that the evaluator in the text (with the original
> `eval/sequence`) handles this correctly:
>
> ::: scheme
> *;;; L-Eval input:* (for-each (lambda (x) (newline) (display x))
> (list 57 321 88)) *57* *321* *88* *;;; L-Eval value:*
> *done*
> :::
>
> Explain why Ben is right about the behavior of `for/each`.
>
> b. Cy agrees that Ben is right about the `for-each` example, but says
> that that's not the kind of program he was thinking about when he
> proposed his change to `eval/sequence`. He defines the following
> two procedures in the lazy evaluator:
>
> ::: scheme
> (define (p1 x) (set! x (cons x '(2))) x) (define (p2 x) (define
> (p e) e x) (p (set! x (cons x '(2)))))
> :::
>
> What are the values of `(p1 1)` and `(p2 1)` with the original
> `eval/sequence`? What would the values be with Cy's proposed
> change to `eval/sequence`?
>
> c. Cy also points out that changing `eval-sequence` as he proposes
> does not affect the behavior of the example in part a. Explain why
> this is true.
>
> d. How do you think sequences ought to be treated in the lazy
> evaluator? Do you like Cy's approach, the approach in the text, or
> some other approach?
> **[]{#Exercise 4.31 label="Exercise 4.31"}Exercise 4.31:** The
> approach taken in this section is somewhat unpleasant, because it
> makes an incompatible change to Scheme. It might be nicer to implement
> lazy evaluation as an *upward-compatible extension*, that is, so that
> ordinary Scheme programs will work as before. We can do this by
> extending the syntax of procedure declarations to let the user control
> whether or not arguments are to be delayed. While we're at it, we may
> as well also give the user the choice between delaying with and
> without memoization. For example, the definition
>
> ::: scheme
> (define (f a (b lazy) c (d lazy-memo)) $\dots$ )
> :::
>
> would define `f` to be a procedure of four arguments, where the first
> and third arguments are evaluated when the procedure is called, the
> second argument is delayed, and the fourth argument is both delayed
> and memoized. Thus, ordinary procedure definitions will produce the
> same behavior as ordinary Scheme, while adding the `lazy-memo`
> declaration to each parameter of every compound procedure will produce
> the behavior of the lazy evaluator defined in this section. Design and
> implement the changes required to produce such an extension to Scheme.
> You will have to implement new syntax procedures to handle the new
> syntax for `define`. You must also arrange for `eval` or `apply` to
> determine when arguments are to be delayed, and to force or delay
> arguments accordingly, and you must arrange for forcing to memoize or
> not, as appropriate.
### Streams as Lazy Lists {#Section 4.2.3}
In [Section 3.5.1](#Section 3.5.1), we showed how to implement streams
as delayed lists. We introduced special forms `delay` and `cons-stream`,
which allowed us to construct a "promise" to compute the `cdr` of a
stream, without actually fulfilling that promise until later. We could
use this general technique of introducing special forms whenever we need
more control over the evaluation process, but this is awkward. For one
thing, a special form is not a first-class object like a procedure, so
we cannot use it together with higher-order procedures.[^243]
Additionally, we were forced to create streams as a new kind of data
object similar but not identical to lists, and this required us to
reimplement many ordinary list operations (`map`, `append`, and so on)
for use with streams.
With lazy evaluation, streams and lists can be identical, so there is no
need for special forms or for separate list and stream operations. All
we need to do is to arrange matters so that `cons` is non-strict. One
way to accomplish this is to extend the lazy evaluator to allow for
non-strict primitives, and to implement `cons` as one of these. An
easier way is to recall ([Section 2.1.3](#Section 2.1.3)) that there is
no fundamental need to implement `cons` as a primitive at all. Instead,
we can represent pairs as procedures:[^244]
::: scheme
(define (cons x y) (lambda (m) (m x y)))
(define (car z) (z (lambda (p q) p)))
(define (cdr z) (z (lambda (p q) q)))
:::
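To see why these definitions behave like pairs, it can help to trace a selector call with the substitution model (an informal check; in the lazy evaluator the arguments are delayed rather than substituted, but the result is the same):

::: scheme
(car (cons 1 2))
; = ((cons 1 2) (lambda (p q) p))
; = ((lambda (m) (m 1 2)) (lambda (p q) p))
; = ((lambda (p q) p) 1 2)
; = 1
:::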
In terms of these basic operations, the standard definitions of the list
operations will work with infinite lists (streams) as well as finite
ones, and the stream operations can be implemented as list operations.
Here are some examples:
::: scheme
(define (list-ref items n)
  (if (= n 0)
      (car items)
      (list-ref (cdr items) (- n 1))))
(define (map proc items)
  (if (null? items)
      '()
      (cons (proc (car items)) (map proc (cdr items)))))
(define (scale-list items factor)
  (map (lambda (x) (* x factor)) items))
(define (add-lists list1 list2)
  (cond ((null? list1) list2)
        ((null? list2) list1)
        (else (cons (+ (car list1) (car list2))
                    (add-lists (cdr list1) (cdr list2))))))
(define ones (cons 1 ones))
(define integers (cons 1 (add-lists ones integers)))
*;;; L-Eval input:*
(list-ref integers 17)
*;;; L-Eval value:*
*18*
:::
Note that these lazy lists are even lazier than the streams of [Chapter
3](#Chapter 3): The `car` of the list, as well as the `cdr`, is
delayed.[^245] In fact, even accessing the `car` or `cdr` of a lazy pair
need not force the value of a list element. The value will be forced
only when it is really needed---e.g., for use as the argument of a
primitive, or to be printed as an answer.
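For example (a sketch of an interaction in the lazy evaluator, assuming the definitions above and that `/` and `+` are installed as primitives):

::: scheme
(define x (cons (/ 1 0) '()))   ; ok: the element is captured as an unforced thunk
(define y (car x))              ; still ok: y is bound to the delayed element
(+ y 1)                         ; only now is (/ 1 0) forced, signaling an error
:::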
Lazy pairs also help with the problem that arose with streams in
[Section 3.5.4](#Section 3.5.4), where we found that formulating stream
models of systems with loops may require us to sprinkle our programs
with explicit `delay` operations, beyond the ones supplied by
`cons-stream`. With lazy evaluation, all arguments to procedures are
delayed uniformly. For instance, we can implement procedures to
integrate lists and solve differential equations as we originally
intended in [Section 3.5.4](#Section 3.5.4):
::: scheme
(define (integral integrand initial-value dt)
  (define int
    (cons initial-value
          (add-lists (scale-list integrand dt) int)))
  int)
(define (solve f y0 dt)
  (define y (integral dy y0 dt))
  (define dy (map f y))
  y)
*;;; L-Eval input:*
(list-ref (solve (lambda (x) x) 1 0.001) 1000)
*;;; L-Eval value:*
*2.716924*
:::
> **[]{#Exercise 4.32 label="Exercise 4.32"}Exercise 4.32:** Give some
> examples that illustrate the difference between the streams of
> [Chapter 3](#Chapter 3) and the "lazier" lazy lists described in this
> section. How can you take advantage of this extra laziness?
> **[]{#Exercise 4.33 label="Exercise 4.33"}Exercise 4.33:** Ben
> Bitdiddle tests the lazy list implementation given above by evaluating
> the expression:
>
> ::: scheme
> (car '(a b c))
> :::
>
> To his surprise, this produces an error. After some thought, he
> realizes that the "lists" obtained by reading in quoted expressions
> are different from the lists manipulated by the new definitions of
> `cons`, `car`, and `cdr`. Modify the evaluator's treatment of quoted
> expressions so that quoted lists typed at the driver loop will produce
> true lazy lists.
> **[]{#Exercise 4.34 label="Exercise 4.34"}Exercise 4.34:** Modify the
> driver loop for the evaluator so that lazy pairs and lists will print
> in some reasonable way. (What are you going to do about infinite
> lists?) You may also need to modify the representation of lazy pairs
> so that the evaluator can identify them in order to print them.
## Variations on a Scheme --- Nondeterministic Computing {#Section 4.3}
In this section, we extend the Scheme evaluator to support a programming
paradigm called *nondeterministic computing* by building into the
evaluator a facility to support automatic search. This is a much more
profound change to the language than the introduction of lazy evaluation
in [Section 4.2](#Section 4.2).
Nondeterministic computing, like stream processing, is useful for
"generate and test" applications. Consider the task of starting with two
lists of positive integers and finding a pair of integers---one from the
first list and one from the second list---whose sum is prime. We saw how
to handle this with finite sequence operations in [Section
2.2.3](#Section 2.2.3) and with infinite streams in [Section
3.5.3](#Section 3.5.3). Our approach was to generate the sequence of all
possible pairs and filter these to select the pairs whose sum is prime.
Whether we actually generate the entire sequence of pairs first as in
[Chapter 2](#Chapter 2), or interleave the generating and filtering as
in [Chapter 3](#Chapter 3), is immaterial to the essential image of how
the computation is organized.
The nondeterministic approach evokes a different image. Imagine simply
that we choose (in some way) a number from the first list and a number
from the second list and require (using some mechanism) that their sum
be prime. This is expressed by the following procedure:
::: scheme
(define (prime-sum-pair list1 list2) (let ((a (an-element-of list1)) (b
(an-element-of list2))) (require (prime? (+ a b))) (list a b)))
:::
It might seem as if this procedure merely restates the problem, rather
than specifying a way to solve it. Nevertheless, this is a legitimate
nondeterministic program.[^246]
The key idea here is that expressions in a nondeterministic language can
have more than one possible value. For instance, `an-element-of` might
return any element of the given list. Our nondeterministic program
evaluator will work by automatically choosing a possible value and
keeping track of the choice. If a subsequent requirement is not met, the
evaluator will try a different choice, and it will keep trying new
choices until the evaluation succeeds, or until we run out of choices.
Just as the lazy evaluator freed the programmer from the details of how
values are delayed and forced, the nondeterministic program evaluator
will free the programmer from the details of how choices are made.
It is instructive to contrast the different images of time evoked by
nondeterministic evaluation and stream processing. Stream processing
uses lazy evaluation to decouple the time when the stream of possible
answers is assembled from the time when the actual stream elements are
produced. The evaluator supports the illusion that all the possible
answers are laid out before us in a timeless sequence. With
nondeterministic evaluation, an expression represents the exploration of
a set of possible worlds, each determined by a set of choices. Some of
the possible worlds lead to dead ends, while others have useful values.
The nondeterministic program evaluator supports the illusion that time
branches, and that our programs have different possible execution
histories. When we reach a dead end, we can revisit a previous choice
point and proceed along a different branch.
The nondeterministic program evaluator implemented below is called the
`amb` evaluator because it is based on a new special form called `amb`.
We can type the above definition of `prime-sum-pair` at the `amb`
evaluator driver loop (along with definitions of `prime?`,
`an-element-of`, and `require`) and run the procedure as follows:
::: scheme
*;;; Amb-Eval input:* (prime-sum-pair '(1 3 5 8) '(20 35 110)) *;;;
Starting a new problem* *;;; Amb-Eval value:* *(3 20)*
:::
The value returned was obtained after the evaluator repeatedly chose
elements from each of the lists, until a successful choice was made.
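The run above assumes that `prime?` is available; a minimal sketch, along the lines of the trial-division test from [Chapter 1](#Chapter 1), would be:
::: scheme
(define (prime? n)
  (define (divides? a b) (= (remainder b a) 0))
  (define (find-divisor n test-divisor)
    (cond ((> (* test-divisor test-divisor) n) n)
          ((divides? test-divisor n) test-divisor)
          (else (find-divisor n (+ test-divisor 1)))))
  (and (> n 1) (= n (find-divisor n 2))))
:::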
[Section 4.3.1](#Section 4.3.1) introduces `amb` and explains how it
supports nondeterminism through the evaluator's automatic search
mechanism. [Section 4.3.2](#Section 4.3.2) presents examples of
nondeterministic programs, and [Section 4.3.3](#Section 4.3.3) gives the
details of how to implement the `amb` evaluator by modifying the
ordinary Scheme evaluator.
### Amb and Search {#Section 4.3.1}
To extend Scheme to support nondeterminism, we introduce a new special
form called `amb`.[^247] The expression
::: scheme
(amb ⟨e₁⟩ ⟨e₂⟩ … ⟨eₙ⟩)
:::
returns the value of one of the $n$ expressions ⟨eᵢ⟩ "ambiguously." For
example, the expression
::: scheme
(list (amb 1 2 3) (amb 'a 'b))
:::
can have six possible values:
::: scheme
`(1 a)` `(1 b)` `(2 a)` `(2 b)` `(3 a)` `(3 b)`
:::
`amb` with a single choice produces an ordinary (single) value.
`amb` with no choices---the expression `(amb)`---is an expression with
no acceptable values. Operationally, we can think of `(amb)` as an
expression that when evaluated causes the computation to "fail": The
computation aborts and no value is produced. Using this idea, we can
express the requirement that a particular predicate expression `p` must
be true as follows:
::: scheme
(define (require p) (if (not p) (amb)))
:::
With `amb` and `require`, we can implement the `an/element/of` procedure
used above:
::: scheme
(define (an-element-of items) (require (not (null? items))) (amb (car
items) (an-element-of (cdr items))))
:::
`an-element-of` fails if the list is empty. Otherwise it ambiguously
returns either the first element of the list or an element chosen from
the rest of the list.
We can also express infinite ranges of choices. The following procedure
potentially returns any integer greater than or equal to some given $n$:
::: scheme
(define (an-integer-starting-from n) (amb n (an-integer-starting-from (+
n 1))))
:::
This is like the stream procedure `integers-starting-from` described in
[Section 3.5.2](#Section 3.5.2), but with an important difference: The
stream procedure returns an object that represents the sequence of all
integers beginning with $n$, whereas the `amb` procedure returns a
single integer.[^248]
Abstractly, we can imagine that evaluating an `amb` expression causes
time to split into branches, where the computation continues on each
branch with one of the possible values of the expression. We say that
`amb` represents a *nondeterministic choice point*. If we had a machine
with a sufficient number of processors that could be dynamically
allocated, we could implement the search in a straightforward way.
Execution would proceed as in a sequential machine, until an `amb`
expression is encountered. At this point, more processors would be
allocated and initialized to continue all of the parallel executions
implied by the choice. Each processor would proceed sequentially as if
it were the only choice, until it either terminates by encountering a
failure, or it further subdivides, or it finishes.[^249]
On the other hand, if we have a machine that can execute only one
process (or a few concurrent processes), we must consider the
alternatives sequentially. One could imagine modifying an evaluator to
pick at random a branch to follow whenever it encounters a choice point.
Random choice, however, can easily lead to failing values. We might try
running the evaluator over and over, making random choices and hoping to
find a non-failing value, but it is better to *systematically search*
all possible execution paths. The `amb` evaluator that we will develop
and work with in this section implements a systematic search as follows:
When the evaluator encounters an application of `amb`, it initially
selects the first alternative. This selection may itself lead to a
further choice. The evaluator will always initially choose the first
alternative at each choice point. If a choice results in a failure, then
the evaluator automagically[^250] *backtracks* to the most recent choice
point and tries the next alternative. If it runs out of alternatives at
any choice point, the evaluator will back up to the previous choice
point and resume from there. This process leads to a search strategy
known as *depth-first search* or *chronological backtracking*.[^251]
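For example, applied to the six-valued expression shown earlier, this strategy explores the choices in left-to-right order; at the driver loop described next we would therefore expect an interaction along these lines:
::: scheme
*;;; Amb-Eval input:* (list (amb 1 2 3) (amb 'a 'b))
*;;; Starting a new problem* *;;; Amb-Eval value:* *(1 a)*
*;;; Amb-Eval input:* try-again *;;; Amb-Eval value:* *(1 b)*
*;;; Amb-Eval input:* try-again *;;; Amb-Eval value:* *(2 a)*
:::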
#### Driver loop {#driver-loop .unnumbered}
The driver loop for the `amb` evaluator has some unusual properties. It
reads an expression and prints the value of the first non-failing
execution, as in the `prime-sum-pair` example shown above. If we want to
see the value of the next successful execution, we can ask the
interpreter to backtrack and attempt to generate a second non-failing
execution. This is signaled by typing the symbol `try-again`. If any
expression except `try-again` is given, the interpreter will start a new
problem, discarding the unexplored alternatives in the previous problem.
Here is a sample interaction:
::: scheme
*;;; Amb-Eval input:* (prime-sum-pair '(1 3 5 8) '(20 35 110)) *;;;
Starting a new problem* *;;; Amb-Eval value:* *(3 20)*
*;;; Amb-Eval input:* try-again *;;; Amb-Eval value:* *(3 110)*
*;;; Amb-Eval input:* try-again *;;; Amb-Eval value:* *(8 35)*
*;;; Amb-Eval input:* try-again *;;; There are no more values of*
*(prime-sum-pair (quote (1 3 5 8)) (quote (20 35 110)))*
*;;; Amb-Eval input:* (prime-sum-pair '(19 27 30) '(11 36 58)) *;;;
Starting a new problem* *;;; Amb-Eval value:* *(30 11)*
:::
> **[]{#Exercise 4.35 label="Exercise 4.35"}Exercise 4.35:** Write a
> procedure `an-integer-between` that returns an integer between two
> given bounds. This can be used to implement a procedure that finds
> Pythagorean triples, i.e., triples of integers $(i, j, k)$ between the
> given bounds such that $i \le j$ and $i^2 + j^2 = k^2$, as follows:
>
> ::: scheme
> (define (a-pythagorean-triple-between low high)
>   (let ((i (an-integer-between low high)))
>     (let ((j (an-integer-between i high)))
>       (let ((k (an-integer-between j high)))
>         (require (= (+ (* i i) (* j j)) (* k k)))
>         (list i j k)))))
> :::
> **[]{#Exercise 4.36 label="Exercise 4.36"}Exercise 4.36:** [Exercise
> 3.69](#Exercise 3.69) discussed how to generate the stream of *all*
> Pythagorean triples, with no upper bound on the size of the integers
> to be searched. Explain why simply replacing `an-integer-between` by
> `an-integer-starting-from` in the procedure in [Exercise
> 4.35](#Exercise 4.35) is not an adequate way to generate arbitrary
> Pythagorean triples. Write a procedure that actually will accomplish
> this. (That is, write a procedure for which repeatedly typing
> `try-again` would in principle eventually generate all Pythagorean
> triples.)
> **[]{#Exercise 4.37 label="Exercise 4.37"}Exercise 4.37:** Ben
> Bitdiddle claims that the following method for generating Pythagorean
> triples is more efficient than the one in [Exercise
> 4.35](#Exercise 4.35). Is he correct? (Hint: Consider the number of
> possibilities that must be explored.)
>
> ::: scheme
> (define (a-pythagorean-triple-between low high)
>   (let ((i (an-integer-between low high))
>         (hsq (* high high)))
>     (let ((j (an-integer-between i high)))
>       (let ((ksq (+ (* i i) (* j j))))
>         (require (>= hsq ksq))
>         (let ((k (sqrt ksq)))
>           (require (integer? k))
>           (list i j k))))))
> :::
### Examples of Nondeterministic Programs {#Section 4.3.2}
[Section 4.3.3](#Section 4.3.3) describes the implementation of the
`amb` evaluator. First, however, we give some examples of how it can be
used. The advantage of nondeterministic programming is that we can
suppress the details of how search is carried out, thereby expressing
our programs at a higher level of abstraction.
#### Logic Puzzles {#logic-puzzles .unnumbered}
The following puzzle (taken from [Dinesman 1968](#Dinesman 1968)) is
typical of a large class of simple logic puzzles:
> Baker, Cooper, Fletcher, Miller, and Smith live on different floors of
> an apartment house that contains only five floors. Baker does not live
> on the top floor. Cooper does not live on the bottom floor. Fletcher
> does not live on either the top or the bottom floor. Miller lives on a
> higher floor than does Cooper. Smith does not live on a floor adjacent
> to Fletcher's. Fletcher does not live on a floor adjacent to Cooper's.
> Where does everyone live?
We can determine who lives on each floor in a straightforward way by
enumerating all the possibilities and imposing the given
restrictions:[^252]
::: scheme
(define (multiple-dwelling)
  (let ((baker (amb 1 2 3 4 5)) (cooper (amb 1 2 3 4 5))
        (fletcher (amb 1 2 3 4 5)) (miller (amb 1 2 3 4 5))
        (smith (amb 1 2 3 4 5)))
    (require (distinct? (list baker cooper fletcher miller smith)))
    (require (not (= baker 5)))
    (require (not (= cooper 1)))
    (require (not (= fletcher 5)))
    (require (not (= fletcher 1)))
    (require (> miller cooper))
    (require (not (= (abs (- smith fletcher)) 1)))
    (require (not (= (abs (- fletcher cooper)) 1)))
    (list (list 'baker baker) (list 'cooper cooper)
          (list 'fletcher fletcher) (list 'miller miller)
          (list 'smith smith))))
:::
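The procedure relies on a `distinct?` predicate, which tests whether the elements of a list are all different from one another; a straightforward definition is:
::: scheme
(define (distinct? items)
  (cond ((null? items) true)
        ((null? (cdr items)) true)
        ((member (car items) (cdr items)) false)
        (else (distinct? (cdr items)))))
:::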
Evaluating the expression `(multiple-dwelling)` produces the result
::: scheme
((baker 3) (cooper 2) (fletcher 4) (miller 5) (smith 1))
:::
Although this simple procedure works, it is very slow. [Exercise
4.39](#Exercise 4.39) and [Exercise 4.40](#Exercise 4.40) discuss some
possible improvements.
> **[]{#Exercise 4.38 label="Exercise 4.38"}Exercise 4.38:** Modify the
> multiple-dwelling procedure to omit the requirement that Smith and
> Fletcher do not live on adjacent floors. How many solutions are there
> to this modified puzzle?
> **[]{#Exercise 4.39 label="Exercise 4.39"}Exercise 4.39:** Does the
> order of the restrictions in the multiple-dwelling procedure affect
> the answer? Does it affect the time to find an answer? If you think it
> matters, demonstrate a faster program obtained from the given one by
> reordering the restrictions. If you think it does not matter, argue
> your case.
> **[]{#Exercise 4.40 label="Exercise 4.40"}Exercise 4.40:** In the
> multiple dwelling problem, how many sets of assignments are there of
> people to floors, both before and after the requirement that floor
> assignments be distinct? It is very inefficient to generate all
> possible assignments of people to floors and then leave it to
> backtracking to eliminate them. For example, most of the restrictions
> depend on only one or two of the person-floor variables, and can thus
> be imposed before floors have been selected for all the people. Write
> and demonstrate a much more efficient nondeterministic procedure that
> solves this problem based upon generating only those possibilities
> that are not already ruled out by previous restrictions. (Hint: This
> will require a nest of `let` expressions.)
> **[]{#Exercise 4.41 label="Exercise 4.41"}Exercise 4.41:** Write an
> ordinary Scheme program to solve the multiple dwelling puzzle.
> **[]{#Exercise 4.42 label="Exercise 4.42"}Exercise 4.42:** Solve the
> following "Liars" puzzle (from [Phillips 1934](#Phillips 1934)):
>
> Five schoolgirls sat for an examination. Their parents---so they
> thought---showed an undue degree of interest in the result. They
> therefore agreed that, in writing home about the examination, each
> girl should make one true statement and one untrue one. The following
> are the relevant passages from their letters:
>
> - Betty: "Kitty was second in the examination. I was only third."
>
> - Ethel: "You'll be glad to hear that I was on top. Joan was 2nd."
>
> - Joan: "I was third, and poor old Ethel was bottom."
>
> - Kitty: "I came out second. Mary was only fourth."
>
> - Mary: "I was fourth. Top place was taken by Betty."
>
> What in fact was the order in which the five girls were placed?
> **[]{#Exercise 4.43 label="Exercise 4.43"}Exercise 4.43:** Use the
> `amb` evaluator to solve the following puzzle:[^253]
>
> Mary Ann Moore's father has a yacht and so has each of his four
> friends: Colonel Downing, Mr. Hall, Sir Barnacle Hood, and Dr. Parker.
> Each of the five also has one daughter and each has named his yacht
> after a daughter of one of the others. Sir Barnacle's yacht is the
> Gabrielle, Mr. Moore owns the Lorna; Mr. Hall the Rosalind. The
> Melissa, owned by Colonel Downing, is named after Sir Barnacle's
> daughter. Gabrielle's father owns the yacht that is named after Dr.
> Parker's daughter. Who is Lorna's father?
>
> Try to write the program so that it runs efficiently (see [Exercise
> 4.40](#Exercise 4.40)). Also determine how many solutions there are if
> we are not told that Mary Ann's last name is Moore.
> **[]{#Exercise 4.44 label="Exercise 4.44"}Exercise 4.44:** [Exercise
> 2.42](#Exercise 2.42) described the "eight-queens puzzle" of placing
> queens on a chessboard so that no two attack each other. Write a
> nondeterministic program to solve this puzzle.
#### Parsing natural language {#parsing-natural-language .unnumbered}
Programs designed to accept natural language as input usually start by
attempting to *parse* the input, that is, to match the input against
some grammatical structure. For example, we might try to recognize
simple sentences consisting of an article followed by a noun followed by
a verb, such as "The cat eats." To accomplish such an analysis, we must
be able to identify the parts of speech of individual words. We could
start with some lists that classify various words:[^254]
::: scheme
(define nouns '(noun student professor cat class)) (define verbs '(verb
studies lectures eats sleeps)) (define articles '(article the a))
:::
We also need a *grammar*, that is, a set of rules describing how
grammatical elements are composed from simpler elements. A very simple
grammar might stipulate that a sentence always consists of two
pieces---a noun phrase followed by a verb---and that a noun phrase
consists of an article followed by a noun. With this grammar, the
sentence "The cat eats" is parsed as follows:
::: scheme
(sentence (noun-phrase (article the) (noun cat)) (verb eats))
:::
We can generate such a parse with a simple program that has separate
procedures for each of the grammatical rules. To parse a sentence, we
identify its two constituent pieces and return a list of these two
elements, tagged with the symbol `sentence`:
::: scheme
(define (parse-sentence) (list 'sentence (parse-noun-phrase) (parse-word
verbs)))
:::
A noun phrase, similarly, is parsed by finding an article followed by a
noun:
::: scheme
(define (parse-noun-phrase) (list 'noun-phrase (parse-word articles)
(parse-word nouns)))
:::
At the lowest level, parsing boils down to repeatedly checking that the
next unparsed word is a member of the list of words for the required
part of speech. To implement this, we maintain a global variable
`*unparsed*`, which is the input that has not yet been parsed. Each time
we check a word, we require that `*unparsed*` must be non-empty and that
it should begin with a word from the designated list. If so, we remove
that word from `*unparsed*` and return the word together with its part
of speech (which is found at the head of the list):[^255]
::: scheme
(define (parse-word word-list)
  (require (not (null? *unparsed*)))
  (require (memq (car *unparsed*) (cdr word-list)))
  (let ((found-word (car *unparsed*)))
    (set! *unparsed* (cdr *unparsed*))
    (list (car word-list) found-word)))
:::
To start the parsing, all we need to do is set `*unparsed*` to be the
entire input, try to parse a sentence, and check that nothing is left
over:
::: scheme
(define *unparsed* '())
(define (parse input)
  (set! *unparsed* input)
  (let ((sent (parse-sentence)))
    (require (null? *unparsed*))
    sent))
:::
We can now try the parser and verify that it works for our simple test
sentence:
::: scheme
*;;; Amb-Eval input:* (parse '(the cat eats)) *;;; Starting a new
problem* *;;; Amb-Eval value:*
:::
::: smallscheme
*(sentence (noun-phrase (article the) (noun cat)) (verb eats))*
:::
The `amb` evaluator is useful here because it is convenient to express
the parsing constraints with the aid of `require`. Automatic search and
backtracking really pay off, however, when we consider more complex
grammars where there are choices for how the units can be decomposed.
Let's add to our grammar a list of prepositions:
::: scheme
(define prepositions '(prep for to in by with))
:::
and define a prepositional phrase (e.g., "for the cat") to be a
preposition followed by a noun phrase:
::: scheme
(define (parse-prepositional-phrase) (list 'prep-phrase (parse-word
prepositions) (parse-noun-phrase)))
:::
Now we can define a sentence to be a noun phrase followed by a verb
phrase, where a verb phrase can be either a verb or a verb phrase
extended by a prepositional phrase:[^256]
::: scheme
(define (parse-sentence) (list 'sentence (parse-noun-phrase)
(parse-verb-phrase))) (define (parse-verb-phrase) (define (maybe-extend
verb-phrase) (amb verb-phrase (maybe-extend (list 'verb-phrase
verb-phrase (parse-prepositional-phrase))))) (maybe-extend (parse-word
verbs)))
:::
While we're at it, we can also elaborate the definition of noun phrases
to permit such things as "a cat in the class." What we used to call a
noun phrase, we'll now call a simple noun phrase, and a noun phrase will
now be either a simple noun phrase or a noun phrase extended by a
prepositional phrase:
::: scheme
(define (parse-simple-noun-phrase) (list 'simple-noun-phrase (parse-word
articles) (parse-word nouns))) (define (parse-noun-phrase) (define
(maybe-extend noun-phrase) (amb noun-phrase (maybe-extend (list
'noun-phrase noun-phrase (parse-prepositional-phrase))))) (maybe-extend
(parse-simple-noun-phrase)))
:::
Our new grammar lets us parse more complex sentences. For example
::: scheme
(parse '(the student with the cat sleeps in the class))
:::
produces
::: scheme
(sentence (noun-phrase (simple-noun-phrase (article the) (noun student))
(prep-phrase (prep with) (simple-noun-phrase (article the) (noun cat))))
(verb-phrase (verb sleeps) (prep-phrase (prep in) (simple-noun-phrase
(article the) (noun class)))))
:::
Observe that a given input may have more than one legal parse. In the
sentence "The professor lectures to the student with the cat," it may be
that the professor is lecturing with the cat, or that the student has
the cat. Our nondeterministic program finds both possibilities:
::: scheme
(parse '(the professor lectures to the student with the cat))
:::
produces
::: scheme
(sentence (simple-noun-phrase (article the) (noun professor))
(verb-phrase (verb-phrase (verb lectures) (prep-phrase (prep to)
(simple-noun-phrase (article the) (noun student)))) (prep-phrase (prep
with) (simple-noun-phrase (article the) (noun cat)))))
:::
Asking the evaluator to try again yields
::: scheme
(sentence (simple-noun-phrase (article the) (noun professor))
(verb-phrase (verb lectures) (prep-phrase (prep to) (noun-phrase
(simple-noun-phrase (article the) (noun student)) (prep-phrase (prep
with) (simple-noun-phrase (article the) (noun cat)))))))
:::
> **[]{#Exercise 4.45 label="Exercise 4.45"}Exercise 4.45:** With the
> grammar given above, the following sentence can be parsed in five
> different ways: "The professor lectures to the student in the class
> with the cat." Give the five parses and explain the differences in
> shades of meaning among them.
> **[]{#Exercise 4.46 label="Exercise 4.46"}Exercise 4.46:** The
> evaluators in [Section 4.1](#Section 4.1) and [Section
> 4.2](#Section 4.2) do not determine what order operands are evaluated
> in. We will see that the `amb` evaluator evaluates them from left to
> right. Explain why our parsing program wouldn't work if the operands
> were evaluated in some other order.
> **[]{#Exercise 4.47 label="Exercise 4.47"}Exercise 4.47:** Louis
> Reasoner suggests that, since a verb phrase is either a verb or a verb
> phrase followed by a prepositional phrase, it would be much more
> straightforward to define the procedure `parse-verb-phrase` as follows
> (and similarly for noun phrases):
>
> ::: scheme
> (define (parse-verb-phrase) (amb (parse-word verbs) (list 'verb-phrase
> (parse-verb-phrase) (parse-prepositional-phrase))))
> :::
>
> Does this work? Does the program's behavior change if we interchange
> the order of expressions in the `amb`?
> **[]{#Exercise 4.48 label="Exercise 4.48"}Exercise 4.48:** Extend the
> grammar given above to handle more complex sentences. For example, you
> could extend noun phrases and verb phrases to include adjectives and
> adverbs, or you could handle compound sentences.[^257]
> **[]{#Exercise 4.49 label="Exercise 4.49"}Exercise 4.49:** Alyssa P.
> Hacker is more interested in generating interesting sentences than in
> parsing them. She reasons that by simply changing the procedure
> `parse-word` so that it ignores the "input sentence" and instead
> always succeeds and generates an appropriate word, we can use the
> programs we had built for parsing to do generation instead. Implement
> Alyssa's idea, and show the first half-dozen or so sentences
> generated.[^258]
### Implementing the `amb` Evaluator {#Section 4.3.3}
The evaluation of an ordinary Scheme expression may return a value, may
never terminate, or may signal an error. In nondeterministic Scheme the
evaluation of an expression may in addition result in the discovery of a
dead end, in which case evaluation must backtrack to a previous choice
point. The interpretation of nondeterministic Scheme is complicated by
this extra case.
We will construct the `amb` evaluator for nondeterministic Scheme by
modifying the analyzing evaluator of [Section
4.1.7](#Section 4.1.7).[^259] As in the analyzing evaluator, evaluation
of an expression is accomplished by calling an execution procedure
produced by analysis of that expression. The difference between the
interpretation of ordinary Scheme and the interpretation of
nondeterministic Scheme will be entirely in the execution procedures.
#### Execution procedures and continuations {#execution-procedures-and-continuations .unnumbered}
Recall that the execution procedures for the ordinary evaluator take one
argument: the environment of execution. In contrast, the execution
procedures in the `amb` evaluator take three arguments: the environment,
and two procedures called *continuation procedures*. The evaluation of
an expression will finish by calling one of these two continuations: If
the evaluation results in a value, the *success continuation* is called
with that value; if the evaluation results in the discovery of a dead
end, the *failure continuation* is called. Constructing and calling
appropriate continuations is the mechanism by which the nondeterministic
evaluator implements backtracking.
It is the job of the success continuation to receive a value and proceed
with the computation. Along with that value, the success continuation is
passed another failure continuation, which is to be called subsequently
if the use of that value leads to a dead end.
It is the job of the failure continuation to try another branch of the
nondeterministic process. The essence of the nondeterministic language
is in the fact that expressions may represent choices among
alternatives. The evaluation of such an expression must proceed with one
of the indicated alternative choices, even though it is not known in
advance which choices will lead to acceptable results. To deal with
this, the evaluator picks one of the alternatives and passes this value
to the success continuation. Together with this value, the evaluator
constructs and passes along a failure continuation that can be called
later to choose a different alternative.
A failure is triggered during evaluation (that is, a failure
continuation is called) when a user program explicitly rejects the
current line of attack (for example, a call to `require` may result in
execution of `(amb)`, an expression that always fails---see [Section
4.3.1](#Section 4.3.1)). The failure continuation in hand at that point
will cause the most recent choice point to choose another alternative.
If there are no more alternatives to be considered at that choice point,
a failure at an earlier choice point is triggered, and so on. Failure
continuations are also invoked by the driver loop in response to a
`try-again` request, to find another value of the expression.
In addition, if a side-effect operation (such as assignment to a
variable) occurs on a branch of the process resulting from a choice, it
may be necessary, when the process finds a dead end, to undo the side
effect before making a new choice. This is accomplished by having the
side-effect operation produce a failure continuation that undoes the
side effect and propagates the failure.
In summary, failure continuations are constructed by
- `amb` expressions---to provide a mechanism to make alternative
choices if the current choice made by the `amb` expression leads to
a dead end;
- the top-level driver---to provide a mechanism to report failure when
the choices are exhausted;
- assignments---to intercept failures and undo assignments during
backtracking.
Failures are initiated only when a dead end is encountered. This occurs
- if the user program executes `(amb)`;
- if the user types `try-again` at the top-level driver.
Failure continuations are also called during processing of a failure:
- When the failure continuation created by an assignment finishes
undoing a side effect, it calls the failure continuation it
intercepted, in order to propagate the failure back to the choice
point that led to this assignment or to the top level.
- When the failure continuation for an `amb` runs out of choices, it
calls the failure continuation that was originally given to the
`amb`, in order to propagate the failure back to the previous choice
point or to the top level.
#### Structure of the evaluator {#structure-of-the-evaluator .unnumbered}
The syntax- and data-representation procedures for the `amb` evaluator,
and also the basic `analyze` procedure, are identical to those in the
evaluator of [Section 4.1.7](#Section 4.1.7), except for the fact that
we need additional syntax procedures to recognize the `amb` special
form:[^260]
::: scheme
(define (amb? exp) (tagged-list? exp 'amb)) (define (amb-choices exp)
(cdr exp))
:::
We must also add to the dispatch in `analyze` a clause that will
recognize this special form and generate an appropriate execution
procedure:
::: scheme
((amb? exp) (analyze-amb exp))
:::
The top-level procedure `ambeval` (similar to the version of `eval`
given in [Section 4.1.7](#Section 4.1.7)) analyzes the given expression
and applies the resulting execution procedure to the given environment,
together with two given continuations:
::: scheme
(define (ambeval exp env succeed fail) ((analyze exp) env succeed fail))
:::
A success continuation is a procedure of two arguments: the value just
obtained and another failure continuation to be used if that value leads
to a subsequent failure. A failure continuation is a procedure of no
arguments. So the general form of an execution procedure is
::: scheme
(lambda (env succeed fail)
  ;; succeed is (lambda (value fail) ...)
  ;; fail is (lambda () ...)
  ...)
:::
For example, executing
::: scheme
(ambeval ⟨exp⟩
         the-global-environment
         (lambda (value fail) value)
         (lambda () 'failed))
:::
will attempt to evaluate the given expression and will return either the
expression's value (if the evaluation succeeds) or the symbol `failed`
(if the evaluation fails). The call to `ambeval` in the driver loop
shown below uses much more complicated continuation procedures, which
continue the loop and support the `try-again` request.
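For instance, once the `analyze-amb` clause given later in this section is installed, evaluating
::: scheme
(ambeval '(amb 1 2 3)
         the-global-environment
         (lambda (value fail) value)
         (lambda () 'failed))
:::
would be expected to return `1`, the first alternative; the failure continuation passed along with that value is what would produce the remaining alternatives on demand.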
Most of the complexity of the `amb` evaluator results from the mechanics
of passing the continuations around as the execution procedures call
each other. In going through the following code, you should compare each
of the execution procedures with the corresponding procedure for the
ordinary evaluator given in [Section 4.1.7](#Section 4.1.7).
#### Simple expressions {#simple-expressions .unnumbered}
The execution procedures for the simplest kinds of expressions are
essentially the same as those for the ordinary evaluator, except for the
need to manage the continuations. The execution procedures simply
succeed with the value of the expression, passing along the failure
continuation that was passed to them.
::: scheme
(define (analyze-self-evaluating exp) (lambda (env succeed fail)
(succeed exp fail))) (define (analyze-quoted exp) (let ((qval
(text-of-quotation exp))) (lambda (env succeed fail) (succeed qval
fail)))) (define (analyze-variable exp) (lambda (env succeed fail)
(succeed (lookup-variable-value exp env) fail))) (define (analyze-lambda
exp) (let ((vars (lambda-parameters exp)) (bproc (analyze-sequence
(lambda-body exp)))) (lambda (env succeed fail) (succeed (make-procedure
vars bproc env) fail))))
:::
Notice that looking up a variable always 'succeeds.' If
`lookup-variable-value` fails to find the variable, it signals an error,
as usual. Such a "failure" indicates a program bug---a reference to an
unbound variable; it is not an indication that we should try another
nondeterministic choice instead of the one that is currently being
tried.
#### Conditionals and sequences {#conditionals-and-sequences .unnumbered}
Conditionals are handled in much the same way as in the ordinary
evaluator. The execution procedure generated by `analyze-if` invokes the
predicate execution procedure `pproc` with a success continuation that
checks whether the predicate value is true and goes on to execute either
the consequent or the alternative. If the execution of `pproc` fails,
the original failure continuation for the `if` expression is called.
::: scheme
(define (analyze-if exp)
  (let ((pproc (analyze (if-predicate exp)))
        (cproc (analyze (if-consequent exp)))
        (aproc (analyze (if-alternative exp))))
    (lambda (env succeed fail)
      (pproc env
             ;; success continuation for evaluating the predicate
             ;; to obtain pred-value
             (lambda (pred-value fail2)
               (if (true? pred-value)
                   (cproc env succeed fail2)
                   (aproc env succeed fail2)))
             ;; failure continuation for evaluating the predicate
             fail))))
:::
Sequences are also handled in the same way as in the previous evaluator,
except for the machinations in the subprocedure `sequentially` that are
required for passing the continuations. Namely, to sequentially execute
`a` and then `b`, we call `a` with a success continuation that calls
`b`.
::: scheme
(define (analyze-sequence exps)
  (define (sequentially a b)
    (lambda (env succeed fail)
      (a env
         ;; success continuation for calling a
         (lambda (a-value fail2)
           (b env succeed fail2))
         ;; failure continuation for calling a
         fail)))
  (define (loop first-proc rest-procs)
    (if (null? rest-procs)
        first-proc
        (loop (sequentially first-proc (car rest-procs))
              (cdr rest-procs))))
  (let ((procs (map analyze exps)))
    (if (null? procs)
        (error "Empty sequence: ANALYZE"))
    (loop (car procs) (cdr procs))))
:::
#### Definitions and assignments {#definitions-and-assignments .unnumbered}
Definitions are another case where we must go to some trouble to manage
the continuations, because it is necessary to evaluate the
definition-value expression before actually defining the new variable.
To accomplish this, the definition-value execution procedure `vproc` is
called with the environment, a success continuation, and the failure
continuation. If the execution of `vproc` succeeds, obtaining a value
`val` for the defined variable, the variable is defined and the success
is propagated:
::: scheme
(define (analyze-definition exp) (let ((var (definition-variable exp))
(vproc (analyze (definition-value exp)))) (lambda (env succeed fail)
(vproc env (lambda (val fail2) (define-variable! var val env) (succeed
'ok fail2)) fail))))
:::
Assignments are more interesting. This is the first place where we
really use the continuations, rather than just passing them around. The
execution procedure for assignments starts out like the one for
definitions. It first attempts to obtain the new value to be assigned to
the variable. If this evaluation of `vproc` fails, the assignment fails.
If `vproc` succeeds, however, and we go on to make the assignment, we
must consider the possibility that this branch of the computation might
later fail, which will require us to backtrack out of the assignment.
Thus, we must arrange to undo the assignment as part of the backtracking
process.[^261]
This is accomplished by giving `vproc` a success continuation (marked
with the comment "\*1\*" below) that saves the old value of the variable
before assigning the new value to the variable and proceeding from the
assignment. The failure continuation that is passed along with the value
of the assignment (marked with the comment "\*2\*" below) restores the
old value of the variable before continuing the failure. That is, a
successful assignment provides a failure continuation that will
intercept a subsequent failure; whatever failure would otherwise have
called `fail2` calls this procedure instead, to undo the assignment
before actually calling `fail2`.
::: scheme
(define (analyze-assignment exp)
  (let ((var (assignment-variable exp))
        (vproc (analyze (assignment-value exp))))
    (lambda (env succeed fail)
      (vproc env
             (lambda (val fail2)        ; *1*
               (let ((old-value
                      (lookup-variable-value var env)))
                 (set-variable-value! var val env)
                 (succeed 'ok
                          (lambda ()    ; *2*
                            (set-variable-value! var old-value env)
                            (fail2)))))
             fail))))
:::
#### Procedure applications {#procedure-applications .unnumbered}
The execution procedure for applications contains no new ideas except
for the technical complexity of managing the continuations. This
complexity arises in `analyze-application`, due to the need to keep
track of the success and failure continuations as we evaluate the
operands. We use a procedure `get-args` to evaluate the list of
operands, rather than a simple `map` as in the ordinary evaluator.
::: scheme
(define (analyze-application exp) (let ((fproc (analyze (operator exp)))
(aprocs (map analyze (operands exp)))) (lambda (env succeed fail) (fproc
env (lambda (proc fail2) (get-args aprocs env (lambda (args fail3)
(execute-application proc args succeed fail3)) fail2)) fail))))
:::
In `get-args`, notice how `cdr`-ing down the list of `aproc` execution
procedures and `cons`ing up the resulting list of `args` is accomplished
by calling each `aproc` in the list with a success continuation that
recursively calls `get-args`. Each of these recursive calls to
`get-args` has a success continuation whose value is the `cons` of the
newly obtained argument onto the list of accumulated arguments:
::: scheme
(define (get-args aprocs env succeed fail)
  (if (null? aprocs)
      (succeed '() fail)
      ((car aprocs)
       env
       ;; success continuation for this aproc
       (lambda (arg fail2)
         (get-args (cdr aprocs)
                   env
                   ;; success continuation for
                   ;; recursive call to get-args
                   (lambda (args fail3)
                     (succeed (cons arg args) fail3))
                   fail2))
       fail)))
:::
The actual procedure application, which is performed by
`execute-application`, is accomplished in the same way as for the
ordinary evaluator, except for the need to manage the continuations.
::: scheme
(define (execute-application proc args succeed fail)
  (cond ((primitive-procedure? proc)
         (succeed (apply-primitive-procedure proc args)
                  fail))
        ((compound-procedure? proc)
         ((procedure-body proc)
          (extend-environment (procedure-parameters proc)
                              args
                              (procedure-environment proc))
          succeed
          fail))
        (else
         (error "Unknown procedure type: EXECUTE-APPLICATION"
                proc))))
:::
#### Evaluating `amb` expressions {#evaluating-amb-expressions .unnumbered}
The `amb` special form is the key element in the nondeterministic
language. Here we see the essence of the interpretation process and the
reason for keeping track of the continuations. The execution procedure
for `amb` defines a loop `try-next` that cycles through the execution
procedures for all the possible values of the `amb` expression. Each
execution procedure is called with a failure continuation that will try
the next one. When there are no more alternatives to try, the entire
`amb` expression fails.
::: scheme
(define (analyze-amb exp) (let ((cprocs (map analyze (amb-choices
exp)))) (lambda (env succeed fail) (define (try-next choices) (if (null?
choices) (fail) ((car choices) env succeed (lambda () (try-next (cdr
choices)))))) (try-next cprocs))))
:::
#### Driver loop {#driver-loop-1 .unnumbered}
The driver loop for the `amb` evaluator is complex, due to the mechanism
that permits the user to try again in evaluating an expression. The
driver uses a procedure called `internal-loop`, which takes as argument
a procedure `try-again`. The intent is that calling `try-again` should
go on to the next untried alternative in the nondeterministic
evaluation. `internal-loop` either calls `try-again` in response to the
user typing `try-again` at the driver loop, or else starts a new
evaluation by calling `ambeval`.
The failure continuation for this call to `ambeval` informs the user
that there are no more values and re-invokes the driver loop.
The success continuation for the call to `ambeval` is more subtle. We
print the obtained value and then invoke the internal loop again with a
`try-again` procedure that will be able to try the next alternative.
This `next-alternative` procedure is the second argument that was passed
to the success continuation. Ordinarily, we think of this second
argument as a failure continuation to be used if the current evaluation
branch later fails. In this case, however, we have completed a
successful evaluation, so we can invoke the "failure" alternative branch
in order to search for additional successful evaluations.
::: scheme
(define input-prompt ";;; Amb-Eval input:")
(define output-prompt ";;; Amb-Eval value:")
(define (driver-loop)
  (define (internal-loop try-again)
    (prompt-for-input input-prompt)
    (let ((input (read)))
      (if (eq? input 'try-again)
          (try-again)
          (begin
            (newline)
            (display ";;; Starting a new problem ")
            (ambeval input
                     the-global-environment
                     ;; ambeval success
                     (lambda (val next-alternative)
                       (announce-output output-prompt)
                       (user-print val)
                       (internal-loop next-alternative))
                     ;; ambeval failure
                     (lambda ()
                       (announce-output
                        ";;; There are no more values of")
                       (user-print input)
                       (driver-loop)))))))
  (internal-loop
   (lambda ()
     (newline)
     (display ";;; There is no current problem")
     (driver-loop))))
:::
The initial call to `internal-loop` uses a `try-again` procedure that
complains that there is no current problem and restarts the driver loop.
This is the behavior that occurs if the user types `try-again` when
there is no evaluation in progress.
> **[]{#Exercise 4.50 label="Exercise 4.50"}Exercise 4.50:** Implement a
> new special form `ramb` that is like `amb` except that it searches
> alternatives in a random order, rather than from left to right. Show
> how this can help with Alyssa's problem in [Exercise
> 4.49](#Exercise 4.49).
> **[]{#Exercise 4.51 label="Exercise 4.51"}Exercise 4.51:** Implement a
> new kind of assignment called `permanent-set!` that is not undone upon
> failure. For example, we can choose two distinct elements from a list
> and count the number of trials required to make a successful choice as
> follows:
>
> ::: scheme
> (define count 0) (let ((x (an-element-of '(a b c))) (y (an-element-of
> '(a b c)))) (permanent-set! count (+ count 1)) (require (not (eq? x
> y))) (list x y count)) *;;; Starting a new problem* *;;; Amb-Eval
> value:* *(a b 2)* *;;; Amb-Eval input:* try-again *;;; Amb-Eval
> value:* *(a c 3)*
> :::
>
> What values would have been displayed if we had used `set!` here
> rather than `permanent-set!`?
> **[]{#Exercise 4.52 label="Exercise 4.52"}Exercise 4.52:** Implement a
> new construct called `if-fail` that permits the user to catch the
> failure of an expression. `if-fail` takes two expressions. It
> evaluates the first expression as usual and returns as usual if the
> evaluation succeeds. If the evaluation fails, however, the value of
> the second expression is returned, as in the following example:
>
> ::: scheme
> *;;; Amb-Eval input:* (if-fail (let ((x (an-element-of '(1 3 5))))
> (require (even? x)) x) 'all-odd) *;;; Starting a new problem* *;;;
> Amb-Eval value:* *all-odd*
>
> *;;; Amb-Eval input:* (if-fail (let ((x (an-element-of '(1 3 5 8))))
> (require (even? x)) x) 'all-odd) *;;; Starting a new problem* *;;;
> Amb-Eval value:* *8*
> :::
> **[]{#Exercise 4.53 label="Exercise 4.53"}Exercise 4.53:** With
> `permanent-set!` as described in [Exercise 4.51](#Exercise 4.51) and
> `if-fail` as in [Exercise 4.52](#Exercise 4.52), what will be the
> result of evaluating
>
> ::: scheme
> (let ((pairs '())) (if-fail (let ((p (prime-sum-pair '(1 3 5 8) '(20
> 35 110)))) (permanent-set! pairs (cons p pairs)) (amb)) pairs))
> :::
> **[]{#Exercise 4.54 label="Exercise 4.54"}Exercise 4.54:** If we had
> not realized that `require` could be implemented as an ordinary
> procedure that uses `amb`, to be defined by the user as part of a
> nondeterministic program, we would have had to implement it as a
> special form. This would require syntax procedures
>
> ::: scheme
> (define (require? exp) (tagged-list? exp 'require)) (define
> (require-predicate exp) (cadr exp))
> :::
>
> and a new clause in the dispatch in `analyze`
>
> ::: scheme
> ((require? exp) (analyze-require exp))
> :::
>
> as well as the procedure `analyze-require` that handles `require`
> expressions. Complete the following definition of `analyze-require`.
>
> ::: scheme
> (define (analyze-require exp)
>   (let ((pproc (analyze (require-predicate exp))))
>     (lambda (env succeed fail)
>       (pproc env
>              (lambda (pred-value fail2)
>                (if ⟨??⟩
>                    ⟨??⟩
>                    (succeed 'ok fail2)))
>              fail))))
> :::
## Logic Programming {#Section 4.4}
In [Chapter 1](#Chapter 1) we stressed that computer science deals with
imperative (how to) knowledge, whereas mathematics deals with
declarative (what is) knowledge. Indeed, programming languages require
that the programmer express knowledge in a form that indicates the
step-by-step methods for solving particular problems. On the other hand,
high-level languages provide, as part of the language implementation, a
substantial amount of methodological knowledge that frees the user from
concern with numerous details of how a specified computation will
progress.
Most programming languages, including Lisp, are organized around
computing the values of mathematical functions. Expression-oriented
languages (such as Lisp, Fortran, and Algol) capitalize on the "pun"
that an expression that describes the value of a function may also be
interpreted as a means of computing that value. Because of this, most
programming languages are strongly biased toward unidirectional
computations (computations with well-defined inputs and outputs). There
are, however, radically different programming languages that relax this
bias. We saw one such example in [Section 3.3.5](#Section 3.3.5), where
the objects of computation were arithmetic constraints. In a constraint
system the direction and the order of computation are not so well
specified; in carrying out a computation the system must therefore
provide more detailed "how to" knowledge than would be the case with an
ordinary arithmetic computation. This does not mean, however, that the
user is released altogether from the responsibility of providing
imperative knowledge. There are many constraint networks that implement
the same set of constraints, and the user must choose from the set of
mathematically equivalent networks a suitable network to specify a
particular computation.
The nondeterministic program evaluator of [Section 4.3](#Section 4.3)
also moves away from the view that programming is about constructing
algorithms for computing unidirectional functions. In a nondeterministic
language, expressions can have more than one value, and, as a result,
the computation is dealing with relations rather than with single-valued
functions. Logic programming extends this idea by combining a relational
vision of programming with a powerful kind of symbolic pattern matching
called *unification*.[^262]
This approach, when it works, can be a very powerful way to write
programs. Part of the power comes from the fact that a single "what is"
fact can be used to solve a number of different problems that would have
different "how to" components. As an example, consider the `append`
operation, which takes two lists as arguments and combines their
elements to form a single list. In a procedural language such as Lisp,
we could define `append` in terms of the basic list constructor `cons`,
as we did in [Section 2.2.1](#Section 2.2.1):
::: scheme
(define (append x y) (if (null? x) y (cons (car x) (append (cdr x) y))))
:::
This procedure can be regarded as a translation into Lisp of the
following two rules, the first of which covers the case where the first
list is empty and the second of which handles the case of a nonempty
list, which is a `cons` of two parts:
- For any list `y`, the empty list and `y` `append` to form `y`.
- For any `u`, `v`, `y`, and `z`, `(cons u v)` and `y` `append` to
form `(cons u z)` if `v` and `y` `append` to form `z`.[^263]
Using the `append` procedure, we can answer questions such as
> Find the `append` of `(a b)` and `(c d)`.
But the same two rules are also sufficient for answering the following
sorts of questions, which the procedure can't answer:
> Find a list `y` that `append`s with `(a b)` to produce `(a b c d)`.
>
> Find all `x` and `y` that `append` to form `(a b c d)`.
In a logic programming language, the programmer writes an `append`
"procedure" by stating the two rules about `append` given above. "How
to" knowledge is provided automatically by the interpreter to allow this
single pair of rules to be used to answer all three types of questions
about `append`.[^264]
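For concreteness, in the query language implemented later in this chapter these two rules can be written down almost directly (a preview; the `rule` syntax is introduced in [Section 4.4.1](#Section 4.4.1)):
::: scheme
(rule (append-to-form () ?y ?y))
(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
:::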
Contemporary logic programming languages (including the one we implement
here) have substantial deficiencies, in that their general "how to"
methods can lead them into spurious infinite loops or other undesirable
behavior. Logic programming is an active field of research in computer
science.[^265]
Earlier in this chapter we explored the technology of implementing
interpreters and described the elements that are essential to an
interpreter for a Lisp-like language (indeed, to an interpreter for any
conventional language). Now we will apply these ideas to discuss an
interpreter for a logic programming language. We call this language the
*query language*, because it is very useful for retrieving information
from data bases by formulating *queries*, or questions, expressed in the
language. Even though the query language is very different from Lisp, we
will find it convenient to describe the language in terms of the same
general framework we have been using all along: as a collection of
primitive elements, together with means of combination that enable us to
combine simple elements to create more complex elements and means of
abstraction that enable us to regard complex elements as single
conceptual units. An interpreter for a logic programming language is
considerably more complex than an interpreter for a language like Lisp.
Nevertheless, we will see that our query-language interpreter contains
many of the same elements found in the interpreter of [Section
4.1](#Section 4.1). In particular, there will be an "eval" part that
classifies expressions according to type and an "apply" part that
implements the language's abstraction mechanism (procedures in the case
of Lisp, and *rules* in the case of logic programming). Also, a central
role is played in the implementation by a frame data structure, which
determines the correspondence between symbols and their associated
values. One additional interesting aspect of our query-language
implementation is that we make substantial use of streams, which were
introduced in [Chapter 3](#Chapter 3).
### Deductive Information Retrieval {#Section 4.4.1}
Logic programming excels in providing interfaces to data bases for
information retrieval. The query language we shall implement in this
chapter is designed to be used in this way.
In order to illustrate what the query system does, we will show how it
can be used to manage the data base of personnel records for Microshaft,
a thriving high-technology company in the Boston area. The language
provides pattern-directed access to personnel information and can also
take advantage of general rules in order to make logical deductions.
#### A sample data base {#a-sample-data-base .unnumbered}
The personnel data base for Microshaft contains *assertions* about
company personnel. Here is the information about Ben Bitdiddle, the
resident computer wizard:
::: scheme
(address (Bitdiddle Ben) (Slumerville (Ridge Road) 10)) (job (Bitdiddle
Ben) (computer wizard)) (salary (Bitdiddle Ben) 60000)
:::
Each assertion is a list (in this case a triple) whose elements can
themselves be lists.
As resident wizard, Ben is in charge of the company's computer division,
and he supervises two programmers and one technician. Here is the
information about them:
::: scheme
(address (Hacker Alyssa P) (Cambridge (Mass Ave) 78)) (job (Hacker
Alyssa P) (computer programmer)) (salary (Hacker Alyssa P) 40000)
(supervisor (Hacker Alyssa P) (Bitdiddle Ben))
(address (Fect Cy D) (Cambridge (Ames Street) 3)) (job (Fect Cy D)
(computer programmer)) (salary (Fect Cy D) 35000) (supervisor (Fect Cy
D) (Bitdiddle Ben))
(address (Tweakit Lem E) (Boston (Bay State Road) 22)) (job (Tweakit Lem
E) (computer technician)) (salary (Tweakit Lem E) 25000) (supervisor
(Tweakit Lem E) (Bitdiddle Ben))
:::
There is also a programmer trainee, who is supervised by Alyssa:
::: scheme
(address (Reasoner Louis) (Slumerville (Pine Tree Road) 80)) (job
(Reasoner Louis) (computer programmer trainee)) (salary (Reasoner Louis)
30000) (supervisor (Reasoner Louis) (Hacker Alyssa P))
:::
All of these people are in the computer division, as indicated by the
word `computer` as the first item in their job descriptions.
Ben is a high-level employee. His supervisor is the company's big wheel
himself:
::: scheme
(supervisor (Bitdiddle Ben) (Warbucks Oliver)) (address (Warbucks
Oliver) (Swellesley (Top Heap Road))) (job (Warbucks Oliver)
(administration big wheel)) (salary (Warbucks Oliver) 150000)
:::
Besides the computer division supervised by Ben, the company has an
accounting division, consisting of a chief accountant and his assistant:
::: scheme
(address (Scrooge Eben) (Weston (Shady Lane) 10)) (job (Scrooge Eben)
(accounting chief accountant)) (salary (Scrooge Eben) 75000) (supervisor
(Scrooge Eben) (Warbucks Oliver))
(address (Cratchet Robert) (Allston (N Harvard Street) 16)) (job
(Cratchet Robert) (accounting scrivener)) (salary (Cratchet Robert)
18000) (supervisor (Cratchet Robert) (Scrooge Eben))
:::
There is also a secretary for the big wheel:
::: scheme
(address (Aull DeWitt) (Slumerville (Onion Square) 5)) (job (Aull
DeWitt) (administration secretary)) (salary (Aull DeWitt) 25000)
(supervisor (Aull DeWitt) (Warbucks Oliver))
:::
The data base also contains assertions about which kinds of jobs can be
done by people holding other kinds of jobs. For instance, a computer
wizard can do the jobs of both a computer programmer and a computer
technician:
::: scheme
(can-do-job (computer wizard) (computer programmer)) (can-do-job
(computer wizard) (computer technician))
:::
A computer programmer could fill in for a trainee:
::: scheme
(can-do-job (computer programmer) (computer programmer trainee))
:::
Also, as is well known,
::: scheme
(can-do-job (administration secretary) (administration big wheel))
:::
#### Simple queries {#simple-queries .unnumbered}
The query language allows users to retrieve information from the data
base by posing queries in response to the system's prompt. For example,
to find all computer programmers one can say
::: scheme
*;;; Query input:* (job ?x (computer programmer))
:::
The system will respond with the following items:
::: scheme
*;;; Query results:* (job (Hacker Alyssa P) (computer programmer))
(job (Fect Cy D) (computer programmer))
:::
The input query specifies that we are looking for entries in the data
base that match a certain *pattern*. In this example, the pattern
specifies entries consisting of three items, of which the first is the
literal symbol `job`, the second can be anything, and the third is the
literal list `(computer programmer)`. The "anything" that can be the
second item in the matching list is specified by a *pattern variable*,
`?x`. The general form of a pattern variable is a symbol, taken to be
the name of the variable, preceded by a question mark. We will see below
why it is useful to specify names for pattern variables rather than just
putting `?` into patterns to represent "anything." The system responds
to a simple query by showing all entries in the data base that match the
specified pattern.
A pattern can have more than one variable. For example, the query
::: scheme
(address ?x ?y)
:::
will list all the employees' addresses.
A pattern can have no variables, in which case the query simply
determines whether that pattern is an entry in the data base. If so,
there will be one match; if not, there will be no matches.
The same pattern variable can appear more than once in a query,
specifying that the same "anything" must appear in each position. This
is why variables have names. For example,
::: scheme
(supervisor ?x ?x)
:::
finds all people who supervise themselves (though there are no such
assertions in our sample data base).
The query
::: scheme
(job ?x (computer ?type))
:::
matches all job entries whose third item is a two-element list whose
first item is `computer`:
::: scheme
(job (Bitdiddle Ben) (computer wizard))
(job (Hacker Alyssa P) (computer programmer))
(job (Fect Cy D) (computer programmer))
(job (Tweakit Lem E) (computer technician))
:::
This same pattern does *not* match
::: scheme
(job (Reasoner Louis) (computer programmer trainee))
:::
because the third item in the entry is a list of three elements, and the
pattern's third item specifies that there should be two elements. If we
wanted to change the pattern so that the third item could be any list
beginning with `computer`, we could specify[^266]
::: scheme
(job ?x (computer . ?type))
:::
For example,
::: scheme
(computer . ?type)
:::
matches the data
::: scheme
(computer programmer trainee)
:::
with `?type` as the list `(programmer trainee)`. It also matches the
data
::: scheme
(computer programmer)
:::
with `?type` as the list `(programmer)`, and matches the data
::: scheme
(computer)
:::
with `?type` as the empty list `()`.
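With this more permissive pattern, the query `(job ?x (computer . ?type))` would therefore also report Louis Reasoner's entry,

::: scheme
(job (Reasoner Louis) (computer programmer trainee))
:::

in addition to the four matches listed above.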
We can describe the query language's processing of simple queries as
follows:
- The system finds all assignments to variables in the query pattern
that *satisfy* the pattern---that is, all sets of values for the
variables such that if the pattern variables are *instantiated with*
(replaced by) the values, the result is in the data base.
- The system responds to the query by listing all instantiations of
the query pattern with the variable assignments that satisfy it.
Note that if the pattern has no variables, the query reduces to a
determination of whether that pattern is in the data base. If so, the
empty assignment, which assigns no values to variables, satisfies that
pattern for that data base.
> **[]{#Exercise 4.55 label="Exercise 4.55"}Exercise 4.55:** Give simple
> queries that retrieve the following information from the data base:
>
> a. all people supervised by Ben Bitdiddle;
>
> b. the names and jobs of all people in the accounting division;
>
> c. the names and addresses of all people who live in Slumerville.
#### Compound queries {#compound-queries .unnumbered}
Simple queries form the primitive operations of the query language. In
order to form compound operations, the query language provides means of
combination. One thing that makes the query language a logic programming
language is that the means of combination mirror the means of
combination used in forming logical expressions: `and`, `or`, and `not`.
(Here `and`, `or`, and `not` are not the Lisp primitives, but rather
operations built into the query language.)
We can use `and` as follows to find the addresses of all the computer
programmers:
::: scheme
(and (job ?person (computer programmer)) (address ?person ?where))
:::
The resulting output is
::: scheme
(and (job (Hacker Alyssa P) (computer programmer))
     (address (Hacker Alyssa P) (Cambridge (Mass Ave) 78)))
(and (job (Fect Cy D) (computer programmer))
     (address (Fect Cy D) (Cambridge (Ames Street) 3)))
:::
In general,
::: scheme
(and $\langle$*query*$_1\rangle$ $\langle$*query*$_2\rangle$ $\dots$ $\langle$*query*$_n\rangle$)
:::
is satisfied by all sets of values for the pattern variables that
simultaneously satisfy $\langle$*query*$_1\rangle$ $\dots$
$\langle$*query*$_n\rangle$.
As for simple queries, the system processes a compound query by finding
all assignments to the pattern variables that satisfy the query, then
displaying instantiations of the query with those values.
Another means of constructing compound queries is through `or`. For
example,
::: scheme
(or (supervisor ?x (Bitdiddle Ben)) (supervisor ?x (Hacker Alyssa P)))
:::
will find all employees supervised by Ben Bitdiddle or Alyssa P. Hacker:
::: scheme
(or (supervisor (Hacker Alyssa P) (Bitdiddle Ben))
    (supervisor (Hacker Alyssa P) (Hacker Alyssa P)))
(or (supervisor (Fect Cy D) (Bitdiddle Ben))
    (supervisor (Fect Cy D) (Hacker Alyssa P)))
(or (supervisor (Tweakit Lem E) (Bitdiddle Ben))
    (supervisor (Tweakit Lem E) (Hacker Alyssa P)))
(or (supervisor (Reasoner Louis) (Bitdiddle Ben))
    (supervisor (Reasoner Louis) (Hacker Alyssa P)))
:::
In general,
::: scheme
(or $\langle$*query*$_1\rangle$ $\langle$*query*$_2\rangle$ $\dots$ $\langle$*query*$_n\rangle$)
:::
is satisfied by all sets of values for the pattern variables that
satisfy at least one of $\langle$*query*$_1\rangle$ $\dots$
$\langle$*query*$_n\rangle$.
Compound queries can also be formed with `not`. For example,
::: scheme
(and (supervisor ?x (Bitdiddle Ben))
     (not (job ?x (computer programmer))))
:::
finds all people supervised by Ben Bitdiddle who are not computer
programmers. In general,
::: scheme
(not $\langle$*query*$_1\rangle$)
:::
is satisfied by all assignments to the pattern variables that do not
satisfy $\langle$*query*$_1\rangle$.[^267]
The final combining form is called `lisp-value`. When `lisp-value` is
the first element of a pattern, it specifies that the next element is a
Lisp predicate to be applied to the rest of the (instantiated) elements
as arguments. In general,
::: scheme
(lisp-value $\langle$*predicate*$\rangle$ $\langle$*arg*$_1\rangle$ $\dots$ $\langle$*arg*$_n\rangle$)
:::
will be satisfied by assignments to the pattern variables for which the
$\langle$*predicate*$\rangle$ applied to the instantiated
$\langle$*arg*$_1\rangle$ $\dots$ $\langle$*arg*$_n\rangle$ is true. For
example, to find all people whose salary is greater than \$30,000 we
could write[^268]
::: scheme
(and (salary ?person ?amount) (lisp-value > ?amount 30000))
:::
> **[]{#Exercise 4.56 label="Exercise 4.56"}Exercise 4.56:** Formulate
> compound queries that retrieve the following information:
>
> a. the names of all people who are supervised by Ben Bitdiddle,
> together with their addresses;
>
> b. all people whose salary is less than Ben Bitdiddle's, together
> with their salary and Ben Bitdiddle's salary;
>
> c. all people who are supervised by someone who is not in the
> computer division, together with the supervisor's name and job.
#### Rules {#rules .unnumbered}
In addition to primitive queries and compound queries, the query
language provides means for abstracting queries. These are given by
*rules*. The rule
::: scheme
(rule (lives-near ?person-1 ?person-2)
      (and (address ?person-1 (?town . ?rest-1))
           (address ?person-2 (?town . ?rest-2))
           (not (same ?person-1 ?person-2))))
:::
specifies that two people live near each other if they live in the same
town. The final `not` clause prevents the rule from saying that all
people live near themselves. The `same` relation is defined by a very
simple rule:[^269]
::: scheme
(rule (same ?x ?x))
:::
The following rule declares that a person is a "wheel" in an
organization if he supervises someone who is in turn a supervisor:
::: scheme
(rule (wheel ?person)
      (and (supervisor ?middle-manager ?person)
           (supervisor ?x ?middle-manager)))
:::
The general form of a rule is
::: scheme
(rule $\langle$*conclusion*$\rangle$ $\langle$*body*$\rangle$)
:::
where $\langle$*conclusion*$\rangle$ is a pattern and
$\langle$*body*$\rangle$ is any query.[^270] We can think of a rule as
representing a large (even infinite) set of assertions, namely all
instantiations of the rule conclusion with variable assignments that
satisfy the rule body. When we described simple queries (patterns), we
said that an assignment to variables satisfies a pattern if the
instantiated pattern is in the data base. But the pattern needn't be
explicitly in the data base as an assertion. It can be an implicit
assertion implied by a rule. For example, the query
::: scheme
(lives-near ?x (Bitdiddle Ben))
:::
results in
::: scheme
(lives-near (Reasoner Louis) (Bitdiddle Ben))
(lives-near (Aull DeWitt) (Bitdiddle Ben))
:::
To find all computer programmers who live near Ben Bitdiddle, we can ask
::: scheme
(and (job ?x (computer programmer)) (lives-near ?x (Bitdiddle Ben)))
:::
As in the case of compound procedures, rules can be used as parts of
other rules (as we saw with the `lives-near` rule above) or even be
defined recursively. For instance, the rule
::: scheme
(rule (outranked-by ?staff-person ?boss)
      (or (supervisor ?staff-person ?boss)
          (and (supervisor ?staff-person ?middle-manager)
               (outranked-by ?middle-manager ?boss))))
:::
says that a staff person is outranked by a boss in the organization if
the boss is the person's supervisor or (recursively) if the person's
supervisor is outranked by the boss.
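For instance, with the sample data base the query

::: scheme
(outranked-by (Cratchet Robert) ?who)
:::

should yield both `(outranked-by (Cratchet Robert) (Scrooge Eben))` and `(outranked-by (Cratchet Robert) (Warbucks Oliver))`, since Robert Cratchet's supervisor is Eben Scrooge, who is in turn supervised by Oliver Warbucks.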
> **[]{#Exercise 4.57 label="Exercise 4.57"}Exercise 4.57:** Define a
> rule that says that person 1 can replace person 2 if either person 1
> does the same job as person 2 or someone who does person 1's job can
> also do person 2's job, and if person 1 and person 2 are not the same
> person. Using your rule, give queries that find the following:
>
> a. all people who can replace Cy D. Fect;
>
> b. all people who can replace someone who is being paid more than
> they are, together with the two salaries.
> **[]{#Exercise 4.58 label="Exercise 4.58"}Exercise 4.58:** Define a
> rule that says that a person is a "big shot" in a division if the
> person works in the division but does not have a supervisor who works
> in the division.
> **[]{#Exercise 4.59 label="Exercise 4.59"}Exercise 4.59:** Ben
> Bitdiddle has missed one meeting too many. Fearing that his habit of
> forgetting meetings could cost him his job, Ben decides to do
> something about it. He adds all the weekly meetings of the firm to the
> Microshaft data base by asserting the following:
>
> ::: scheme
> (meeting accounting (Monday 9am))
> (meeting administration (Monday 10am))
> (meeting computer (Wednesday 3pm))
> (meeting administration (Friday 1pm))
> :::
>
> Each of the above assertions is for a meeting of an entire division.
> Ben also adds an entry for the company-wide meeting that spans all the
> divisions. All of the company's employees attend this meeting.
>
> ::: scheme
> (meeting whole-company (Wednesday 4pm))
> :::
>
> a. On Friday morning, Ben wants to query the data base for all the
> meetings that occur that day. What query should he use?
>
> b. Alyssa P. Hacker is unimpressed. She thinks it would be much more
> useful to be able to ask for her meetings by specifying her name.
> So she designs a rule that says that a person's meetings include
> all `whole-company` meetings plus all meetings of that person's
> division. Fill in the body of Alyssa's rule.
>
> ::: scheme
> (rule (meeting-time ?person ?day-and-time)
>       $\langle$*rule-body*$\rangle$)
> :::
>
> c. Alyssa arrives at work on Wednesday morning and wonders what
> meetings she has to attend that day. Having defined the above
> rule, what query should she make to find this out?
> **[]{#Exercise 4.60 label="Exercise 4.60"}Exercise 4.60:** By giving
> the query
>
> ::: scheme
> (lives-near ?person (Hacker Alyssa P))
> :::
>
> Alyssa P. Hacker is able to find people who live near her, with whom
> she can ride to work. On the other hand, when she tries to find all
> pairs of people who live near each other by querying
>
> ::: scheme
> (lives-near ?person-1 ?person-2)
> :::
>
> she notices that each pair of people who live near each other is
> listed twice; for example,
>
> ::: scheme
> (lives-near (Hacker Alyssa P) (Fect Cy D))
> (lives-near (Fect Cy D) (Hacker Alyssa P))
> :::
>
> Why does this happen? Is there a way to find a list of people who live
> near each other, in which each pair appears only once? Explain.
#### Logic as programs {#logic-as-programs .unnumbered}
We can regard a rule as a kind of logical implication: *If* an
assignment of values to pattern variables satisfies the body, *then* it
satisfies the conclusion. Consequently, we can regard the query language
as having the ability to perform *logical deductions* based upon the
rules. As an example, consider the `append` operation described at the
beginning of [Section 4.4](#Section 4.4). As we said, `append` can be
characterized by the following two rules:
- For any list `y`, the empty list and `y` `append` to form `y`.
- For any `u`, `v`, `y`, and `z`, `(cons u v)` and `y` `append` to
form `(cons u z)` if `v` and `y` `append` to form `z`.
To express this in our query language, we define two rules for a
relation
::: scheme
(append-to-form x y z)
:::
which we can interpret to mean "`x` and `y` `append` to form `z`":
::: scheme
(rule (append-to-form () ?y ?y))

(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
:::
The first rule has no body, which means that the conclusion holds for
any value of `?y`. Note how the second rule makes use of dotted-tail
notation to name the `car` and `cdr` of a list.
Given these two rules, we can formulate queries that compute the
`append` of two lists:
::: scheme
*;;; Query input:*
(append-to-form (a b) (c d) ?z)
*;;; Query results:*
(append-to-form (a b) (c d) (a b c d))
:::
What is more striking, we can use the same rules to ask the question
"Which list, when `append`ed to `(a b)`, yields `(a b c d)`?" This is
done as follows:
::: scheme
*;;; Query input:*
(append-to-form (a b) ?y (a b c d))
*;;; Query results:*
(append-to-form (a b) (c d) (a b c d))
:::
We can also ask for all pairs of lists that `append` to form
`(a b c d)`:
::: scheme
*;;; Query input:*
(append-to-form ?x ?y (a b c d))
*;;; Query results:*
(append-to-form () (a b c d) (a b c d))
(append-to-form (a) (b c d) (a b c d))
(append-to-form (a b) (c d) (a b c d))
(append-to-form (a b c) (d) (a b c d))
(append-to-form (a b c d) () (a b c d))
:::
The query system may seem to exhibit quite a bit of intelligence in
using the rules to deduce the answers to the queries above. Actually, as
we will see in the next section, the system is following a
well-determined algorithm in unraveling the rules. Unfortunately,
although the system works impressively in the `append` case, the general
methods may break down in more complex cases, as we will see in [Section
4.4.3](#Section 4.4.3).
> **[]{#Exercise 4.61 label="Exercise 4.61"}Exercise 4.61:** The
> following rules implement a `next-to` relation that finds adjacent
> elements of a list:
>
> ::: scheme
> (rule (?x next-to ?y in (?x ?y . ?u)))
> (rule (?x next-to ?y in (?v . ?z))
>       (?x next-to ?y in ?z))
> :::
>
> What will the response be to the following queries?
>
> ::: scheme
> (?x next-to ?y in (1 (2 3) 4))
> (?x next-to 1 in (2 1 3 1))
> :::
> **[]{#Exercise 4.62 label="Exercise 4.62"}Exercise 4.62:** Define
> rules to implement the `last-pair` operation of [Exercise
> 2.17](#Exercise 2.17), which returns a list containing the last
> element of a nonempty list. Check your rules on queries such as
> `(last-pair (3) ?x)`, `(last-pair (1 2 3) ?x)` and
> `(last-pair (2 ?x) (3))`. Do your rules work correctly on queries such
> as `(last-pair ?x (3))`?
> **[]{#Exercise 4.63 label="Exercise 4.63"}Exercise 4.63:** The
> following data base (see Genesis 4) traces the genealogy of the
> descendants of Ada back to Adam, by way of Cain:
>
> ::: scheme
> (son Adam Cain)
> (son Cain Enoch)
> (son Enoch Irad)
> (son Irad Mehujael)
> (son Mehujael Methushael)
> (son Methushael Lamech)
> (wife Lamech Ada)
> (son Ada Jabal)
> (son Ada Jubal)
> :::
>
> Formulate rules such as "If $S$ is the son of $f$, and $f$ is the son
> of $G$, then $S$ is the grandson of $G$" and "If $W$ is the wife of
> $M$, and $S$ is the son of $W$, then $S$ is the son of $M$" (which was
> supposedly more true in biblical times than today) that will enable
> the query system to find the grandson of Cain; the sons of Lamech; the
> grandsons of Methushael. (See [Exercise 4.69](#Exercise 4.69) for some
> rules to deduce more complicated relationships.)
### How the Query System Works {#Section 4.4.2}
In [Section 4.4.4](#Section 4.4.4) we will present an implementation of
the query interpreter as a collection of procedures. In this section we
give an overview that explains the general structure of the system
independent of low-level implementation details. After describing the
implementation of the interpreter, we will be in a position to
understand some of its limitations and some of the subtle ways in which
the query language's logical operations differ from the operations of
mathematical logic.
It should be apparent that the query evaluator must perform some kind of
search in order to match queries against facts and rules in the data
base. One way to do this would be to implement the query system as a
nondeterministic program, using the `amb` evaluator of [Section
4.3](#Section 4.3) (see [Exercise 4.78](#Exercise 4.78)). Another
possibility is to manage the search with the aid of streams. Our
implementation follows this second approach.
The query system is organized around two central operations called
*pattern matching* and *unification*. We first describe pattern matching
and explain how this operation, together with the organization of
information in terms of streams of frames, enables us to implement both
simple and compound queries. We next discuss unification, a
generalization of pattern matching needed to implement rules. Finally,
we show how the entire query interpreter fits together through a
procedure that classifies expressions in a manner analogous to the way
`eval` classifies expressions for the interpreter described in [Section
4.1](#Section 4.1).
#### Pattern matching {#pattern-matching .unnumbered}
A *pattern matcher* is a program that tests whether some datum fits a
specified pattern. For example, the data list `((a b) c (a b))` matches
the pattern `(?x c ?x)` with the pattern variable `?x` bound to `(a b)`.
The same data list matches the pattern `(?x ?y ?z)` with `?x` and `?z`
both bound to `(a b)` and `?y` bound to `c`. It also matches the pattern
`((?x ?y) c (?x ?y))` with `?x` bound to `a` and `?y` bound to `b`.
However, it does not match the pattern `(?x a ?y)`, since that pattern
specifies a list whose second element is the symbol `a`.
The pattern matcher used by the query system takes as inputs a pattern,
a datum, and a *frame* that specifies bindings for various pattern
variables. It checks whether the datum matches the pattern in a way that
is consistent with the bindings already in the frame. If so, it returns
the given frame augmented by any bindings that may have been determined
by the match. Otherwise, it indicates that the match has failed.
For example, using the pattern `(?x ?y ?x)` to match `(a b a)` given an
empty frame will return a frame specifying that `?x` is bound to `a` and
`?y` is bound to `b`. Trying the match with the same pattern, the same
datum, and a frame specifying that `?y` is bound to `a` will fail.
Trying the match with the same pattern, the same datum, and a frame in
which `?y` is bound to `b` and `?x` is unbound will return the given
frame augmented by a binding of `?x` to `a`.
The pattern matcher is all the mechanism that is needed to process
simple queries that don't involve rules. For instance, to process the
query
::: scheme
(job ?x (computer programmer))
:::
we scan through all assertions in the data base and select those that
match the pattern with respect to an initially empty frame. For each
match we find, we use the frame returned by the match to instantiate the
pattern with a value for `?x`.
#### Streams of frames {#streams-of-frames .unnumbered}
The testing of patterns against frames is organized through the use of
streams. Given a single frame, the matching process runs through the
data-base entries one by one. For each data-base entry, the matcher
generates either a special symbol indicating that the match has failed
or an extension to the frame. The results for all the data-base entries
are collected into a stream, which is passed through a filter to weed
out the failures. The result is a stream of all the frames that extend
the given frame via a match to some assertion in the data base.[^271]
In our system, a query takes an input stream of frames and performs the
above matching operation for every frame in the stream, as indicated in
[Figure 4.4](#Figure 4.4). That is, for each frame in the input stream,
the query generates a new stream consisting of all extensions to that
frame by matches to assertions in the data base. All these streams are
then combined to form one huge stream, which contains all possible
extensions of every frame in the input stream. This stream is the output
of the query.
To answer a simple query, we use the query with an input stream
consisting of a single empty frame. The resulting output stream contains
all extensions to the empty frame (that is, all answers to our query).
This stream of frames is then used to generate a stream of copies of the
original query pattern with the variables instantiated by the values in
each frame, and this is the stream that is finally printed.
[]{#Figure 4.4 label="Figure 4.4"}
![image](fig/chap4/Fig4.4.pdf){width="102mm"}
**Figure 4.4:** A query processes a stream of frames.
#### Compound queries {#compound-queries-1 .unnumbered}
The real elegance of the stream-of-frames implementation is evident when
we deal with compound queries. The processing of compound queries makes
use of the ability of our matcher to demand that a match be consistent
with a specified frame. For example, to handle the `and` of two queries,
such as
::: scheme
(and (can-do-job ?x (computer programmer trainee)) (job ?person ?x))
:::
(informally, "Find all people who can do the job of a computer
programmer trainee"), we first find all entries that match the pattern
::: scheme
(can-do-job ?x (computer programmer trainee))
:::
[]{#Figure 4.5 label="Figure 4.5"}
![image](fig/chap4/Fig4.5.pdf){width="93mm"}
> **Figure 4.5:** The `and` combination of two queries is produced by
> operating on the stream of frames in series.
This produces a stream of frames, each of which contains a binding for
`?x`. Then for each frame in the stream we find all entries that match
::: scheme
(job ?person ?x)
:::
in a way that is consistent with the given binding for `?x`. Each such
match will produce a frame containing bindings for `?x` and `?person`.
The `and` of two queries can be viewed as a series combination of the
two component queries, as shown in [Figure 4.5](#Figure 4.5). The frames
that pass through the first query filter are filtered and further
extended by the second query.
[]{#Figure 4.6 label="Figure 4.6"}
![image](fig/chap4/Fig4.6.pdf){width="107mm"}
> **Figure 4.6:** The `or` combination of two queries is produced by
> operating on the stream of frames in parallel and merging the results.
[Figure 4.6](#Figure 4.6) shows the analogous method for computing the
`or` of two queries as a parallel combination of the two component
queries. The input stream of frames is extended separately by each
query. The two resulting streams are then merged to produce the final
output stream.
Even from this high-level description, it is apparent that the
processing of compound queries can be slow. For example, since a query
may produce more than one output frame for each input frame, and each
query in an `and` gets its input frames from the previous query, an
`and` query could, in the worst case, have to perform a number of
matches that is exponential in the number of queries (see [Exercise
4.76](#Exercise 4.76)).[^272] Though systems for handling only simple
queries are quite practical, dealing with complex queries is extremely
difficult.[^273]
From the stream-of-frames viewpoint, the `not` of some query acts as a
filter that removes all frames for which the query can be satisfied. For
instance, given the pattern
::: scheme
(not (job ?x (computer programmer)))
:::
we attempt, for each frame in the input stream, to produce extension
frames that satisfy `(job ?x (computer programmer))`. We remove from the
input stream all frames for which such extensions exist. The result is a
stream consisting of only those frames in which the binding for `?x`
does not satisfy `(job ?x (computer programmer))`. For example, in
processing the query
::: scheme
(and (supervisor ?x ?y) (not (job ?x (computer programmer))))
:::
the first clause will generate frames with bindings for `?x` and `?y`.
The `not` clause will then filter these by removing all frames in which
the binding for `?x` satisfies the restriction that `?x` is a computer
programmer.[^274]
The `lisp-value` special form is implemented as a similar filter on
frame streams. We use each frame in the stream to instantiate any
variables in the pattern, then apply the Lisp predicate. We remove from
the input stream all frames for which the predicate fails.
#### Unification {#unification .unnumbered}
In order to handle rules in the query language, we must be able to find
the rules whose conclusions match a given query pattern. Rule
conclusions are like assertions except that they can contain variables,
so we will need a generalization of pattern matching---called
*unification*---in which both the "pattern" and the "datum" may contain
variables.
A unifier takes two patterns, each containing constants and variables,
and determines whether it is possible to assign values to the variables
that will make the two patterns equal. If so, it returns a frame
containing these bindings. For example, unifying `(?x a ?y)` and
`(?y ?z a)` will specify a frame in which `?x`, `?y`, and `?z` must all
be bound to `a`. On the other hand, unifying `(?x ?y a)` and `(?x b ?y)`
will fail, because there is no value for `?y` that can make the two
patterns equal. (For the second elements of the patterns to be equal,
`?y` would have to be `b`; however, for the third elements to be equal,
`?y` would have to be `a`.) The unifier used in the query system, like
the pattern matcher, takes a frame as input and performs unifications
that are consistent with this frame.
The unification algorithm is the most technically difficult part of the
query system. With complex patterns, performing unification may seem to
require deduction. To unify `(?x ?x)` and `((a ?y c) (a b ?z))`, for
example, the algorithm must infer that `?x` should be `(a b c)`, `?y`
should be `b`, and `?z` should be `c`. We may think of this process as
solving a set of equations among the pattern components. In general,
these are simultaneous equations, which may require substantial
manipulation to solve.[^275] For example, unifying `(?x ?x)` and
`((a ?y c) (a b ?z))` may be thought of as specifying the simultaneous
equations
::: scheme
?x = (a ?y c)
?x = (a b ?z)
:::
These equations imply that
::: scheme
(a ?y c) = (a b ?z)
:::
which in turn implies that
::: scheme
a = a, ?y = b, c = ?z,
:::
and hence that
::: scheme
?x = (a b c)
:::
In a successful pattern match, all pattern variables become bound, and
the values to which they are bound contain only constants. This is also
true of all the examples of unification we have seen so far. In general,
however, a successful unification may not completely determine the
variable values; some variables may remain unbound and others may be
bound to values that contain variables.
Consider the unification of `(?x a)` and `((b ?y) ?z)`. We can deduce
that `?x = (b ?y)` and `a = ?z`, but we cannot further solve for `?x` or
`?y`. The unification doesn't fail, since it is certainly possible to
make the two patterns equal by assigning values to `?x` and `?y`. Since
this match in no way restricts the values `?y` can take on, no binding
for `?y` is put into the result frame. The match does, however, restrict
the value of `?x`. Whatever value `?y` has, `?x` must be `(b ?y)`. A
binding of `?x` to the pattern `(b ?y)` is thus put into the frame. If a
value for `?y` is later determined and added to the frame (by a pattern
match or unification that is required to be consistent with this frame),
the previously bound `?x` will refer to this value.[^276]
#### Applying rules {#applying-rules .unnumbered}
Unification is the key to the component of the query system that makes
inferences from rules. To see how this is accomplished, consider
processing a query that involves applying a rule, such as
::: scheme
(lives-near ?x (Hacker Alyssa P))
:::
To process this query, we first use the ordinary pattern-match procedure
described above to see if there are any assertions in the data base that
match this pattern. (There will not be any in this case, since our data
base includes no direct assertions about who lives near whom.) The next
step is to attempt to unify the query pattern with the conclusion of
each rule. We find that the pattern unifies with the conclusion of the
rule
::: scheme
(rule (lives-near ?person-1 ?person-2)
      (and (address ?person-1 (?town . ?rest-1))
           (address ?person-2 (?town . ?rest-2))
           (not (same ?person-1 ?person-2))))
:::
resulting in a frame specifying that `?person-2` is bound to
`(Hacker Alyssa P)` and that `?x` should be bound to (have the same
value as) `?person-1`. Now, relative to this frame, we evaluate the
compound query given by the body of the rule. Successful matches will
extend this frame by providing a binding for `?person-1`, and
consequently a value for `?x`, which we can use to instantiate the
original query pattern.
In general, the query evaluator uses the following method to apply a
rule when trying to establish a query pattern in a frame that specifies
bindings for some of the pattern variables:
- Unify the query with the conclusion of the rule to form, if
successful, an extension of the original frame.
- Relative to the extended frame, evaluate the query formed by the
body of the rule.
Notice how similar this is to the method for applying a procedure in the
`eval`/`apply` evaluator for Lisp:
- Bind the procedure's parameters to its arguments to form a frame
that extends the original procedure environment.
- Relative to the extended environment, evaluate the expression formed
by the body of the procedure.
The similarity between the two evaluators should come as no surprise.
Just as procedure definitions are the means of abstraction in Lisp, rule
definitions are the means of abstraction in the query language. In each
case, we unwind the abstraction by creating appropriate bindings and
evaluating the rule or procedure body relative to these.
#### Simple queries {#simple-queries-1 .unnumbered}
We saw earlier in this section how to evaluate simple queries in the
absence of rules. Now that we have seen how to apply rules, we can
describe how to evaluate simple queries by using both rules and
assertions.
Given the query pattern and a stream of frames, we produce, for each
frame in the input stream, two streams:
- a stream of extended frames obtained by matching the pattern against
all assertions in the data base (using the pattern matcher), and
- a stream of extended frames obtained by applying all possible rules
(using the unifier).[^277]
Appending these two streams produces a stream that consists of all the
ways that the given pattern can be satisfied consistent with the
original frame. These streams (one for each frame in the input stream)
are now all combined to form one large stream, which therefore consists
of all the ways that any of the frames in the original input stream can
be extended to produce a match with the given pattern.
#### The query evaluator and the driver loop {#the-query-evaluator-and-the-driver-loop .unnumbered}
Despite the complexity of the underlying matching operations, the system
is organized much like an evaluator for any language. The procedure that
coordinates the matching operations is called `qeval`, and it plays a
role analogous to that of the `eval` procedure for Lisp. `qeval` takes
as inputs a query and a stream of frames. Its output is a stream of
frames, corresponding to successful matches to the query pattern, that
extend some frame in the input stream, as indicated in [Figure
4.4](#Figure 4.4). Like `eval`, `qeval` classifies the different types
of expressions (queries) and dispatches to an appropriate procedure for
each. There is a procedure for each special form (`and`, `or`, `not`,
and `lisp-value`) and one for simple queries.
The driver loop, which is analogous to the `driver-loop` procedure for
the other evaluators in this chapter, reads queries from the terminal.
For each query, it calls `qeval` with the query and a stream that
consists of a single empty frame. This will produce the stream of all
possible matches (all possible extensions to the empty frame). For each
frame in the resulting stream, it instantiates the original query using
the values of the variables found in the frame. This stream of
instantiated queries is then printed.[^278]
The driver also checks for the special command `assert!`, which signals
that the input is not a query but rather an assertion or rule to be
added to the data base. For instance,
::: scheme
(assert! (job (Bitdiddle Ben) (computer wizard)))

(assert! (rule (wheel ?person)
               (and (supervisor ?middle-manager ?person)
                    (supervisor ?x ?middle-manager))))
:::
### Is Logic Programming Mathematical Logic? {#Section 4.4.3}
The means of combination used in the query language may at first seem
identical to the operations `and`, `or`, and `not` of mathematical
logic, and the application of query-language rules is in fact
accomplished through a legitimate method of inference.[^279] This
identification of the query language with mathematical logic is not
really valid, though, because the query language provides a *control
structure* that interprets the logical statements procedurally. We can
often take advantage of this control structure. For example, to find all
of the supervisors of programmers we could formulate a query in either
of two logically equivalent forms:
::: scheme
(and (job ?x (computer programmer)) (supervisor ?x ?y))
:::
or
::: scheme
(and (supervisor ?x ?y) (job ?x (computer programmer)))
:::
If a company has many more supervisors than programmers (the usual
case), it is better to use the first form rather than the second because
the data base must be scanned for each intermediate result (frame)
produced by the first clause of the `and`.
The aim of logic programming is to provide the programmer with
techniques for decomposing a computational problem into two separate
problems: "what" is to be computed, and "how" this should be computed.
This is accomplished by selecting a subset of the statements of
mathematical logic that is powerful enough to be able to describe
anything one might want to compute, yet weak enough to have a
controllable procedural interpretation. The intention here is that, on
the one hand, a program specified in a logic programming language should
be an effective program that can be carried out by a computer. Control
("how" to compute) is effected by using the order of evaluation of the
language. We should be able to arrange the order of clauses and the
order of subgoals within each clause so that the computation is done in
an order deemed to be effective and efficient. At the same time, we
should be able to view the result of the computation ("what" to compute)
as a simple consequence of the laws of logic.
Our query language can be regarded as just such a procedurally
interpretable subset of mathematical logic. An assertion represents a
simple fact (an atomic proposition). A rule represents the implication
that the rule conclusion holds for those cases where the rule body
holds. A rule has a natural procedural interpretation: To establish the
conclusion of the rule, establish the body of the rule. Rules,
therefore, specify computations. However, because rules can also be
regarded as statements of mathematical logic, we can justify any
"inference" accomplished by a logic program by asserting that the same
result could be obtained by working entirely within mathematical
logic.[^280]
#### Infinite loops {#infinite-loops .unnumbered}
A consequence of the procedural interpretation of logic programs is that
it is possible to construct hopelessly inefficient programs for solving
certain problems. An extreme case of inefficiency occurs when the system
falls into infinite loops in making deductions. As a simple example,
suppose we are setting up a data base of famous marriages, including
::: scheme
(assert! (married Minnie Mickey))
:::
If we now ask
::: scheme
(married Mickey ?who)
:::
we will get no response, because the system doesn't know that if $A$ is
married to $B$, then $B$ is married to $A$. So we assert the rule
::: scheme
(assert! (rule (married ?x ?y) (married ?y ?x)))
:::
and again query
::: scheme
(married Mickey ?who)
:::
Unfortunately, this will drive the system into an infinite loop, as
follows:
- The system finds that the `married` rule is applicable; that is, the
rule conclusion `(married ?x ?y)` successfully unifies with the
query pattern `(married Mickey ?who)` to produce a frame in which
`?x` is bound to `Mickey` and `?y` is bound to `?who`. So the
interpreter proceeds to evaluate the rule body `(married ?y ?x)` in
this frame---in effect, to process the query
`(married ?who Mickey)`.
- One answer appears directly as an assertion in the data base:
`(married Minnie Mickey)`.
- The `married` rule is also applicable, so the interpreter again
evaluates the rule body, which this time is equivalent to
`(married Mickey ?who)`.
The system is now in an infinite loop. Indeed, whether the system will
find the simple answer `(married Minnie Mickey)` before it goes into the
loop depends on implementation details concerning the order in which the
system checks the items in the data base. This is a very simple example
of the kinds of loops that can occur. Collections of interrelated rules
can lead to loops that are much harder to anticipate, and the appearance
of a loop can depend on the order of clauses in an `and` (see [Exercise
4.64](#Exercise 4.64)) or on low-level details concerning the order in
which the system processes queries.[^281]
#### Problems with `not` {#problems-with-not .unnumbered}
Another quirk in the query system concerns `not`. Given the data base of
[Section 4.4.1](#Section 4.4.1), consider the following two queries:
::: scheme
(and (supervisor ?x ?y)
     (not (job ?x (computer programmer))))

(and (not (job ?x (computer programmer)))
     (supervisor ?x ?y))
:::
These two queries do not produce the same result. The first query begins
by finding all entries in the data base that match `(supervisor ?x ?y)`,
and then filters the resulting frames by removing the ones in which the
value of `?x` satisfies `(job ?x (computer programmer))`. The second
query begins by filtering the incoming frames to remove those that can
satisfy `(job ?x (computer programmer))`. Since the only incoming frame
is empty, it checks the data base to see if there are any patterns that
satisfy `(job ?x (computer programmer))`. Since there generally are
entries of this form, the `not` clause filters out the empty frame and
returns an empty stream of frames. Consequently, the entire compound
query returns an empty stream.
The trouble is that our implementation of `not` really is meant to serve
as a filter on values for the variables. If a `not` clause is processed
with a frame in which some of the variables remain unbound (as does `?x`
in the example above), the system will produce unexpected results.
Similar problems occur with the use of `lisp-value`---the Lisp predicate
can't work if some of its arguments are unbound. See [Exercise
4.77](#Exercise 4.77).
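For instance, reordering the salary query shown earlier so that the `lisp-value` clause comes first,

::: scheme
(and (lisp-value > ?amount 30000)
     (salary ?person ?amount))
:::

would require applying the predicate `>` while `?amount` is still unbound, so the query cannot produce the intended result (the implementation in [Section 4.4.4.2](#Section 4.4.4.2) signals an error in this case).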
There is also a much more serious way in which the `not` of the query
language differs from the `not` of mathematical logic. In logic, we
interpret the statement "not $P$" to mean that $P$ is not true. In the
query system, however, "not $P$" means that $P$ is not deducible from
the knowledge in the data base. For example, given the personnel data
base of [Section 4.4.1](#Section 4.4.1), the system would happily deduce
all sorts of `not` statements, such as that Ben Bitdiddle is not a
baseball fan, that it is not raining outside, and that 2 + 2 is not
4.[^282] In other words, the `not` of logic programming languages
reflects the so-called *closed world assumption* that all relevant
information has been included in the data base.[^283]
> **[]{#Exercise 4.64 label="Exercise 4.64"}Exercise 4.64:** Louis
> Reasoner mistakenly deletes the `outranked-by` rule ([Section
> 4.4.1](#Section 4.4.1)) from the data base. When he realizes this, he
> quickly reinstalls it. Unfortunately, he makes a slight change in the
> rule, and types it in as
>
> ::: scheme
> (rule (outranked-by ?staff-person ?boss)
>       (or (supervisor ?staff-person ?boss)
>           (and (outranked-by ?middle-manager ?boss)
>                (supervisor ?staff-person ?middle-manager))))
> :::
>
> Just after Louis types this information into the system, DeWitt Aull
> comes by to find out who outranks Ben Bitdiddle. He issues the query
>
> ::: scheme
> (outranked-by (Bitdiddle Ben) ?who)
> :::
>
> After answering, the system goes into an infinite loop. Explain why.
> **[]{#Exercise 4.65 label="Exercise 4.65"}Exercise 4.65:** Cy D. Fect,
> looking forward to the day when he will rise in the organization,
> gives a query to find all the wheels (using the `wheel` rule of
> [Section 4.4.1](#Section 4.4.1)):
>
> ::: scheme
> (wheel ?who)
> :::
>
> To his surprise, the system responds
>
> ::: scheme
> *;;; Query results:*
> (wheel (Warbucks Oliver))
> (wheel (Bitdiddle Ben))
> (wheel (Warbucks Oliver))
> (wheel (Warbucks Oliver))
> (wheel (Warbucks Oliver))
> :::
>
> Why is Oliver Warbucks listed four times?
> **[]{#Exercise 4.66 label="Exercise 4.66"}Exercise 4.66:** Ben has
> been generalizing the query system to provide statistics about the
> company. For example, to find the total salaries of all the computer
> programmers one will be able to say
>
> ::: scheme
> (sum ?amount (and (job ?x (computer programmer)) (salary ?x ?amount)))
> :::
>
> In general, Ben's new system allows expressions of the form
>
> ::: scheme
> (accumulation-function $\langle$*variable*$\rangle$ $\langle$*query pattern*$\rangle$)
> :::
>
> where `accumulation-function` can be things like `sum`, `average`, or
> `maximum`. Ben reasons that it should be a cinch to implement this. He
> will simply feed the query pattern to `qeval`. This will produce a
> stream of frames. He will then pass this stream through a mapping
> function that extracts the value of the designated variable from each
> frame in the stream and feed the resulting stream of values to the
> accumulation function. Just as Ben completes the implementation and is
> about to try it out, Cy walks by, still puzzling over the `wheel`
> query result in [Exercise 4.65](#Exercise 4.65). When Cy shows Ben the
> system's response, Ben groans, "Oh, no, my simple accumulation scheme
> won't work!"
>
> What has Ben just realized? Outline a method he can use to salvage the
> situation.
> **[]{#Exercise 4.67 label="Exercise 4.67"}Exercise 4.67:** Devise a
> way to install a loop detector in the query system so as to avoid the
> kinds of simple loops illustrated in the text and in [Exercise
> 4.64](#Exercise 4.64). The general idea is that the system should
> maintain some sort of history of its current chain of deductions and
> should not begin processing a query that it is already working on.
> Describe what kind of information (patterns and frames) is included in
> this history, and how the check should be made. (After you study the
> details of the query-system implementation in [Section
> 4.4.4](#Section 4.4.4), you may want to modify the system to include
> your loop detector.)
> **[]{#Exercise 4.68 label="Exercise 4.68"}Exercise 4.68:** Define
> rules to implement the `reverse` operation of [Exercise
> 2.18](#Exercise 2.18), which returns a list containing the same
> elements as a given list in reverse order. (Hint: Use
> `append-to-form`.) Can your rules answer both `(reverse (1 2 3) ?x)`
> and `(reverse ?x (1 2 3))` ?
> **[]{#Exercise 4.69 label="Exercise 4.69"}Exercise 4.69:** Beginning
> with the data base and the rules you formulated in [Exercise
> 4.63](#Exercise 4.63), devise a rule for adding "greats" to a grandson
> relationship. This should enable the system to deduce that Irad is the
> great-grandson of Adam, or that Jabal and Jubal are the
> great-great-great-great-great-grandsons of Adam. (Hint: Represent the
> fact about Irad, for example, as `((great grandson) Adam Irad)`. Write
> rules that determine if a list ends in the word `grandson`. Use this
> to express a rule that allows one to derive the relationship
> `((great . ?rel) ?x ?y)`, where `?rel` is a list ending in
> `grandson`.) Check your rules on queries such as
> `((great grandson) ?g ?ggs)` and `(?relationship Adam Irad)`.
### Implementing the Query System {#Section 4.4.4}
[Section 4.4.2](#Section 4.4.2) described how the query system works.
Now we fill in the details by presenting a complete implementation of
the system.
#### The Driver Loop and Instantiation {#Section 4.4.4.1}
The driver loop for the query system repeatedly reads input expressions.
If the expression is a rule or assertion to be added to the data base,
then the information is added. Otherwise the expression is assumed to be
a query. The driver passes this query to the evaluator `qeval` together
with an initial frame stream consisting of a single empty frame. The
result of the evaluation is a stream of frames generated by satisfying
the query with variable values found in the data base. These frames are
used to form a new stream consisting of copies of the original query in
which the variables are instantiated with values supplied by the stream
of frames, and this final stream is printed at the terminal:
::: scheme
(define input-prompt \";;; Query input:\") (define output-prompt \";;;
Query results:\")
(define (query-driver-loop) (prompt-for-input input-prompt) (let ((q
(query-syntax-process (read)))) (cond ((assertion-to-be-added? q)
(add-rule-or-assertion! (add-assertion-body q)) (newline) (display
\"Assertion added to data base.\") (query-driver-loop)) (else (newline)
(display output-prompt) (display-stream (stream-map (lambda (frame)
(instantiate q frame (lambda (v f) (contract-question-mark v)))) (qeval
q (singleton-stream '())))) (query-driver-loop)))))
:::
Here, as in the other evaluators in this chapter, we use an abstract
syntax for the expressions of the query language. The implementation of
the expression syntax, including the predicate `assertion-to-be-added?`
and the selector `add-assertion-body`, is given in [Section
4.4.4.7](#Section 4.4.4.7). `add-rule-or-assertion!` is defined in
[Section 4.4.4.5](#Section 4.4.4.5).
Before doing any processing on an input expression, the driver loop
transforms it syntactically into a form that makes the processing more
efficient. This involves changing the representation of pattern
variables. When the query is instantiated, any variables that remain
unbound are transformed back to the input representation before being
printed. These transformations are performed by the two procedures
`query-syntax-process` and `contract-question-mark` ([Section
4.4.4.7](#Section 4.4.4.7)).
To instantiate an expression, we copy it, replacing any variables in the
expression by their values in a given frame. The values are themselves
instantiated, since they could contain variables (for example, if `?x`
in `exp` is bound to `?y` as the result of unification and `?y` is in
turn bound to 5). The action to take if a variable cannot be
instantiated is given by a procedural argument to `instantiate`.
::: scheme
(define (instantiate exp frame unbound-var-handler)
  (define (copy exp)
    (cond ((var? exp)
           (let ((binding (binding-in-frame exp frame)))
             (if binding
                 (copy (binding-value binding))
                 (unbound-var-handler exp frame))))
          ((pair? exp)
           (cons (copy (car exp)) (copy (cdr exp))))
          (else exp)))
  (copy exp))
:::
The procedures that manipulate bindings are defined in [Section
4.4.4.8](#Section 4.4.4.8).
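As a small illustration, assume the internal variable representation `(? x)` produced by `query-syntax-process` ([Section 4.4.4.7](#Section 4.4.4.7)) and the association-list frame format of [Section 4.4.4.8](#Section 4.4.4.8); then instantiating a pattern in which `?x` is bound to `?y` and `?y` is bound to 5 chases both bindings:

::: scheme
(instantiate '(a (? x) c)
             '(((? x) . (? y)) ((? y) . 5))
             (lambda (v f) v))   ; handler is not reached here
; => (a 5 c)
:::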
#### The Evaluator {#Section 4.4.4.2}
The `qeval` procedure, called by the `query-driver-loop`, is the basic
evaluator of the query system. It takes as inputs a query and a stream
of frames, and it returns a stream of extended frames. It identifies
special forms by a data-directed dispatch using `get` and `put`, just as
we did in implementing generic operations in [Chapter 2](#Chapter 2).
Any query that is not identified as a special form is assumed to be a
simple query, to be processed by `simple-query`.
::: scheme
(define (qeval query frame-stream)
  (let ((qproc (get (type query) 'qeval)))
    (if qproc
        (qproc (contents query) frame-stream)
        (simple-query query frame-stream))))
:::
`type` and `contents`, defined in [Section 4.4.4.7](#Section 4.4.4.7),
implement the abstract syntax of the special forms.
#### Simple queries {#simple-queries-2 .unnumbered}
The `simple-query` procedure handles simple queries. It takes as
arguments a simple query (a pattern) together with a stream of frames,
and it returns the stream formed by extending each frame by all
data-base matches of the query.
::: scheme
(define (simple-query query-pattern frame-stream)
  (stream-flatmap
   (lambda (frame)
     (stream-append-delayed
      (find-assertions query-pattern frame)
      (delay (apply-rules query-pattern frame))))
   frame-stream))
:::
For each frame in the input stream, we use `find-assertions` ([Section
4.4.4.3](#Section 4.4.4.3)) to match the pattern against all assertions
in the data base, producing a stream of extended frames, and we use
`apply-rules` ([Section 4.4.4.4](#Section 4.4.4.4)) to apply all
possible rules, producing another stream of extended frames. These two
streams are combined (using `stream-append-delayed`, [Section
4.4.4.6](#Section 4.4.4.6)) to make a stream of all the ways that the
given pattern can be satisfied consistent with the original frame (see
[Exercise 4.71](#Exercise 4.71)). The streams for the individual input
frames are combined using `stream-flatmap` ([Section
4.4.4.6](#Section 4.4.4.6)) to form one large stream of all the ways
that any of the frames in the original input stream can be extended to
produce a match with the given pattern.
#### Compound queries {#compound-queries-2 .unnumbered}
`and` queries are handled as illustrated in [Figure 4.5](#Figure 4.5) by
the `conjoin` procedure. `conjoin` takes as inputs the conjuncts and the
frame stream and returns the stream of extended frames. First, `conjoin`
processes the stream of frames to find the stream of all possible frame
extensions that satisfy the first query in the conjunction. Then, using
this as the new frame stream, it recursively applies `conjoin` to the
rest of the queries.
::: scheme
(define (conjoin conjuncts frame-stream)
  (if (empty-conjunction? conjuncts)
      frame-stream
      (conjoin (rest-conjuncts conjuncts)
               (qeval (first-conjunct conjuncts)
                      frame-stream))))
:::
The expression
::: scheme
(put 'and 'qeval conjoin)
:::
sets up `qeval` to dispatch to `conjoin` when an `and` form is
encountered.
`or` queries are handled similarly, as shown in [Figure
4.6](#Figure 4.6). The output streams for the various disjuncts of the
`or` are computed separately and merged using the `interleave-delayed`
procedure from [Section 4.4.4.6](#Section 4.4.4.6). (See [Exercise
4.71](#Exercise 4.71) and [Exercise 4.72](#Exercise 4.72).)
::: scheme
(define (disjoin disjuncts frame-stream)
  (if (empty-disjunction? disjuncts)
      the-empty-stream
      (interleave-delayed
       (qeval (first-disjunct disjuncts) frame-stream)
       (delay (disjoin (rest-disjuncts disjuncts)
                       frame-stream)))))

(put 'or 'qeval disjoin)
:::
The predicates and selectors for the syntax of conjuncts and disjuncts
are given in [Section 4.4.4.7](#Section 4.4.4.7).
#### Filters {#filters .unnumbered}
`not` is handled by the method outlined in [Section
4.4.2](#Section 4.4.2). We attempt to extend each frame in the input
stream to satisfy the query being negated, and we include a given frame
in the output stream only if it cannot be extended.
::: scheme
(define (negate operands frame-stream)
  (stream-flatmap
   (lambda (frame)
     (if (stream-null?
          (qeval (negated-query operands)
                 (singleton-stream frame)))
         (singleton-stream frame)
         the-empty-stream))
   frame-stream))

(put 'not 'qeval negate)
:::
`lisp-value` is a filter similar to `not`. Each frame in the stream is
used to instantiate the variables in the pattern, the indicated
predicate is applied, and the frames for which the predicate returns
false are filtered out of the input stream. An error results if there
are unbound pattern variables.
::: scheme
(define (lisp-value call frame-stream)
  (stream-flatmap
   (lambda (frame)
     (if (execute
          (instantiate call frame
            (lambda (v f)
              (error "Unknown pat var: LISP-VALUE" v))))
         (singleton-stream frame)
         the-empty-stream))
   frame-stream))

(put 'lisp-value 'qeval lisp-value)
:::
`execute`, which applies the predicate to the arguments, must `eval` the
predicate expression to get the procedure to apply. However, it must not
evaluate the arguments, since they are already the actual arguments, not
expressions whose evaluation (in Lisp) will produce the arguments. Note
that `execute` is implemented using `eval` and `apply` from the
underlying Lisp system.
::: scheme
(define (execute exp)
  (apply (eval (predicate exp) user-initial-environment)
         (args exp)))
:::
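For instance, assuming the `predicate` and `args` selectors of [Section 4.4.4.7](#Section 4.4.4.7) simply take the `car` and `cdr` of the call, an already-instantiated `lisp-value` call would be executed as follows:

::: scheme
(execute '(> 75000 30000))
; => #t: (eval '> user-initial-environment) yields the > procedure,
; which is then applied to the argument list (75000 30000)
:::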
The `always-true` special form provides for a query that is always
satisfied. It ignores its contents (normally empty) and simply passes
through all the frames in the input stream. `always-true` is used by the
`rule-body` selector ([Section 4.4.4.7](#Section 4.4.4.7)) to provide
bodies for rules that were defined without bodies (that is, rules whose
conclusions are always satisfied).
::: scheme
(define (always-true ignore frame-stream) frame-stream)

(put 'always-true 'qeval always-true)
:::
The selectors that define the syntax of `not` and `lisp-value` are given
in [Section 4.4.4.7](#Section 4.4.4.7).
#### Finding Assertions by Pattern Matching {#Section 4.4.4.3}
`find-assertions`, called by `simple-query` ([Section
4.4.4.2](#Section 4.4.4.2)), takes as input a pattern and a frame. It
returns a stream of frames, each extending the given one by a data-base
match of the given pattern. It uses `fetch-assertions` ([Section
4.4.4.5](#Section 4.4.4.5)) to get a stream of all the assertions in the
data base that should be checked for a match against the pattern and the
frame. The reason for `fetch-assertions` here is that we can often apply
simple tests that will eliminate many of the entries in the data base
from the pool of candidates for a successful match. The system would
still work if we eliminated `fetch-assertions` and simply checked a
stream of all assertions in the data base, but the computation would be
less efficient because we would need to make many more calls to the
matcher.
::: scheme
(define (find-assertions pattern frame)
  (stream-flatmap
   (lambda (datum)
     (check-an-assertion datum pattern frame))
   (fetch-assertions pattern frame)))
:::
`check-an-assertion` takes as arguments a pattern, a data object
(assertion), and a frame and returns either a one-element stream
containing the extended frame or `the-empty-stream` if the match fails.
::: scheme
(define (check-an-assertion assertion query-pat query-frame)
  (let ((match-result
         (pattern-match query-pat assertion query-frame)))
    (if (eq? match-result 'failed)
        the-empty-stream
        (singleton-stream match-result))))
:::
The basic pattern matcher returns either the symbol `failed` or an
extension of the given frame. The basic idea of the matcher is to check
the pattern against the data, element by element, accumulating bindings
for the pattern variables. If the pattern and the data object are the
same, the match succeeds and we return the frame of bindings accumulated
so far. Otherwise, if the pattern is a variable we extend the current
frame by binding the variable to the data, so long as this is consistent
with the bindings already in the frame. If the pattern and the data are
both pairs, we (recursively) match the `car` of the pattern against the
`car` of the data to produce a frame; in this frame we then match the
`cdr` of the pattern against the `cdr` of the data. If none of these
cases are applicable, the match fails and we return the symbol `failed`.
::: scheme
(define (pattern-match pat dat frame) (cond ((eq? frame 'failed)
'failed) ((equal? pat dat) frame) ((var? pat) (extend-if-consistent pat
dat frame)) ((and (pair? pat) (pair? dat)) (pattern-match (cdr pat) (cdr
dat) (pattern-match (car pat) (car dat) frame))) (else 'failed)))
:::
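As a concrete illustration (not part of the query-system program itself), here is how `pattern/match` behaves on a hypothetical assertion, using the internal `(? x)` representation of variables described in [Section 4.4.4.7](#Section 4.4.4.7) and the empty frame `'()` (frames are lists of bindings, [Section 4.4.4.8](#Section 4.4.4.8)):

::: scheme
;; Illustrative only; the assertion is hypothetical.
(pattern-match '(job (? x) (computer programmer))
               '(job (Hacker Alyssa P) (computer programmer))
               '())
;; => (((? x) Hacker Alyssa P))   ; ?x bound to (Hacker Alyssa P)

(pattern-match '(job (? x) (computer wizard))
               '(job (Hacker Alyssa P) (computer programmer))
               '())
;; => failed
:::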
Here is the procedure that extends a frame by adding a new binding, if
this is consistent with the bindings already in the frame:
::: scheme
(define (extend-if-consistent var dat frame) (let ((binding
(binding-in-frame var frame))) (if binding (pattern-match (binding-value
binding) dat frame) (extend var dat frame))))
:::
If there is no binding for the variable in the frame, we simply add the
binding of the variable to the data. Otherwise we match, in the frame,
the data against the value of the variable in the frame. If the stored
value contains only constants, as it must if it was stored during
pattern matching by `extend/if/consistent`, then the match simply tests
whether the stored and new values are the same. If so, it returns the
unmodified frame; if not, it returns a failure indication. The stored
value may, however, contain pattern variables if it was stored during
unification (see [Section 4.4.4.4](#Section 4.4.4.4)). The recursive
match of the stored pattern against the new data will add or check
bindings for the variables in this pattern. For example, suppose we have
a frame in which `?x` is bound to `(f ?y)` and `?y` is unbound, and we
wish to augment this frame by a binding of `?x` to `(f b)`. We look up
`?x` and find that it is bound to `(f ?y)`. This leads us to match
`(f ?y)` against the proposed new value `(f b)` in the same frame.
Eventually this match extends the frame by adding a binding of `?y` to
`b`. `?x` remains bound to `(f ?y)`. We never modify a stored binding
and we never store more than one binding for a given variable.
The procedures used by `extend/if/consistent` to manipulate bindings are
defined in [Section 4.4.4.8](#Section 4.4.4.8).
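Expressed in code (purely as an illustration, using the internal `(? x)` representation and the frame operations of [Section 4.4.4.8](#Section 4.4.4.8)), the example above looks like this:

::: scheme
;; A frame in which ?x is bound to (f ?y) and ?y is unbound:
(define sample-frame (extend '(? x) '(f (? y)) '()))

;; Augment it with a binding of ?x to (f b):
(extend-if-consistent '(? x) '(f b) sample-frame)
;; => (((? y) . b) ((? x) f (? y)))
;; ?y is now bound to b; the stored binding of ?x is unchanged.
:::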
#### Patterns with dotted tails {#patterns-with-dotted-tails .unnumbered}
If a pattern contains a dot followed by a pattern variable, the pattern
variable matches the rest of the data list (rather than the next element
of the data list), just as one would expect with the dotted-tail
notation described in [Exercise 2.20](#Exercise 2.20). Although the
pattern matcher we have just implemented doesn't look for dots, it does
behave as we want. This is because the Lisp `read` primitive, which is
used by `query/driver/loop` to read the query and represent it as a list
structure, treats dots in a special way.
When `read` sees a dot, instead of making the next item be the next
element of a list (the `car` of a `cons` whose `cdr` will be the rest of
the list) it makes the next item be the `cdr` of the list structure. For
example, the list structure produced by `read` for the pattern
`(computer ?type)` could be constructed by evaluating the expression
`(cons 'computer (cons '?type '()))`, and that for `(computer . ?type)`
could be constructed by evaluating the expression
`(cons 'computer '?type)`.
Thus, as `pattern/match` recursively compares `car`s and `cdr`s of a
data list and a pattern that had a dot, it eventually matches the
variable after the dot (which is a `cdr` of the pattern) against a
sublist of the data list, binding the variable to that list. For
example, matching the pattern `(computer . ?type)` against
`(computer programmer trainee)` will match `?type` against the list
`(programmer trainee)`.
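In code (illustrative only), with the pattern written in its internal form:

::: scheme
;; (computer . ?type) reads internally as (cons 'computer '(? type))
(pattern-match (cons 'computer '(? type))
               '(computer programmer trainee)
               '())
;; => (((? type) programmer trainee))
;; ?type is bound to the sublist (programmer trainee)
:::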
#### Rules and Unification {#Section 4.4.4.4}
`apply/rules` is the rule analog of `find/assertions` ([Section
4.4.4.3](#Section 4.4.4.3)). It takes as input a pattern and a frame,
and it forms a stream of extension frames by applying rules from the
data base. `stream/flatmap` maps `apply/a/rule` down the stream of
possibly applicable rules (selected by `fetch/rules`, [Section
4.4.4.5](#Section 4.4.4.5)) and combines the resulting streams of
frames.
::: scheme
(define (apply-rules pattern frame) (stream-flatmap (lambda (rule)
(apply-a-rule rule pattern frame)) (fetch-rules pattern frame)))
:::
`apply/a/rule` applies rules using the method outlined in [Section
4.4.2](#Section 4.4.2). It first augments its argument frame by unifying
the rule conclusion with the pattern in the given frame. If this
succeeds, it evaluates the rule body in this new frame.
Before any of this happens, however, the program renames all the
variables in the rule with unique new names. The reason for this is to
prevent the variables for different rule applications from becoming
confused with each other. For instance, if two rules both use a variable
named `?x`, then each one may add a binding for `?x` to the frame when
it is applied. These two `?x`'s have nothing to do with each other, and
we should not be fooled into thinking that the two bindings must be
consistent. Rather than rename variables, we could devise a more clever
environment structure; however, the renaming approach we have chosen
here is the most straightforward, even if not the most efficient. (See
[Exercise 4.79](#Exercise 4.79).) Here is the `apply/a/rule` procedure:
::: scheme
(define (apply-a-rule rule query-pattern query-frame) (let ((clean-rule
(rename-variables-in rule))) (let ((unify-result (unify-match
query-pattern (conclusion clean-rule) query-frame))) (if (eq?
unify-result 'failed) the-empty-stream (qeval (rule-body clean-rule)
(singleton-stream unify-result))))))
:::
The selectors `rule/body` and `conclusion` that extract parts of a rule
are defined in [Section 4.4.4.7](#Section 4.4.4.7).
We generate unique variable names by associating a unique identifier
(such as a number) with each rule application and combining this
identifier with the original variable names. For example, if the
rule-application identifier is 7, we might change each `?x` in the rule
to `?x/7` and each `?y` in the rule to `?y/7`. (`make/new/variable` and
`new/rule/application/id` are included with the syntax procedures in
[Section 4.4.4.7](#Section 4.4.4.7).)
::: scheme
(define (rename-variables-in rule) (let ((rule-application-id
(new-rule-application-id))) (define (tree-walk exp) (cond ((var? exp)
(make-new-variable exp rule-application-id)) ((pair? exp) (cons
(tree-walk (car exp)) (tree-walk (cdr exp)))) (else exp))) (tree-walk
rule)))
:::
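For instance, if the next rule-application identifier happens to be 7, renaming a hypothetical rule (shown here in internal form) proceeds as follows:

::: scheme
;; Illustrative only; assumes the next rule-application id is 7.
(rename-variables-in '(rule (same (? x) (? x))))
;; => (rule (same (? 7 x) (? 7 x)))
:::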
The unification algorithm is implemented as a procedure that takes as
inputs two patterns and a frame and returns either the extended frame or
the symbol `failed`. The unifier is like the pattern matcher except that
it is symmetrical---variables are allowed on both sides of the match.
`unify/match` is basically the same as `pattern/match`, except that
there is extra code (marked "`***`" below) to handle the case where the
object on the right side of the match is a variable.
::: scheme
(define (unify-match p1 p2 frame) (cond ((eq? frame 'failed) 'failed)
((equal? p1 p2) frame) ((var? p1) (extend-if-possible p1 p2 frame))
((var? p2) (extend-if-possible p2 p1 frame)) [; \*\*\*]{.roman} ((and
(pair? p1) (pair? p2)) (unify-match (cdr p1) (cdr p2) (unify-match (car
p1) (car p2) frame))) (else 'failed)))
:::
In unification, as in one-sided pattern matching, we want to accept a
proposed extension of the frame only if it is consistent with existing
bindings. The procedure `extend/if/possible` used in unification is the
same as the `extend/if/consistent` used in pattern matching except for
two special checks, marked "`***`" in the program below. In the first
case, if the variable we are trying to match is not bound, but the value
we are trying to match it with is itself a (different) variable, it is
necessary to check to see if the value is bound, and if so, to match its
value. If both parties to the match are unbound, we may bind either to
the other.
The second check deals with attempts to bind a variable to a pattern
that includes that variable. Such a situation can occur whenever a
variable is repeated in both patterns. Consider, for example, unifying
the two patterns `(?x ?x)` and
`(?y `$\langle$*`expression involving ?y`*$\rangle$`)` in a frame
where both `?x` and `?y` are unbound. First `?x` is matched against
`?y`, making a binding of `?x` to `?y`. Next, the same `?x` is matched
against the given expression involving `?y`. Since `?x` is already bound
to `?y`, this results in matching `?y` against the expression. If we
think of the unifier as finding a set of values for the pattern
variables that make the patterns the same, then these patterns imply
instructions to find a `?y` such that `?y` is equal to the expression
involving `?y`. There is no general method for solving such equations,
so we reject such bindings; these cases are recognized by the predicate
`depends/on?`.[^284] On the other hand, we do not want to reject
attempts to bind a variable to itself. For example, consider unifying
`(?x ?x)` and `(?y ?y)`. The second attempt to bind `?x` to `?y` matches
`?y` (the stored value of `?x`) against `?y` (the new value of `?x`).
This is taken care of by the `equal?` clause of `unify/match`.
::: scheme
(define (extend-if-possible var val frame) (let ((binding
(binding-in-frame var frame))) (cond (binding (unify-match
(binding-value binding) val frame)) ((var? val) [; \*\*\*]{.roman}
(let ((binding (binding-in-frame val frame))) (if binding (unify-match
var (binding-value binding) frame) (extend var val frame))))
((depends-on? val var frame) [; \*\*\*]{.roman} 'failed) (else (extend
var val frame)))))
:::
`depends/on?` is a predicate that tests whether an expression proposed
to be the value of a pattern variable depends on the variable. This must
be done relative to the current frame because the expression may contain
occurrences of a variable that already has a value that depends on our
test variable. The structure of `depends/on?` is a simple recursive tree
walk in which we substitute for the values of variables whenever
necessary.
::: scheme
(define (depends-on? exp var frame) (define (tree-walk e) (cond ((var?
e) (if (equal? var e) true (let ((b (binding-in-frame e frame))) (if b
(tree-walk (binding-value b)) false)))) ((pair? e) (or (tree-walk (car
e)) (tree-walk (cdr e)))) (else false))) (tree-walk exp))
:::
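To see both special checks at work, here are two illustrative calls (not part of the program), again using the internal variable representation and an empty frame:

::: scheme
;; Variables may appear on both sides of the match:
(unify-match '((? x) a) '(b (? y)) '())
;; => (((? y) . a) ((? x) . b))

;; Unifying (?x ?x) with (?y (f ?y)) would require ?y = (f ?y),
;; so depends-on? causes the match to fail:
(unify-match '((? x) (? x)) '((? y) (f (? y))) '())
;; => failed
:::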
#### Maintaining the Data Base {#Section 4.4.4.5}
One important problem in designing logic programming languages is that
of arranging things so that as few irrelevant data-base entries as
possible will be examined in checking a given pattern. In our system, in
addition to storing all assertions in one big stream, we store all
assertions whose `car`s are constant symbols in separate streams, in a
table indexed by the symbol. To fetch an assertion that may match a
pattern, we first check to see if the `car` of the pattern is a constant
symbol. If so, we return (to be tested using the matcher) all the stored
assertions that have the same `car`. If the pattern's `car` is not a
constant symbol, we return all the stored assertions. Cleverer methods
could also take advantage of information in the frame, or try also to
optimize the case where the `car` of the pattern is not a constant
symbol. We avoid building our criteria for indexing (using the `car`,
handling only the case of constant symbols) into the program; instead we
call on predicates and selectors that embody our criteria.
::: scheme
(define THE-ASSERTIONS the-empty-stream) (define (fetch-assertions
pattern frame) (if (use-index? pattern) (get-indexed-assertions pattern)
(get-all-assertions))) (define (get-all-assertions) THE-ASSERTIONS)
(define (get-indexed-assertions pattern) (get-stream (index-key-of
pattern) 'assertion-stream))
:::
`get/stream` looks up a stream in the table and returns an empty stream
if nothing is stored there.
::: scheme
(define (get-stream key1 key2) (let ((s (get key1 key2))) (if s s
the-empty-stream)))
:::
Rules are stored similarly, using the `car` of the rule conclusion. Rule
conclusions are arbitrary patterns, however, so they differ from
assertions in that they can contain variables. A pattern whose `car` is
a constant symbol can match rules whose conclusions start with a
variable as well as rules whose conclusions have the same `car`. Thus,
when fetching rules that might match a pattern whose `car` is a constant
symbol we fetch all rules whose conclusions start with a variable as
well as those whose conclusions have the same `car` as the pattern. For
this purpose we store all rules whose conclusions start with a variable
in a separate stream in our table, indexed by the symbol `?`.
::: scheme
(define THE-RULES the-empty-stream) (define (fetch-rules pattern frame)
(if (use-index? pattern) (get-indexed-rules pattern) (get-all-rules)))
(define (get-all-rules) THE-RULES) (define (get-indexed-rules pattern)
(stream-append (get-stream (index-key-of pattern) 'rule-stream)
(get-stream '? 'rule-stream)))
:::
`add/rule/or/assertion!` is used by `query/driver/loop` to add
assertions and rules to the data base. Each item is stored in the index,
if appropriate, and in a stream of all assertions or rules in the data
base.
::: scheme
(define (add-rule-or-assertion! assertion) (if (rule? assertion)
(add-rule! assertion) (add-assertion! assertion))) (define
(add-assertion! assertion) (store-assertion-in-index assertion) (let
((old-assertions THE-ASSERTIONS)) (set! THE-ASSERTIONS (cons-stream
assertion old-assertions)) 'ok)) (define (add-rule! rule)
(store-rule-in-index rule) (let ((old-rules THE-RULES)) (set! THE-RULES
(cons-stream rule old-rules)) 'ok))
:::
To actually store an assertion or a rule, we check to see if it can be
indexed. If so, we store it in the appropriate stream.
::: scheme
(define (store-assertion-in-index assertion) (if (indexable? assertion)
(let ((key (index-key-of assertion))) (let ((current-assertion-stream
(get-stream key 'assertion-stream))) (put key 'assertion-stream
(cons-stream assertion current-assertion-stream)))))) (define
(store-rule-in-index rule) (let ((pattern (conclusion rule))) (if
(indexable? pattern) (let ((key (index-key-of pattern))) (let
((current-rule-stream (get-stream key 'rule-stream))) (put key
'rule-stream (cons-stream rule current-rule-stream)))))))
:::
The following procedures define how the data-base index is used. A
pattern (an assertion or a rule conclusion) will be stored in the table
if it starts with a variable or a constant symbol.
::: scheme
(define (indexable? pat) (or (constant-symbol? (car pat)) (var? (car
pat))))
:::
The key under which a pattern is stored in the table is either `?` (if
it starts with a variable) or the constant symbol with which it starts.
::: scheme
(define (index-key-of pat) (let ((key (car pat))) (if (var? key) '?
key)))
:::
The index will be used to retrieve items that might match a pattern if
the pattern starts with a constant symbol.
::: scheme
(define (use-index? pat) (constant-symbol? (car pat)))
:::
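For example (illustrative only), with patterns in their internal form:

::: scheme
(index-key-of '(job (? x) (computer programmer)))   ; => job
(use-index?   '(job (? x) (computer programmer)))   ; => #t
(index-key-of '((? x) (? y)))                       ; => ?
(use-index?   '((? x) (? y)))                       ; => #f
:::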
> **[]{#Exercise 4.70 label="Exercise 4.70"}Exercise 4.70:** What is the
> purpose of the `let` bindings in the procedures `add/assertion!` and
> `add/rule!` ? What would be wrong with the following implementation of
> `add/assertion!` ? Hint: Recall the definition of the infinite stream
> of ones in [Section 3.5.2](#Section 3.5.2):
> `(define ones (cons/stream 1 ones))`.
>
> ::: scheme
> (define (add-assertion! assertion) (store-assertion-in-index
> assertion) (set! THE-ASSERTIONS (cons-stream assertion
> THE-ASSERTIONS)) 'ok)
> :::
#### Stream Operations {#Section 4.4.4.6}
The query system uses a few stream operations that were not presented in
[Chapter 3](#Chapter 3).
`stream/append/delayed` and `interleave/delayed` are just like
`stream/append` and `interleave` ([Section 3.5.3](#Section 3.5.3)),
except that they take a delayed argument (like the `integral` procedure
in [Section 3.5.4](#Section 3.5.4)). This postpones looping in some
cases (see [Exercise 4.71](#Exercise 4.71)).
::: scheme
(define (stream-append-delayed s1 delayed-s2) (if (stream-null? s1)
(force delayed-s2) (cons-stream (stream-car s1) (stream-append-delayed
(stream-cdr s1) delayed-s2)))) (define (interleave-delayed s1
delayed-s2) (if (stream-null? s1) (force delayed-s2) (cons-stream
(stream-car s1) (interleave-delayed (force delayed-s2) (delay
(stream-cdr s1))))))
:::
`stream/flatmap`, which is used throughout the query evaluator to map a
procedure over a stream of frames and combine the resulting streams of
frames, is the stream analog of the `flatmap` procedure introduced for
ordinary lists in [Section 2.2.3](#Section 2.2.3). Unlike ordinary
`flatmap`, however, we accumulate the streams with an interleaving
process, rather than simply appending them (see [Exercise
4.72](#Exercise 4.72) and [Exercise 4.73](#Exercise 4.73)).
::: scheme
(define (stream-flatmap proc s) (flatten-stream (stream-map proc s)))
(define (flatten-stream stream) (if (stream-null? stream)
the-empty-stream (interleave-delayed (stream-car stream) (delay
(flatten-stream (stream-cdr stream))))))
:::
The evaluator also uses the following simple procedure to generate a
stream consisting of a single element:
::: scheme
(define (singleton-stream x) (cons-stream x the-empty-stream))
:::
#### Query Syntax Procedures {#Section 4.4.4.7}
`type` and `contents`, used by `qeval` ([Section
4.4.4.2](#Section 4.4.4.2)), specify that a special form is identified
by the symbol in its `car`. They are the same as the `type/tag` and
`contents` procedures in [Section 2.4.2](#Section 2.4.2), except for the
error message.
::: scheme
(define (type exp) (if (pair? exp) (car exp) (error "Unknown expression
TYPE" exp))) (define (contents exp) (if (pair? exp) (cdr exp) (error
"Unknown expression CONTENTS" exp)))
:::
The following procedures, used by `query/driver/loop` (in [Section
4.4.4.1](#Section 4.4.4.1)), specify that rules and assertions are added
to the data base by expressions of the form
`(assert! `$\langle$*`rule/or/assertion`*$\rangle$`)`:
::: scheme
(define (assertion-to-be-added? exp) (eq? (type exp) 'assert!)) (define
(add-assertion-body exp) (car (contents exp)))
:::
Here are the syntax definitions for the `and`, `or`, `not`, and
`lisp/value` special forms ([Section 4.4.4.2](#Section 4.4.4.2)):
::: scheme
(define (empty-conjunction? exps) (null? exps)) (define (first-conjunct
exps) (car exps)) (define (rest-conjuncts exps) (cdr exps)) (define
(empty-disjunction? exps) (null? exps)) (define (first-disjunct exps)
(car exps)) (define (rest-disjuncts exps) (cdr exps)) (define
(negated-query exps) (car exps)) (define (predicate exps) (car exps))
(define (args exps) (cdr exps))
:::
The following three procedures define the syntax of rules:
::: scheme
(define (rule? statement) (tagged-list? statement 'rule)) (define
(conclusion rule) (cadr rule)) (define (rule-body rule) (if (null? (cddr
rule)) '(always-true) (caddr rule)))
:::
`query/driver/loop` ([Section 4.4.4.1](#Section 4.4.4.1)) calls
`query/syntax/process` to transform pattern variables in the expression,
which have the form `?symbol`, into the internal format `(? symbol)`.
That is to say, a pattern such as `(job ?x ?y)` is actually represented
internally by the system as `(job (? x) (? y))`. This increases the
efficiency of query processing, since it means that the system can check
to see if an expression is a pattern variable by checking whether the
`car` of the expression is the symbol `?`, rather than having to extract
characters from the symbol. The syntax transformation is accomplished by
the following procedure:[^285]
::: scheme
(define (query-syntax-process exp) (map-over-symbols
expand-question-mark exp)) (define (map-over-symbols proc exp) (cond
((pair? exp) (cons (map-over-symbols proc (car exp)) (map-over-symbols
proc (cdr exp)))) ((symbol? exp) (proc exp)) (else exp))) (define
(expand-question-mark symbol) (let ((chars (symbol->string symbol)))
(if (string=? (substring chars 0 1) "?") (list '? (string->symbol
(substring chars 1 (string-length chars)))) symbol)))
:::
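For example, the transformation described above:

::: scheme
(query-syntax-process '(job ?x ?y))
;; => (job (? x) (? y))
:::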
Once the variables are transformed in this way, the variables in a
pattern are lists starting with `?`, and the constant symbols (which
need to be recognized for data-base indexing, [Section
4.4.4.5](#Section 4.4.4.5)) are just the symbols.
::: scheme
(define (var? exp) (tagged-list? exp '?)) (define (constant-symbol? exp)
(symbol? exp))
:::
Unique variables are constructed during rule application (in [Section
4.4.4.4](#Section 4.4.4.4)) by means of the following procedures. The
unique identifier for a rule application is a number, which is
incremented each time a rule is applied.
::: scheme
(define rule-counter 0) (define (new-rule-application-id) (set!
rule-counter (+ 1 rule-counter)) rule-counter) (define
(make-new-variable var rule-application-id) (cons '? (cons
rule-application-id (cdr var))))
:::
When `query/driver/loop` instantiates the query to print the answer, it
converts any unbound pattern variables back to the right form for
printing, using
::: scheme
(define (contract-question-mark variable) (string->symbol
(string-append "?" (if (number? (cadr variable)) (string-append
(symbol->string (caddr variable)) "-" (number->string (cadr
variable))) (symbol->string (cadr variable))))))
:::
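For example (illustrative only), an original variable such as `(? x)` and a renamed variable such as `(? 7 x)` (produced by `make/new/variable` during rule application) print as follows:

::: scheme
(contract-question-mark '(? x))    ; => ?x
(contract-question-mark '(? 7 x))  ; => ?x-7
:::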
#### Frames and Bindings {#Section 4.4.4.8}
Frames are represented as lists of bindings, which are variable-value
pairs:
::: scheme
(define (make-binding variable value) (cons variable value)) (define
(binding-variable binding) (car binding)) (define (binding-value
binding) (cdr binding)) (define (binding-in-frame variable frame) (assoc
variable frame)) (define (extend variable value frame) (cons
(make-binding variable value) frame))
:::
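As a small illustration (not part of the program), these constructors and selectors behave as follows:

::: scheme
(define f (extend '(? x) '(Bitdiddle Ben) '()))   ; a one-binding frame
(binding-in-frame '(? x) f)                        ; => ((? x) Bitdiddle Ben)
(binding-value (binding-in-frame '(? x) f))        ; => (Bitdiddle Ben)
(binding-in-frame '(? y) f)                        ; => #f  (no binding)
:::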
> **[]{#Exercise 4.71 label="Exercise 4.71"}Exercise 4.71:** Louis
> Reasoner wonders why the `simple/query` and `disjoin` procedures
> ([Section 4.4.4.2](#Section 4.4.4.2)) are implemented using explicit
> `delay` operations, rather than being defined as follows:
>
> ::: scheme
> (define (simple-query query-pattern frame-stream) (stream-flatmap
> (lambda (frame) (stream-append (find-assertions query-pattern frame)
> (apply-rules query-pattern frame))) frame-stream)) (define (disjoin
> disjuncts frame-stream) (if (empty-disjunction? disjuncts)
> the-empty-stream (interleave (qeval (first-disjunct disjuncts)
> frame-stream) (disjoin (rest-disjuncts disjuncts) frame-stream))))
> :::
>
> Can you give examples of queries where these simpler definitions would
> lead to undesirable behavior?
> **[]{#Exercise 4.72 label="Exercise 4.72"}Exercise 4.72:** Why do
> `disjoin` and `stream/flatmap` interleave the streams rather than
> simply append them? Give examples that illustrate why interleaving
> works better. (Hint: Why did we use `interleave` in [Section
> 3.5.3](#Section 3.5.3)?)
> **[]{#Exercise 4.73 label="Exercise 4.73"}Exercise 4.73:** Why does
> `flatten/stream` use `delay` explicitly? What would be wrong with
> defining it as follows:
>
> ::: scheme
> (define (flatten-stream stream) (if (stream-null? stream)
> the-empty-stream (interleave (stream-car stream) (flatten-stream
> (stream-cdr stream)))))
> :::
> **[]{#Exercise 4.74 label="Exercise 4.74"}Exercise 4.74:** Alyssa P.
> Hacker proposes to use a simpler version of `stream/flatmap` in
> `negate`, `lisp/value`, and `find/assertions`. She observes that the
> procedure that is mapped over the frame stream in these cases always
> produces either the empty stream or a singleton stream, so no
> interleaving is needed when combining these streams.
>
> a. Fill in the missing expressions in Alyssa's program.
>
> ::: scheme
> (define (simple-stream-flatmap proc s) (simple-flatten (stream-map
> proc s))) (define (simple-flatten stream) (stream-map
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> (stream-filter
> $\color{SchemeDark}\langle$ ?? $\color{SchemeDark}\rangle$
> stream)))
> :::
>
> b. Does the query system's behavior change if we change it in this
> way?
> **[]{#Exercise 4.75 label="Exercise 4.75"}Exercise 4.75:** Implement
> for the query language a new special form called `unique`. `unique`
> should succeed if there is precisely one item in the data base
> satisfying a specified query. For example,
>
> ::: scheme
> (unique (job ?x (computer wizard)))
> :::
>
> should print the one-item stream
>
> ::: scheme
> (unique (job (Bitdiddle Ben) (computer wizard)))
> :::
>
> since Ben is the only computer wizard, and
>
> ::: scheme
> (unique (job ?x (computer programmer)))
> :::
>
> should print the empty stream, since there is more than one computer
> programmer. Moreover,
>
> ::: scheme
> (and (job ?x ?j) (unique (job ?anyone ?j)))
> :::
>
> should list all the jobs that are filled by only one person, and the
> people who fill them.
>
> There are two parts to implementing `unique`. The first is to write a
> procedure that handles this special form, and the second is to make
> `qeval` dispatch to that procedure. The second part is trivial, since
> `qeval` does its dispatching in a data-directed way. If your procedure
> is called `uniquely/asserted`, all you need to do is
>
> ::: scheme
> (put 'unique 'qeval uniquely-asserted)
> :::
>
> and `qeval` will dispatch to this procedure for every query whose
> `type` (`car`) is the symbol `unique`.
>
> The real problem is to write the procedure `uniquely/asserted`. This
> should take as input the `contents` (`cdr`) of the `unique` query,
> together with a stream of frames. For each frame in the stream, it
> should use `qeval` to find the stream of all extensions to the frame
> that satisfy the given query. Any stream that does not have exactly
> one item in it should be eliminated. The remaining streams should be
> passed back to be accumulated into one big stream that is the result
> of the `unique` query. This is similar to the implementation of the
> `not` special form.
>
> Test your implementation by forming a query that lists all people who
> supervise precisely one person.
> **[]{#Exercise 4.76 label="Exercise 4.76"}Exercise 4.76:** Our
> implementation of `and` as a series combination of queries ([Figure
> 4.5](#Figure 4.5)) is elegant, but it is inefficient because in
> processing the second query of the `and` we must scan the data base
> for each frame produced by the first query. If the data base has $n$
> elements, and a typical query produces a number of output frames
> proportional to $n$ (say $n / k$), then scanning the data base for
> each frame produced by the first query will require $n^2 / k$ calls to
> the pattern matcher. Another approach would be to process the two
> clauses of the `and` separately, then look for all pairs of output
> frames that are compatible. If each query produces $n / k$ output
> frames, then this means that we must perform $n^2 / k^2$ compatibility
> checks---a factor of $k$ fewer than the number of matches required in
> our current method.
>
> Devise an implementation of `and` that uses this strategy. You must
> implement a procedure that takes two frames as inputs, checks whether
> the bindings in the frames are compatible, and, if so, produces a
> frame that merges the two sets of bindings. This operation is similar
> to unification.
> **[]{#Exercise 4.77 label="Exercise 4.77"}Exercise 4.77:** In [Section
> 4.4.3](#Section 4.4.3) we saw that `not` and `lisp/value` can cause
> the query language to give "wrong" answers if these filtering
> operations are applied to frames in which variables are unbound.
> Devise a way to fix this shortcoming. One idea is to perform the
> filtering in a "delayed" manner by appending to the frame a "promise"
> to filter that is fulfilled only when enough variables have been bound
> to make the operation possible. We could wait to perform filtering
> until all other operations have been performed. However, for
> efficiency's sake, we would like to perform filtering as soon as
> possible so as to cut down on the number of intermediate frames
> generated.
> **[]{#Exercise 4.78 label="Exercise 4.78"}Exercise 4.78:** Redesign
> the query language as a nondeterministic program to be implemented
> using the evaluator of [Section 4.3](#Section 4.3), rather than as a
> stream process. In this approach, each query will produce a single
> answer (rather than the stream of all answers) and the user can type
> `try/again` to see more answers. You should find that much of the
> mechanism we built in this section is subsumed by nondeterministic
> search and backtracking. You will probably also find, however, that
> your new query language has subtle differences in behavior from the
> one implemented here. Can you find examples that illustrate this
> difference?
> **[]{#Exercise 4.79 label="Exercise 4.79"}Exercise 4.79:** When we
> implemented the Lisp evaluator in [Section 4.1](#Section 4.1), we saw
> how to use local environments to avoid name conflicts between the
> parameters of procedures. For example, in evaluating
>
> ::: scheme
> (define (square x) (* x x)) (define (sum-of-squares x y) (+ (square
> x) (square y))) (sum-of-squares 3 4)
> :::
>
> there is no confusion between the `x` in `square` and the `x` in
> `sum/of/squares`, because we evaluate the body of each procedure in an
> environment that is specially constructed to contain bindings for the
> local variables. In the query system, we used a different strategy to
> avoid name conflicts in applying rules. Each time we apply a rule we
> rename the variables with new names that are guaranteed to be unique.
> The analogous strategy for the Lisp evaluator would be to do away with
> local environments and simply rename the variables in the body of a
> procedure each time we apply the procedure.
>
> Implement for the query language a rule-application method that uses
> environments rather than renaming. See if you can build on your
> environment structure to create constructs in the query language for
> dealing with large systems, such as the rule analog of
> block-structured procedures. Can you relate any of this to the problem
> of making deductions in a context (e.g., "If I supposed that $P$ were
> true, then I would be able to deduce $A$ and $B$.") as a method of
> problem solving? (This problem is open-ended. A good answer is
> probably worth a Ph.D.)
# Computing with Register Machines {#Chapter 5}
> My aim is to show that the heavenly machine is not a kind of divine,
> live being, but a kind of clockwork (and he who believes that a clock
> has soul attributes the maker's glory to the work), insofar as nearly
> all the manifold motions are caused by a most simple and material
> force, just as all motions of the clock are caused by a single weight.
>
> ---Johannes Kepler (letter to Herwart von Hohenburg, 1605)
We began this book by studying processes and
by describing processes in terms of procedures written in Lisp. To
explain the meanings of these procedures, we used a succession of models
of evaluation: the substitution model of [Chapter 1](#Chapter 1), the
environment model of [Chapter 3](#Chapter 3), and the metacircular
evaluator of [Chapter 4](#Chapter 4). Our examination of the
metacircular evaluator, in particular, dispelled much of the mystery of
how Lisp-like languages are interpreted. But even the metacircular
evaluator leaves important questions unanswered, because it fails to
elucidate the mechanisms of control in a Lisp system. For instance, the
evaluator does not explain how the evaluation of a subexpression manages
to return a value to the expression that uses this value, nor does the
evaluator explain how some recursive procedures generate iterative
processes (that is, are evaluated using constant space) whereas other
recursive procedures generate recursive processes. These questions
remain unanswered because the metacircular evaluator is itself a Lisp
program and hence inherits the control structure of the underlying Lisp
system. In order to provide a more complete description of the control
structure of the Lisp evaluator, we must work at a more primitive level
than Lisp itself.
In this chapter we will describe processes in terms of the step-by-step
operation of a traditional computer. Such a computer, or *register
machine*, sequentially executes *instructions* that manipulate the
contents of a fixed set of storage elements called *registers*. A
typical register-machine instruction applies a primitive operation to
the contents of some registers and assigns the result to another
register. Our descriptions of processes executed by register machines
will look very much like "machine-language" programs for traditional
computers. However, instead of focusing on the machine language of any
particular computer, we will examine several Lisp procedures and design
a specific register machine to execute each procedure. Thus, we will
approach our task from the perspective of a hardware architect rather
than that of a machine-language computer programmer. In designing
register machines, we will develop mechanisms for implementing important
programming constructs such as recursion. We will also present a
language for describing designs for register machines. In [Section
5.2](#Section 5.2) we will implement a Lisp program that uses these
descriptions to simulate the machines we design.
Most of the primitive operations of our register machines are very
simple. For example, an operation might add the numbers fetched from two
registers, producing a result to be stored into a third register. Such
an operation can be performed by easily described hardware. In order to
deal with list structure, however, we will also use the memory
operations `car`, `cdr`, and `cons`, which require an elaborate
storage-allocation mechanism. In [Section 5.3](#Section 5.3) we study
their implementation in terms of more elementary operations.
In [Section 5.4](#Section 5.4), after we have accumulated experience
formulating simple procedures as register machines, we will design a
machine that carries out the algorithm described by the metacircular
evaluator of [Section 4.1](#Section 4.1). This will fill in the gap in
our understanding of how Scheme expressions are interpreted, by
providing an explicit model for the mechanisms of control in the
evaluator. In [Section 5.5](#Section 5.5) we will study a simple
compiler that translates Scheme programs into sequences of instructions
that can be executed directly with the registers and operations of the
evaluator register machine.
## Designing Register Machines {#Section 5.1}
To design a register machine, we must design its *data paths* (registers
and operations) and the *controller* that sequences these operations. To
illustrate the design of a simple register machine, let us examine
Euclid's Algorithm, which is used to compute the greatest common divisor
(gcd) of two integers. As we saw in [Section
1.2.5](#Section 1.2.5), Euclid's Algorithm can be carried out by an
iterative process, as specified by the following procedure:
::: scheme
(define (gcd a b) (if (= b 0) a (gcd b (remainder a b))))
:::
A machine to carry out this algorithm must keep track of two numbers,
$a$ and $b$, so let us assume that these numbers are stored in two
registers with those names. The basic operations required are testing
whether the contents of register `b` is zero and computing the remainder
of the contents of register `a` divided by the contents of register `b`.
The remainder operation is a complex process, but assume for the moment
that we have a primitive device that computes remainders. On each cycle
of the gcd algorithm, the contents of register `a` must be
replaced by the contents of register `b`, and the contents of `b` must
be replaced by the remainder of the old contents of `a` divided by the
old contents of `b`. It would be convenient if these replacements could
be done simultaneously, but in our model of register machines we will
assume that only one register can be assigned a new value at each step.
To accomplish the replacements, our machine will use a third "temporary"
register, which we call `t`. (First the remainder will be placed in `t`,
then the contents of `b` will be placed in `a`, and finally the
remainder stored in `t` will be placed in `b`.)
We can illustrate the registers and operations required for this machine
by using the data-path diagram shown in [Figure 5.1](#Figure 5.1). In
this diagram, the registers (`a`, `b`, and `t`) are represented by
rectangles. Each way to assign a value to a register is indicated by an
arrow with an `X` behind the head, pointing from the source of data to
the register. We can think of the `X` as a button that, when pushed,
allows the value at the source to "flow" into the designated register.
The label next to each button is the name we will use to refer to the
button. The names are arbitrary, and can be chosen to have mnemonic
value (for example, `a←b` denotes pushing the button that assigns the
contents of register `b` to register `a`). The source of data for a
register can be another register (as in the `a←b` assignment), an
operation result (as in the `t←r` assignment), or a constant (a built-in
value that cannot be changed, represented in a data-path diagram by a
triangle containing the constant).
[]{#Figure 5.1 label="Figure 5.1"}
![image](fig/chap5/Fig5.1a.pdf){width="58mm"}
**Figure 5.1:** Data paths for a gcd machine.
An operation that computes a value from constants and the contents of
registers is represented in a data-path diagram by a trapezoid
containing a name for the operation. For example, the box marked `rem`
in [Figure 5.1](#Figure 5.1) represents an operation that computes the
remainder of the contents of the registers `a` and `b` to which it is
attached. Arrows (without buttons) point from the input registers and
constants to the box, and arrows connect the operation's output value to
registers. A test is represented by a circle containing a name for the
test. For example, our gcd machine has an operation that
tests whether the contents of register `b` is zero. A test also has
arrows from its input registers and constants, but it has no output
arrows; its value is used by the controller rather than by the data
paths. Overall, the data-path diagram shows the registers and operations
that are required for the machine and how they must be connected. If we
view the arrows as wires and the `X` buttons as switches, the data-path
diagram is very like the wiring diagram for a machine that could be
constructed from electrical components.
[]{#Figure 5.2 label="Figure 5.2"}
![image](fig/chap5/Fig5.2.pdf){width="41mm"}
**Figure 5.2:** Controller for a gcd machine.
In order for the data paths to actually compute gcds, the
buttons must be pushed in the correct sequence. We will describe this
sequence in terms of a controller diagram, as illustrated in [Figure
5.2](#Figure 5.2). The elements of the controller diagram indicate how
the data-path components should be operated. The rectangular boxes in
the controller diagram identify data-path buttons to be pushed, and the
arrows describe the sequencing from one step to the next. The diamond in
the diagram represents a decision. One of the two sequencing arrows will
be followed, depending on the value of the data-path test identified in
the diamond. We can interpret the controller in terms of a physical
analogy: Think of the diagram as a maze in which a marble is rolling.
When the marble rolls into a box, it pushes the data-path button that is
named by the box. When the marble rolls into a decision node (such as
the test for `b` = 0), it leaves the node on the path determined by the
result of the indicated test. Taken together, the data paths and the
controller completely describe a machine for computing
gcds. We start the controller (the rolling marble) at the
place marked `start`, after placing numbers in registers `a` and `b`.
When the controller reaches `done`, we will find the value of the
gcd in register `a`.
> **[]{#Exercise 5.1 label="Exercise 5.1"}Exercise 5.1:** Design a
> register machine to compute factorials using the iterative algorithm
> specified by the following procedure. Draw data-path and controller
> diagrams for this machine.
>
> ::: scheme
> (define (factorial n) (define (iter product counter) (if (> counter
> n) product (iter (* counter product) (+ counter 1)))) (iter 1 1))
> :::
### A Language for Describing Register Machines {#Section 5.1.1}
Data-path and controller diagrams are adequate for representing simple
machines such as gcd, but they are unwieldy for describing
large machines such as a Lisp interpreter. To make it possible to deal
with complex machines, we will create a language that presents, in
textual form, all the information given by the data-path and controller
diagrams. We will start with a notation that directly mirrors the
diagrams.
We define the data paths of a machine by describing the registers and
the operations. To describe a register, we give it a name and specify
the buttons that control assignment to it. We give each of these buttons
a name and specify the source of the data that enters the register under
the button's control. (The source is a register, a constant, or an
operation.) To describe an operation, we give it a name and specify its
inputs (registers or constants).
We define the controller of a machine as a sequence of *instructions*
together with *labels* that identify *entry points* in the sequence. An
instruction is one of the following:
- The name of a data-path button to push to assign a value to a
register. (This corresponds to a box in the controller diagram.)
- A `test` instruction that performs a specified test.
- A conditional branch (`branch` instruction) to a location indicated
by a controller label, based on the result of the previous test.
(The test and branch together correspond to a diamond in the
controller diagram.) If the test is false, the controller should
continue with the next instruction in the sequence. Otherwise, the
controller should continue with the instruction after the label.
- An unconditional branch (`goto` instruction) naming a controller
label at which to continue execution.
The machine starts at the beginning of the controller instruction
sequence and stops when execution reaches the end of the sequence.
Except when a branch changes the flow of control, instructions are
executed in the order in which they are listed.
> **[]{#Figure 5.3 label="Figure 5.3"}Figure 5.3:** $\downarrow$ A
> specification of the gcd machine.
>
> ::: scheme
> (data-paths (registers ((name a) (buttons ((name a<-b) (source
> (register b))))) ((name b) (buttons ((name b<-t) (source (register
> t))))) ((name t) (buttons ((name t<-r) (source (operation rem))))))
> (operations ((name rem) (inputs (register a) (register b))) ((name =)
> (inputs (register b) (constant 0))))) (controller test-b [;
> label]{.roman} (test =) [; test]{.roman} (branch (label gcd-done))
> [; conditional branch]{.roman} (t<-r) [; button push]{.roman}
> (a<-b) [; button push]{.roman} (b<-t) [; button push]{.roman}
> (goto (label test-b)) [; unconditional branch]{.roman} gcd-done) [;
> label]{.roman}
> :::
[Figure 5.3](#Figure 5.3) shows the gcd machine described
in this way. This example only hints at the generality of these
descriptions, since the gcd machine is a very simple case:
Each register has only one button, and each button and test is used only
once in the controller.
Unfortunately, it is difficult to read such a description. In order to
understand the controller instructions we must constantly refer back to
the definitions of the button names and the operation names, and to
understand what the buttons do we may have to refer to the definitions
of the operation names. We will thus transform our notation to combine
the information from the data-path and controller descriptions so that
we see it all together.
To obtain this form of description, we will replace the arbitrary button
and operation names by the definitions of their behavior. That is,
instead of saying (in the controller) "Push button `t←r`" and separately
saying (in the data paths) "Button `t←r` assigns the value of the `rem`
operation to register `t`" and "The `rem` operation's inputs are the
contents of registers `a` and `b`," we will say (in the controller)
"Push the button that assigns to register `t` the value of the `rem`
operation on the contents of registers `a` and `b`." Similarly, instead
of saying (in the controller) "Perform the `=` test" and separately
saying (in the data paths) "The `=` test operates on the contents of
register `b` and the constant 0," we will say "Perform the `=` test on
the contents of register `b` and the constant 0." We will omit the
data-path description, leaving only the controller sequence. Thus, the
gcd machine is described as follows:
::: scheme
(controller test-b (test (op =) (reg b) (const 0)) (branch (label
gcd-done)) (assign t (op rem) (reg a) (reg b)) (assign a (reg b))
(assign b (reg t)) (goto (label test-b)) gcd-done)
:::
This form of description is easier to read than the kind illustrated in
[Figure 5.3](#Figure 5.3), but it also has disadvantages:
- It is more verbose for large machines, because complete descriptions
of the data-path elements are repeated whenever the elements are
mentioned in the controller instruction sequence. (This is not a
problem in the gcd example, because each operation and
button is used only once.) Moreover, repeating the data-path
descriptions obscures the actual data-path structure of the machine;
it is not obvious for a large machine how many registers,
operations, and buttons there are and how they are interconnected.
- Because the controller instructions in a machine definition look
like Lisp expressions, it is easy to forget that they are not
arbitrary Lisp expressions. They can notate only legal machine
operations. For example, operations can operate directly only on
constants and the contents of registers, not on the results of other
operations.
In spite of these disadvantages, we will use this register-machine
language throughout this chapter, because we will be more concerned with
understanding controllers than with understanding the elements and
connections in data paths. We should keep in mind, however, that
data-path design is crucial in designing real machines.
> **[]{#Exercise 5.2 label="Exercise 5.2"}Exercise 5.2:** Use the
> register-machine language to describe the iterative factorial machine
> of [Exercise 5.1](#Exercise 5.1).
#### Actions {#actions .unnumbered}
Let us modify the gcd machine so that we can type in the
numbers whose gcd we want and get the answer printed at
our terminal. We will not discuss how to make a machine that can read
and print, but will assume (as we do when we use `read` and `display` in
Scheme) that they are available as primitive operations.[^286]
[]{#Figure 5.4 label="Figure 5.4"}
![image](fig/chap5/Fig5.4b.pdf){width="107mm"}
**Figure 5.4:** A gcd machine that reads inputs and prints
results.
`read` is like the operations we have been using in that it produces a
value that can be stored in a register. But `read` does not take inputs
from any registers; its value depends on something that happens outside
the parts of the machine we are designing. We will allow our machine's
operations to have such behavior, and thus will draw and notate the use
of `read` just as we do any other operation that computes a value.
`print`, on the other hand, differs from the operations we have been
using in a fundamental way: It does not produce an output value to be
stored in a register. Though it has an effect, this effect is not on a
part of the machine we are designing. We will refer to this kind of
operation as an *action*. We will represent an action in a data-path
diagram just as we represent an operation that computes a value---as a
trapezoid that contains the name of the action. Arrows point to the
action box from any inputs (registers or constants). We also associate a
button with the action. Pushing the button makes the action happen. To
make a controller push an action button we use a new kind of instruction
called `perform`. Thus, the action of printing the contents of register
`a` is represented in a controller sequence by the instruction
::: scheme
(perform (op print) (reg a))
:::
[Figure 5.4](#Figure 5.4) shows the data paths and controller for the
new gcd machine. Instead of having the machine stop after
printing the answer, we have made it start over, so that it repeatedly
reads a pair of numbers, computes their gcd, and prints
the result. This structure is like the driver loops we used in the
interpreters of [Chapter 4](#Chapter 4).
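In the register-machine language, a controller sequence consistent with [Figure 5.4](#Figure 5.4) might look as follows (a sketch only; the label names are illustrative, and `read` and `print` are assumed to be available as primitive operations, as discussed above):

::: scheme
(controller
 gcd-loop
   (assign a (op read))
   (assign b (op read))
 test-b
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label test-b))
 gcd-done
   (perform (op print) (reg a))
   (goto (label gcd-loop)))
:::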
### Abstraction in Machine Design {#Section 5.1.2}
We will often define a machine to include "primitive" operations that
are actually very complex. For example, in [Section 5.4](#Section 5.4)
and [Section 5.5](#Section 5.5) we will treat Scheme's environment
manipulations as primitive. Such abstraction is valuable because it
allows us to ignore the details of parts of a machine so that we can
concentrate on other aspects of the design. The fact that we have swept
a lot of complexity under the rug, however, does not mean that a machine
design is unrealistic. We can always replace the complex "primitives" by
simpler primitive operations.
Consider the gcd machine. The machine has an instruction
that computes the remainder of the contents of registers `a` and `b` and
assigns the result to register `t`. If we want to construct the
gcd machine without using a primitive remainder operation,
we must specify how to compute remainders in terms of simpler
operations, such as subtraction. Indeed, we can write a Scheme procedure
that finds remainders in this way:
::: scheme
(define (remainder n d) (if (< n d) n (remainder (- n d) d)))
:::
[]{#Figure 5.5 label="Figure 5.5"}
![image](fig/chap5/Fig5.5a.pdf){width="67mm"}
> **Figure 5.5:** Data paths and controller for the elaborated
> gcd machine.
We can thus replace the remainder operation in the gcd
machine's data paths with a subtraction operation and a comparison test.
[Figure 5.5](#Figure 5.5) shows the data paths and controller for the
elaborated machine. The instruction
::: scheme
(assign t (op rem) (reg a) (reg b))
:::
in the gcd controller definition is replaced by a sequence
of instructions that contains a loop, as shown in [Figure
5.6](#Figure 5.6).
> **[]{#Figure 5.6 label="Figure 5.6"}Figure 5.6:** $\downarrow$
> Controller instruction sequence for the gcd machine in
> [Figure 5.5](#Figure 5.5).
>
> ::: scheme
> (controller test-b (test (op =) (reg b) (const 0)) (branch (label
> gcd-done)) (assign t (reg a)) rem-loop (test (op <) (reg t) (reg b))
> (branch (label rem-done)) (assign t (op -) (reg t) (reg b)) (goto
> (label rem-loop)) rem-done (assign a (reg b)) (assign b (reg t)) (goto
> (label test-b)) gcd-done)
> :::
> **[]{#Exercise 5.3 label="Exercise 5.3"}Exercise 5.3:** Design a
> machine to compute square roots using Newton's method, as described in
> [Section 1.1.7](#Section 1.1.7):
>
> ::: scheme
> (define (sqrt x) (define (good-enough? guess) (< (abs (- (square
> guess) x)) 0.001)) (define (improve guess) (average guess (/ x
> guess))) (define (sqrt-iter guess) (if (good-enough? guess) guess
> (sqrt-iter (improve guess)))) (sqrt-iter 1.0))
> :::
>
> Begin by assuming that `good/enough?` and `improve` operations are
> available as primitives. Then show how to expand these in terms of
> arithmetic operations. Describe each version of the `sqrt` machine
> design by drawing a data-path diagram and writing a controller
> definition in the register-machine language.
### Subroutines {#Section 5.1.3}
When designing a machine to perform a computation, we would often prefer
to arrange for components to be shared by different parts of the
computation rather than duplicate the components. Consider a machine
that includes two gcd computations---one that finds the
gcd of the contents of registers `a` and `b` and one that
finds the gcd of the contents of registers `c` and `d`. We
might start by assuming we have a primitive `gcd` operation, then expand
the two instances of `gcd` in terms of more primitive operations.
[Figure 5.7](#Figure 5.7) shows just the gcd portions of
the resulting machine's data paths, without showing how they connect to
the rest of the machine. The figure also shows the corresponding
portions of the machine's controller sequence.
[]{#Figure 5.7 label="Figure 5.7"}
![image](fig/chap5/Fig5.7b.pdf){width="105mm"}
> **Figure 5.7:** Portions of the data paths and controller sequence for
> a machine with two gcd computations.
This machine has two remainder operation boxes and two boxes for testing
equality. If the duplicated components are complicated, as is the
remainder box, this will not be an economical way to build the machine.
We can avoid duplicating the data-path components by using the same
components for both gcd computations, provided that doing
so will not affect the rest of the larger machine's computation. If the
values in registers `a` and `b` are not needed by the time the
controller gets to `gcd/2` (or if these values can be moved to other
registers for safekeeping), we can change the machine so that it uses
registers `a` and `b`, rather than registers `c` and `d`, in computing
the second gcd as well as the first. If we do this, we
obtain the controller sequence shown in [Figure 5.8](#Figure 5.8).
> **[]{#Figure 5.8 label="Figure 5.8"}Figure 5.8:** $\downarrow$
> Portions of the controller sequence for a machine that uses the same
> data-path components for two different gcd computations.
>
> ::: scheme
> gcd-1 (test (op =) (reg b) (const 0)) (branch (label after-gcd-1))
> (assign t (op rem) (reg a) (reg b)) (assign a (reg b)) (assign b (reg
> t)) (goto (label gcd-1)) after-gcd-1 $\dots$ gcd-2 (test (op =) (reg
> b) (const 0)) (branch (label after-gcd-2)) (assign t (op rem) (reg a)
> (reg b)) (assign a (reg b)) (assign b (reg t)) (goto (label gcd-2))
> after-gcd-2
> :::
We have removed the duplicate data-path components (so that the data
paths are again as in [Figure 5.1](#Figure 5.1)), but the controller now
has two gcd sequences that differ only in their
entry-point labels. It would be better to replace these two sequences by
branches to a single sequence---a `gcd` *subroutine*---at the end of
which we branch back to the correct place in the main instruction
sequence. We can accomplish this as follows: Before branching to `gcd`,
we place a distinguishing value (such as 0 or 1) into a special
register, `continue`. At the end of the `gcd` subroutine we return
either to `after/gcd/1` or to `after/gcd/2`, depending on the value of
the `continue` register. [Figure 5.9](#Figure 5.9) shows the relevant
portion of the resulting controller sequence, which includes only a
single copy of the `gcd` instructions.
> **[]{#Figure 5.9 label="Figure 5.9"}Figure 5.9:** $\downarrow$ Using a
> `continue` register to avoid the duplicate controller sequence in
> [Figure 5.8](#Figure 5.8).
>
> ::: scheme
> gcd (test (op =) (reg b) (const 0)) (branch (label gcd-done)) (assign
> t (op rem) (reg a) (reg b)) (assign a (reg b)) (assign b (reg t))
> (goto (label gcd)) gcd-done (test (op =) (reg continue) (const 0))
> (branch (label after-gcd-1)) (goto (label after-gcd-2)) $\dots$ [;;
> Before branching to `gcd` from the first place where]{.roman} [;; it
> is needed, we place 0 in the `continue` register]{.roman} (assign
> continue (const 0)) (goto (label gcd)) after-gcd-1 $\dots$
>
> [;; Before the second use of `gcd`, we place 1]{.roman} [;; in the
> `continue` register]{.roman} (assign continue (const 1)) (goto (label
> gcd)) after-gcd-2
> :::
This is a reasonable approach for handling small problems, but it would
be awkward if there were many instances of gcd
computations in the controller sequence. To decide where to continue
executing after the `gcd` subroutine, we would need tests in the data
paths and branch instructions in the controller for all the places that
use `gcd`. A more powerful method for implementing subroutines is to
have the `continue` register hold the label of the entry point in the
controller sequence at which execution should continue when the
subroutine is finished. Implementing this strategy requires a new kind
of connection between the data paths and the controller of a register
machine: There must be a way to assign to a register a label in the
controller sequence in such a way that this value can be fetched from
the register and used to continue execution at the designated entry
point.
To reflect this ability, we will extend the `assign` instruction of the
register-machine language to allow a register to be assigned as value a
label from the controller sequence (as a special kind of constant). We
will also extend the `goto` instruction to allow execution to continue
at the entry point described by the contents of a register rather than
only at an entry point described by a constant label. Using these new
constructs we can terminate the `gcd` subroutine with a branch to the
location stored in the `continue` register. This leads to the controller
sequence shown in [Figure 5.10](#Figure 5.10).
> **[]{#Figure 5.10 label="Figure 5.10"}Figure 5.10:** $\downarrow$
> Assigning labels to the `continue` register simplifies and generalizes
> the strategy shown in [Figure 5.9](#Figure 5.9).
>
> ::: scheme
> gcd (test (op =) (reg b) (const 0)) (branch (label gcd-done)) (assign
> t (op rem) (reg a) (reg b)) (assign a (reg b)) (assign b (reg t))
> (goto (label gcd)) gcd-done (goto (reg continue)) $\dots$ [;;
> Before calling `gcd`, we assign to `continue`]{.roman} [;; the label
> to which `gcd` should return.]{.roman} (assign continue (label
> after-gcd-1)) (goto (label gcd)) after-gcd-1 $\dots$ [;; Here is
> the second call to `gcd`,]{.roman} [;; with a different
> continuation.]{.roman} (assign continue (label after-gcd-2)) (goto
> (label gcd)) after-gcd-2
> :::
A machine with more than one subroutine could use multiple continuation
registers (e.g., `gcd-continue`, `factorial-continue`) or we could have
all subroutines share a single `continue` register. Sharing is more
economical, but we must be careful if we have a subroutine (`sub1`) that
calls another subroutine (`sub2`). Unless `sub1` saves the contents of
`continue` in some other register before setting up `continue` for the
call to `sub2`, `sub1` will not know where to go when it is finished.
The mechanism developed in the next section to handle recursion also
provides a better solution to this problem of nested subroutine calls.
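Until then, here is a hedged sketch of the register-shuffling workaround just described; all of the names here (`sub1-return`, `after-sub2`, and so on) are invented for illustration and do not come from the text:

::: scheme
sub1
  (assign sub1-return (reg continue)) ; remember where sub1 must return
  (assign continue (label after-sub2))
  (goto (label sub2))
after-sub2
  $\dots$                             ; the rest of sub1's work
  (goto (reg sub1-return))            ; now sub1 can return to its caller
:::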
### Using a Stack to Implement Recursion {#Section 5.1.4}
With the ideas illustrated so far, we can implement any iterative
process by specifying a register machine that has a register
corresponding to each state variable of the process. The machine
repeatedly executes a controller loop, changing the contents of the
registers, until some termination condition is satisfied. At each point
in the controller sequence, the state of the machine (representing the
state of the iterative process) is completely determined by the contents
of the registers (the values of the state variables).
Implementing recursive processes, however, requires an additional
mechanism. Consider the following recursive method for computing
factorials, which we first examined in [Section 1.2.1](#Section 1.2.1):
::: scheme
(define (factorial n) (if (= n 1) 1 (\* (factorial (- n 1)) n)))
:::
As we see from the procedure, computing $n!$ requires computing
$(n - 1)!$. Our gcd machine, modeled on the procedure
::: scheme
(define (gcd a b) (if (= b 0) a (gcd b (remainder a b))))
:::
similarly had to compute another gcd. But there is an
important difference between the `gcd` procedure, which reduces the
original computation to a new gcd computation, and
`factorial`, which requires computing another factorial as a subproblem.
In gcd, the answer to the new gcd
computation is the answer to the original problem. To compute the next
gcd, we simply place the new arguments in the input
registers of the gcd machine and reuse the machine's data
paths by executing the same controller sequence. When the machine is
finished solving the final gcd problem, it has completed
the entire computation.
In the case of factorial (or any recursive process) the answer to the
new factorial subproblem is not the answer to the original problem. The
value obtained for $(n - 1)!$ must be multiplied by $n$ to get the final
answer. If we try to imitate the gcd design, and solve the
factorial subproblem by decrementing the `n` register and rerunning the
factorial machine, we will no longer have available the old value of `n`
by which to multiply the result. We thus need a second factorial machine
to work on the subproblem. This second factorial computation itself has
a factorial subproblem, which requires a third factorial machine, and so
on. Since each factorial machine contains another factorial machine
within it, the total machine contains an infinite nest of similar
machines and hence cannot be constructed from a fixed, finite number of
parts.
Nevertheless, we can implement the factorial process as a register
machine if we can arrange to use the same components for each nested
instance of the machine. Specifically, the machine that computes $n!$
should use the same components to work on the subproblem of computing
$(n - 1)!$, on the subproblem for $(n - 2)!$, and so on. This is
plausible because, although the factorial process dictates that an
unbounded number of copies of the same machine are needed to perform a
computation, only one of these copies needs to be active at any given
time. When the machine encounters a recursive subproblem, it can suspend
work on the main problem, reuse the same physical parts to work on the
subproblem, then continue the suspended computation.
In the subproblem, the contents of the registers will be different than
they were in the main problem. (In this case the `n` register is
decremented.) In order to be able to continue the suspended computation,
the machine must save the contents of any registers that will be needed
after the subproblem is solved so that these can be restored to continue
the suspended computation. In the case of factorial, we will save the
old value of `n`, to be restored when we are finished computing the
factorial of the decremented `n` register.[^287]
Since there is no *a priori* limit on the depth of nested recursive
calls, we may need to save an arbitrary number of register values. These
values must be restored in the reverse of the order in which they were
saved, since in a nest of recursions the last subproblem to be entered
is the first to be finished. This dictates the use of a *stack*, or
"last in, first out" data structure, to save register values. We can
extend the register-machine language to include a stack by adding two
kinds of instructions: Values are placed on the stack using a `save`
instruction and restored from the stack using a `restore` instruction.
After a sequence of values has been `save`d on the stack, a sequence of
`restore`s will retrieve these values in reverse order.[^288]
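For instance (a hedged fragment, not drawn from any particular machine in the text), a controller that saves two registers must restore them in the opposite order:

::: scheme
(save n)            ; push the contents of n
(save continue)     ; push the contents of continue on top of it
$\dots$
(restore continue)  ; continue gets the value that was pushed last
(restore n)         ; n gets the value that was pushed first
:::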
With the aid of the stack, we can reuse a single copy of the factorial
machine's data paths for each factorial subproblem. There is a similar
design issue in reusing the controller sequence that operates the data
paths. To reexecute the factorial computation, the controller cannot
simply loop back to the beginning, as with an iterative process, because
after solving the $(n - 1)!$ subproblem the machine must still multiply
the result by $n$. The controller must suspend its computation of $n!$,
solve the $(n - 1)!$ subproblem, then continue its computation of $n!$.
This view of the factorial computation suggests the use of the
subroutine mechanism described in [Section 5.1.3](#Section 5.1.3), which
has the controller use a `continue` register to transfer to the part of
the sequence that solves a subproblem and then continue where it left
off on the main problem. We can thus make a factorial subroutine that
returns to the entry point stored in the `continue` register. Around
each subroutine call, we save and restore `continue` just as we do the
`n` register, since each "level" of the factorial computation will use
the same `continue` register. That is, the factorial subroutine must put
a new value in `continue` when it calls itself for a subproblem, but it
will need the old value in order to return to the place that called it
to solve a subproblem.
[Figure 5.11](#Figure 5.11) shows the data paths and controller for a
machine that implements the recursive `factorial` procedure. The machine
has a stack and three registers, called `n`, `val`, and `continue`. To
simplify the data-path diagram, we have not named the
register-assignment buttons, only the stack-operation buttons (`sc` and
`sn` to save registers, `rc` and `rn` to restore registers). To operate
the machine, we put in register `n` the number whose factorial we wish
to compute and start the machine. When the machine reaches `fact-done`,
the computation is finished and the answer will be found in the `val`
register. In the controller sequence, `n` and `continue` are saved
before each recursive call and restored upon return from the call.
Returning from a call is accomplished by branching to the location
stored in `continue`. `continue` is initialized when the machine starts
so that the last return will go to `fact-done`. The `val` register,
which holds the result of the factorial computation, is not saved before
the recursive call, because the old contents of `val` is not useful
after the subroutine returns. Only the new value, which is the value
produced by the subcomputation, is needed.
[]{#Figure 5.11 label="Figure 5.11"}
![image](fig/chap5/Fig5.11a.pdf){width="106mm"}
**Figure 5.11:** A recursive factorial machine.
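Since the controller appears above only as part of the image, the following sketch gives a text transcription in the register-machine language; treat it as an approximation of the figure rather than a verbatim copy:

::: scheme
(controller
   (assign continue (label fact-done))   ; set up final return address
 fact-loop
   (test (op =) (reg n) (const 1))
   (branch (label base-case))
   ;; Set up for the recursive call by saving n and continue.
   ;; Set up continue so that the computation will resume
   ;; at after-fact when the subroutine returns.
   (save continue)
   (save n)
   (assign n (op -) (reg n) (const 1))
   (assign continue (label after-fact))
   (goto (label fact-loop))
 after-fact
   (restore n)
   (restore continue)
   (assign val (op *) (reg n) (reg val)) ; val now contains n(n - 1)!
   (goto (reg continue))                 ; return to caller
 base-case
   (assign val (const 1))                ; base case: 1! = 1
   (goto (reg continue))                 ; return to caller
 fact-done)
:::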
Although in principle the factorial computation requires an infinite
machine, the machine in [Figure 5.11](#Figure 5.11) is actually finite
except for the stack, which is potentially unbounded. Any particular
physical implementation of a stack, however, will be of finite size, and
this will limit the depth of recursive calls that can be handled by the
machine. This implementation of factorial illustrates the general
strategy for realizing recursive algorithms as ordinary register
machines augmented by stacks. When a recursive subproblem is
encountered, we save on the stack the registers whose current values
will be required after the subproblem is solved, solve the recursive
subproblem, then restore the saved registers and continue execution on
the main problem. The `continue` register must always be saved. Whether
there are other registers that need to be saved depends on the
particular machine, since not all recursive computations need the
original values of registers that are modified during solution of the
subproblem (see [Exercise 5.4](#Exercise 5.4)).
#### A double recursion {#a-double-recursion .unnumbered}
Let us examine a more complex recursive process, the tree-recursive
computation of the Fibonacci numbers, which we introduced in [Section
1.2.2](#Section 1.2.2):
::: scheme
(define (fib n) (if (\< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
:::
Just as with factorial, we can implement the recursive Fibonacci
computation as a register machine with registers `n`, `val`, and
`continue`. The machine is more complex than the one for factorial,
because there are two places in the controller sequence where we need to
perform recursive calls---once to compute ${\rm Fib}(n - 1)$ and once to
compute ${\rm Fib}(n - 2)$. To set up for each of these calls, we save
the registers whose values will be needed later, set the `n` register to
the number whose Fib we need to compute recursively ($n - 1$ or
$n - 2$), and assign to `continue` the entry point in the main sequence
to which to return (`afterfib-n-1` or `afterfib-n-2`, respectively). We
then go to `fib-loop`. When we return from the recursive call, the
answer is in `val`. [Figure 5.12](#Figure 5.12) shows the controller
sequence for this machine.
> **[]{#Figure 5.12 label="Figure 5.12"}Figure 5.12:** $\downarrow$
> Controller for a machine to compute Fibonacci numbers.
>
> ::: scheme
> (controller (assign continue (label fib-done)) fib-loop (test (op \<)
> (reg n) (const 2)) (branch (label immediate-answer)) [;; set up to
> compute Fib$(n-1)$]{.roman} (save continue) (assign continue (label
> afterfib-n-1)) (save n) [; save old value of `n`]{.roman} (assign n
> (op -) (reg n) (const 1)) [; clobber `n` to `n-1`]{.roman} (goto
> (label fib-loop)) [; perform recursive call]{.roman} afterfib-n-1
> [; upon return, `val` contains Fib$(n-1)$]{.roman} (restore n)
> (restore continue) [;; set up to compute Fib$(n - 2)$]{.roman}
> (assign n (op -) (reg n) (const 2)) (save continue) (assign continue
> (label afterfib-n-2)) (save val) [; save Fib$(n-1)$]{.roman} (goto
> (label fib-loop)) afterfib-n-2 [; upon return, `val` contains
> Fib$(n-2)$]{.roman} (assign n (reg val)) [; `n` now contains
> Fib$(n-2)$]{.roman} (restore val) [; `val` now contains
> Fib$(n-1)$]{.roman} (restore continue) (assign val [; Fib$(n-1)$ +
> Fib$(n-2)$]{.roman} (op +) (reg val) (reg n)) (goto (reg continue))
> [; return to caller, answer is in `val`]{.roman} immediate-answer
> (assign val (reg n)) [; base case: Fib$(n) = n$]{.roman} (goto (reg
> continue)) fib-done)
> :::
> **[]{#Exercise 5.4 label="Exercise 5.4"}Exercise 5.4:** Specify
> register machines that implement each of the following procedures. For
> each machine, write a controller instruction sequence and draw a
> diagram showing the data paths.
>
> a. Recursive exponentiation:
>
> ::: scheme
> (define (expt b n) (if (= n 0) 1 (\* b (expt b (- n 1)))))
> :::
>
> b. Iterative exponentiation:
>
> ::: scheme
> (define (expt b n) (define (expt-iter counter product) (if (=
> counter 0) product (expt-iter (- counter 1) (\* b product))))
> (expt-iter n 1))
> :::
> **[]{#Exercise 5.5 label="Exercise 5.5"}Exercise 5.5:** Hand-simulate
> the factorial and Fibonacci machines, using some nontrivial input
> (requiring execution of at least one recursive call). Show the
> contents of the stack at each significant point in the execution.
> **[]{#Exercise 5.6 label="Exercise 5.6"}Exercise 5.6:** Ben Bitdiddle
> observes that the Fibonacci machine's controller sequence has an extra
> `save` and an extra `restore`, which can be removed to make a faster
> machine. Where are these instructions?
### Instruction Summary {#Section 5.1.5}
A controller instruction in our register-machine language has one of the
following forms, where each $\langle$*input*$_i\rangle$ is either
`(reg `$\langle$*`register-name`*$\rangle$`)` or
`(const `$\langle$*`constant-value`*$\rangle$`)`. These instructions
were introduced in [Section 5.1.1](#Section 5.1.1):
::: scheme
(assign
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$
(reg
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$ ))
(assign
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$
(const
$\color{SchemeDark}\langle$ *constant-value* $\color{SchemeDark}\rangle$ ))
(assign
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$
(op
$\color{SchemeDark}\langle$ *operation-name* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
(perform (op
$\color{SchemeDark}\langle$ *operation-name* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
(test (op
$\color{SchemeDark}\langle$ *operation-name* $\color{SchemeDark}\rangle$ )
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\dots$
$\color{SchemeDark}\langle$ *input* $\color{SchemeDark}_{\hbox{\ttfamily\itshape\scriptsize n}}\rangle$ )
(branch (label
$\color{SchemeDark}\langle$ *label-name* $\color{SchemeDark}\rangle$ ))
(goto (label
$\color{SchemeDark}\langle$ *label-name* $\color{SchemeDark}\rangle$ ))
:::
The use of registers to hold labels was introduced in [Section
5.1.3](#Section 5.1.3):
::: scheme
(assign
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$
(label
$\color{SchemeDark}\langle$ *label-name* $\color{SchemeDark}\rangle$ ))
(goto (reg
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$ ))
:::
Instructions to use the stack were introduced in [Section
5.1.4](#Section 5.1.4):
::: scheme
(save
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$ )
(restore
$\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$ )
:::
The only kind of $\langle$*constant-value*$\rangle$ we have seen so far
is a number, but later we will use strings, symbols, and lists. For
example,
::: scheme
(const \"abc\") [is the string]{.roman} \"abc\", (const abc) [is the
symbol]{.roman} abc, (const (a b c)) [is the list]{.roman} (a b c),
[and]{.roman} (const ()) [is the empty list.]{.roman}
:::
## A Register-Machine Simulator {#Section 5.2}
In order to gain a good understanding of the design of register
machines, we must test the machines we design to see if they perform as
expected. One way to test a design is to hand-simulate the operation of
the controller, as in [Exercise 5.5](#Exercise 5.5). But this is
extremely tedious for all but the simplest machines. In this section we
construct a simulator for machines described in the register-machine
language. The simulator is a Scheme program with four interface
procedures. The first uses a description of a register machine to
construct a model of the machine (a data structure whose parts
correspond to the parts of the machine to be simulated), and the other
three allow us to simulate the machine by manipulating the model:
> ::: scheme
> (make-machine
> $\color{SchemeDark}\langle$ *register-names* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *operations* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *controller* $\color{SchemeDark}\rangle$ )
> :::
>
> constructs and returns a model of the machine with the given
> registers, operations, and controller.
>
> ::: scheme
> (set-register-contents!
> $\color{SchemeDark}\langle\kern0.08em$ *machine-model* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *value* $\color{SchemeDark}\rangle$ )
> :::
>
> stores a value in a simulated register in the given machine.
>
> ::: scheme
> (get-register-contents
> $\color{SchemeDark}\langle\kern0.08em$ *machine-model* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *register-name* $\color{SchemeDark}\rangle$ )
> :::
>
> returns the contents of a simulated register in the given machine.
>
> ::: scheme
> (start
> $\color{SchemeDark}\langle\kern0.08em$ *machine-model* $\color{SchemeDark}\rangle$ )
> :::
>
> simulates the execution of the given machine, starting from the
> beginning of the controller sequence and stopping when it reaches the
> end of the sequence.
As an example of how these procedures are used, we can define
`gcd-machine` to be a model of the gcd machine of [Section
5.1.1](#Section 5.1.1) as follows:
::: scheme
(define gcd-machine (make-machine '(a b t) (list (list 'rem remainder)
(list '= =)) '(test-b (test (op =) (reg b) (const 0)) (branch (label
gcd-done)) (assign t (op rem) (reg a) (reg b)) (assign a (reg b))
(assign b (reg t)) (goto (label test-b)) gcd-done)))
:::
The first argument to `make-machine` is a list of register names. The
next argument is a table (a list of two-element lists) that pairs each
operation name with a Scheme procedure that implements the operation
(that is, produces the same output value given the same input values).
The last argument specifies the controller as a list of labels and
machine instructions, as in [Section 5.1](#Section 5.1).
To compute gcds with this machine, we set the input
registers, start the machine, and examine the result when the simulation
terminates:
::: scheme
(set-register-contents! gcd-machine 'a 206) *done*
(set-register-contents! gcd-machine 'b 40) *done* (start gcd-machine)
*done* (get-register-contents gcd-machine 'a) *2*
:::
This computation will run much more slowly than a `gcd` procedure
written in Scheme, because we will simulate low-level machine
instructions, such as `assign`, by much more complex operations.
> **[]{#Exercise 5.7 label="Exercise 5.7"}Exercise 5.7:** Use the
> simulator to test the machines you designed in [Exercise
> 5.4](#Exercise 5.4).
### The Machine Model {#Section 5.2.1}
The machine model generated by `make-machine` is represented as a
procedure with local state using the message-passing techniques
developed in [Chapter 3](#Chapter 3). To build this model,
`make-machine` begins by calling the procedure `make-new-machine` to
construct the parts of the machine model that are common to all register
machines. This basic machine model constructed by `make-new-machine` is
essentially a container for some registers and a stack, together with an
execution mechanism that processes the controller instructions one by
one.
`make-machine` then extends this basic model (by sending it messages) to
include the registers, operations, and controller of the particular
machine being defined. First it allocates a register in the new machine
for each of the supplied register names and installs the designated
operations in the machine. Then it uses an *assembler* (described below
in [Section 5.2.2](#Section 5.2.2)) to transform the controller list
into instructions for the new machine and installs these as the
machine's instruction sequence. `make-machine` returns as its value the
modified machine model.
::: scheme
(define (make-machine register-names ops controller-text) (let ((machine
(make-new-machine))) (for-each (lambda (register-name) ((machine
'allocate-register) register-name)) register-names) ((machine
'install-operations) ops) ((machine 'install-instruction-sequence)
(assemble controller-text machine)) machine))
:::
#### Registers {#registers .unnumbered}
We will represent a register as a procedure with local state, as in
[Chapter 3](#Chapter 3). The procedure `make-register` creates a
register that holds a value that can be accessed or changed:
::: scheme
(define (make-register name) (let ((contents '\*unassigned\*)) (define
(dispatch message) (cond ((eq? message 'get) contents) ((eq? message
'set) (lambda (value) (set! contents value))) (else (error \"Unknown
request: REGISTER\" message)))) dispatch))
:::
The following procedures are used to access registers:
::: scheme
(define (get-contents register) (register 'get)) (define (set-contents!
register value) ((register 'set) value))
:::
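A brief usage sketch (the register name `a` is arbitrary and the example is not from the text):

::: scheme
(define r (make-register 'a))
(get-contents r)       ; *unassigned*
(set-contents! r 42)
(get-contents r)       ; 42
:::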
#### The stack {#the-stack .unnumbered}
We can also represent a stack as a procedure with local state. The
procedure `make-stack` creates a stack whose local state consists of a
list of the items on the stack. A stack accepts requests to `push` an
item onto the stack, to `pop` the top item off the stack and return it,
and to `initialize` the stack to empty.
::: scheme
(define (make-stack) (let ((s '())) (define (push x) (set! s (cons x
s))) (define (pop) (if (null? s) (error \"Empty stack: POP\") (let ((top
(car s))) (set! s (cdr s)) top))) (define (initialize) (set! s '())
'done) (define (dispatch message) (cond ((eq? message 'push) push) ((eq?
message 'pop) (pop)) ((eq? message 'initialize) (initialize)) (else
(error \"Unknown request: STACK\" message)))) dispatch))
:::
The following procedures are used to access stacks:
::: scheme
(define (pop stack) (stack 'pop)) (define (push stack value) ((stack
'push) value))
:::
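Likewise, a brief hedged sketch of exercising a stack directly:

::: scheme
(define s (make-stack))
(push s 1)
(push s 2)
(pop s)            ; 2 -- the item pushed last comes off first
(pop s)            ; 1
(s 'initialize)    ; done
:::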
#### The basic machine {#the-basic-machine .unnumbered}
The `make-new-machine` procedure, shown in [Figure 5.13](#Figure 5.13),
constructs an object whose local state consists of a stack, an initially
empty instruction sequence, a list of operations that initially contains
an operation to initialize the stack, and a *register table* that
initially contains two registers, named `flag` and `pc` (for "program
counter”). The internal procedure `allocate-register` adds new entries
to the register table, and the internal procedure `lookup-register`
looks up registers in the table.
The `flag` register is used to control branching in the simulated
machine. `test` instructions set the contents of `flag` to the result of
the test (true or false). `branch` instructions decide whether or not to
branch by examining the contents of `flag`.
The `pc` register determines the sequencing of instructions as the
machine runs. This sequencing is implemented by the internal procedure
`execute`. In the simulation model, each machine instruction is a data
structure that includes a procedure of no arguments, called the
*instruction execution procedure*, such that calling this procedure
simulates executing the instruction. As the simulation runs, `pc` points
to the place in the instruction sequence beginning with the next
instruction to be executed. `execute` gets that instruction, executes it
by calling the instruction execution procedure, and repeats this cycle
until there are no more instructions to execute (i.e., until `pc` points
to the end of the instruction sequence).
> **[]{#Figure 5.13 label="Figure 5.13"}Figure 5.13:** $\downarrow$ The
> `make-new-machine` procedure, which implements the basic machine
> model.
>
> ::: scheme
> (define (make-new-machine) (let ((pc (make-register 'pc)) (flag
> (make-register 'flag)) (stack (make-stack)) (the-instruction-sequence
> '())) (let ((the-ops (list (list 'initialize-stack (lambda () (stack
> 'initialize))))) (register-table (list (list 'pc pc) (list 'flag
> flag)))) (define (allocate-register name) (if (assoc name
> register-table) (error \"Multiply defined register: \" name) (set!
> register-table (cons (list name (make-register name))
> register-table))) 'register-allocated) (define (lookup-register name)
> (let ((val (assoc name register-table))) (if val (cadr val) (error
> \"Unknown register:\" name)))) (define (execute) (let ((insts
> (get-contents pc))) (if (null? insts) 'done (begin
> ((instruction-execution-proc (car insts))) (execute))))) (define
> (dispatch message) (cond ((eq? message 'start) (set-contents! pc
> the-instruction-sequence) (execute)) ((eq? message
> 'install-instruction-sequence) (lambda (seq) (set!
> the-instruction-sequence seq))) ((eq? message 'allocate-register)
> allocate-register) ((eq? message 'get-register) lookup-register) ((eq?
> message 'install-operations) (lambda (ops) (set! the-ops (append
> the-ops ops)))) ((eq? message 'stack) stack) ((eq? message
> 'operations) the-ops) (else (error \"Unknown request: MACHINE\"
> message)))) dispatch)))
> :::
As part of its operation, each instruction execution procedure modifies
`pc` to indicate the next instruction to be executed. `branch` and
`goto` instructions change `pc` to point to the new destination. All
other instructions simply advance `pc`, making it point to the next
instruction in the sequence. Observe that each call to `execute` calls
`execute` again, but this does not produce an infinite loop because
running the instruction execution procedure changes the contents of
`pc`.
`make-new-machine` returns a `dispatch` procedure that implements
message-passing access to the internal state. Notice that starting the
machine is accomplished by setting `pc` to the beginning of the
instruction sequence and calling `execute`.
For convenience, we provide an alternate procedural interface to a
machine's `start` operation, as well as procedures to set and examine
register contents, as specified at the beginning of [Section
5.2](#Section 5.2):
::: scheme
(define (start machine) (machine 'start)) (define (get-register-contents
machine register-name) (get-contents (get-register machine
register-name))) (define (set-register-contents! machine register-name
value) (set-contents! (get-register machine register-name) value) 'done)
:::
These procedures (and many procedures in [Section 5.2.2](#Section 5.2.2)
and [Section 5.2.3](#Section 5.2.3)) use the following to look up the
register with a given name in a given machine:
::: scheme
(define (get-register machine reg-name) ((machine 'get-register)
reg-name))
:::
### The Assembler {#Section 5.2.2}
The assembler transforms the sequence of controller expressions for a
machine into a corresponding list of machine instructions, each with its
execution procedure. Overall, the assembler is much like the evaluators
we studied in [Chapter 4](#Chapter 4)---there is an input language (in
this case, the register-machine language) and we must perform an
appropriate action for each type of expression in the language.
The technique of producing an execution procedure for each instruction
is just what we used in [Section 4.1.7](#Section 4.1.7) to speed up the
evaluator by separating analysis from runtime execution. As we saw in
[Chapter 4](#Chapter 4), much useful analysis of Scheme expressions
could be performed without knowing the actual values of variables. Here,
analogously, much useful analysis of register-machine-language
expressions can be performed without knowing the actual contents of
machine registers. For example, we can replace references to registers
by pointers to the register objects, and we can replace references to
labels by pointers to the place in the instruction sequence that the
label designates.
Before it can generate the instruction execution procedures, the
assembler must know what all the labels refer to, so it begins by
scanning the controller text to separate the labels from the
instructions. As it scans the text, it constructs both a list of
instructions and a table that associates each label with a pointer into
that list. Then the assembler augments the instruction list by inserting
the execution procedure for each instruction.
The `assemble` procedure is the main entry to the assembler. It takes
the controller text and the machine model as arguments and returns the
instruction sequence to be stored in the model. `assemble` calls
`extract-labels` to build the initial instruction list and label table
from the supplied controller text. The second argument to
`extract-labels` is a procedure to be called to process these results:
This procedure uses `update-insts!` to generate the instruction
execution procedures and insert them into the instruction list, and
returns the modified list.
::: scheme
(define (assemble controller-text machine) (extract-labels
controller-text (lambda (insts labels) (update-insts! insts labels
machine) insts)))
:::
`extract-labels` takes as arguments a list `text` (the sequence of
controller instruction expressions) and a `receive` procedure. `receive`
will be called with two values: (1) a list `insts` of instruction data
structures, each containing an instruction from `text`; and (2) a table
called `labels`, which associates each label from `text` with the
position in the list `insts` that the label designates.
::: scheme
(define (extract-labels text receive) (if (null? text) (receive '() '())
(extract-labels (cdr text) (lambda (insts labels) (let ((next-inst (car
text))) (if (symbol? next-inst) (receive insts (cons (make-label-entry
next-inst insts) labels)) (receive (cons (make-instruction next-inst)
insts) labels)))))))
:::
`extract-labels` works by sequentially scanning the elements of the
`text` and accumulating the `insts` and the `labels`. If an element is a
symbol (and thus a label) an appropriate entry is added to the `labels`
table. Otherwise the element is accumulated onto the `insts` list.[^289]
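To make the two values concrete, here is a minimal hedged experiment (the three-line controller text is invented, and it assumes `make-instruction` and `make-label-entry`, defined just below, are already loaded). The `receive` procedure simply returns what it is handed:

::: scheme
(extract-labels '(start
                  (assign a (const 3))
                  (goto (label start)))
                (lambda (insts labels) (list insts labels)))
;; insts is a two-element list of instruction structures, each pairing
;; an instruction text with a slot (still '()) for its execution
;; procedure; labels is a one-entry table whose single pair associates
;; the symbol start with that same two-instruction list.
:::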
`update-insts!` modifies the instruction list, which initially contains
only the text of the instructions, to include the corresponding
execution procedures:
::: scheme
(define (update-insts! insts labels machine) (let ((pc (get-register
machine 'pc)) (flag (get-register machine 'flag)) (stack (machine
'stack)) (ops (machine 'operations))) (for-each (lambda (inst)
(set-instruction-execution-proc! inst (make-execution-procedure
(instruction-text inst) labels machine pc flag stack ops))) insts)))
:::
The machine instruction data structure simply pairs the instruction text
with the corresponding execution procedure. The execution procedure is
not yet available when `extract-labels` constructs the instruction, and
is inserted later by `update-insts!`.
::: scheme
(define (make-instruction text) (cons text '())) (define
(instruction-text inst) (car inst)) (define (instruction-execution-proc
inst) (cdr inst)) (define (set-instruction-execution-proc! inst proc)
(set-cdr! inst proc))
:::
The instruction text is not used by our simulator, but it is handy to
keep around for debugging (see [Exercise 5.16](#Exercise 5.16)).
Elements of the label table are pairs:
::: scheme
(define (make-label-entry label-name insts) (cons label-name insts))
:::
Entries will be looked up in the table with
::: scheme
(define (lookup-label labels label-name) (let ((val (assoc label-name
labels))) (if val (cdr val) (error \"Undefined label: ASSEMBLE\"
label-name))))
:::
> **[]{#Exercise 5.8 label="Exercise 5.8"}Exercise 5.8:** The following
> register-machine code is ambiguous, because the label `here` is
> defined more than once:
>
> ::: scheme
> start (goto (label here)) here (assign a (const 3)) (goto (label
> there)) here (assign a (const 4)) (goto (label there)) there
> :::
>
> With the simulator as written, what will the contents of register `a`
> be when control reaches `there`? Modify the `extract-labels` procedure
> so that the assembler will signal an error if the same label name is
> used to indicate two different locations.
### Generating Execution Procedures for Instructions {#Section 5.2.3}
The assembler calls `make-execution-procedure` to generate the execution
procedure for an instruction. Like the `analyze` procedure in the
evaluator of [Section 4.1.7](#Section 4.1.7), this dispatches on the
type of instruction to generate the appropriate execution procedure.
::: scheme
(define (make-execution-procedure inst labels machine pc flag stack ops)
(cond ((eq? (car inst) 'assign) (make-assign inst machine labels ops
pc)) ((eq? (car inst) 'test) (make-test inst machine labels ops flag
pc)) ((eq? (car inst) 'branch) (make-branch inst machine labels flag
pc)) ((eq? (car inst) 'goto) (make-goto inst machine labels pc)) ((eq?
(car inst) 'save) (make-save inst machine stack pc)) ((eq? (car inst)
'restore) (make-restore inst machine stack pc)) ((eq? (car inst)
'perform) (make-perform inst machine labels ops pc)) (else (error
\"Unknown instruction type: ASSEMBLE\" inst))))
:::
For each type of instruction in the register-machine language, there is
a generator that builds an appropriate execution procedure. The details
of these procedures determine both the syntax and meaning of the
individual instructions in the register-machine language. We use data
abstraction to isolate the detailed syntax of register-machine
expressions from the general execution mechanism, as we did for
evaluators in [Section 4.1.2](#Section 4.1.2), by using syntax
procedures to extract and classify the parts of an instruction.
#### `assign` instructions {#assign-instructions .unnumbered}
The `make-assign` procedure handles `assign` instructions:
::: scheme
(define (make-assign inst machine labels operations pc) (let ((target
(get-register machine (assign-reg-name inst))) (value-exp
(assign-value-exp inst))) (let ((value-proc (if (operation-exp?
value-exp) (make-operation-exp value-exp machine labels operations)
(make-primitive-exp (car value-exp) machine labels)))) (lambda () [;
execution procedure for `assign`]{.roman} (set-contents! target
(value-proc)) (advance-pc pc)))))
:::
`make-assign` extracts the target register name (the second element of
the instruction) and the value expression (the rest of the list that
forms the instruction) from the `assign` instruction using the selectors
::: scheme
(define (assign-reg-name assign-instruction) (cadr assign-instruction))
(define (assign-value-exp assign-instruction) (cddr assign-instruction))
:::
The register name is looked up with `get-register` to produce the target
register object. The value expression is passed to `make-operation-exp`
if the value is the result of an operation, and to `make-primitive-exp`
otherwise. These procedures (shown below) parse the value expression and
produce an execution procedure for the value. This is a procedure of no
arguments, called `value-proc`, which will be evaluated during the
simulation to produce the actual value to be assigned to the register.
Notice that the work of looking up the register name and parsing the
value expression is performed just once, at assembly time, not every
time the instruction is simulated. This saving of work is the reason we
use execution procedures, and corresponds directly to the saving in work
we obtained by separating program analysis from execution in the
evaluator of [Section 4.1.7](#Section 4.1.7).
The result returned by `make-assign` is the execution procedure for the
`assign` instruction. When this procedure is called (by the machine
model's `execute` procedure), it sets the contents of the target
register to the result obtained by executing `value-proc`. Then it
advances the `pc` to the next instruction by running the procedure
::: scheme
(define (advance-pc pc) (set-contents! pc (cdr (get-contents pc))))
:::
`advance-pc` is the normal termination for all instructions except
`branch` and `goto`.
#### `test`, `branch`, and `goto` instructions {#test-branch-and-goto-instructions .unnumbered}
`make-test` handles `test` instructions in a similar way. It extracts
the expression that specifies the condition to be tested and generates
an execution procedure for it. At simulation time, the procedure for the
condition is called, the result is assigned to the `flag` register, and
the `pc` is advanced:
::: scheme
(define (make-test inst machine labels operations flag pc) (let
((condition (test-condition inst))) (if (operation-exp? condition) (let
((condition-proc (make-operation-exp condition machine labels
operations))) (lambda () (set-contents! flag (condition-proc))
(advance-pc pc))) (error \"Bad TEST instruction: ASSEMBLE\" inst))))
(define (test-condition test-instruction) (cdr test-instruction))
:::
The execution procedure for a `branch` instruction checks the contents
of the `flag` register and either sets the contents of the `pc` to the
branch destination (if the branch is taken) or else just advances the
`pc` (if the branch is not taken). Notice that the indicated destination
in a `branch` instruction must be a label, and the `make-branch`
procedure enforces this. Notice also that the label is looked up at
assembly time, not each time the `branch` instruction is simulated.
::: scheme
(define (make-branch inst machine labels flag pc) (let ((dest
(branch-dest inst))) (if (label-exp? dest) (let ((insts (lookup-label
labels (label-exp-label dest)))) (lambda () (if (get-contents flag)
(set-contents! pc insts) (advance-pc pc)))) (error \"Bad BRANCH
instruction: ASSEMBLE\" inst)))) (define (branch-dest
branch-instruction) (cadr branch-instruction))
:::
A `goto` instruction is similar to a branch, except that the destination
may be specified either as a label or as a register, and there is no
condition to check---the `pc` is always set to the new destination.
::: scheme
(define (make-goto inst machine labels pc) (let ((dest (goto-dest
inst))) (cond ((label-exp? dest) (let ((insts (lookup-label labels
(label-exp-label dest)))) (lambda () (set-contents! pc insts))))
((register-exp? dest) (let ((reg (get-register machine (register-exp-reg
dest)))) (lambda () (set-contents! pc (get-contents reg))))) (else
(error \"Bad GOTO instruction: ASSEMBLE\" inst))))) (define (goto-dest
goto-instruction) (cadr goto-instruction))
:::
#### Other instructions {#other-instructions .unnumbered}
The stack instructions `save` and `restore` simply use the stack with
the designated register and advance the `pc`:
::: scheme
(define (make-save inst machine stack pc) (let ((reg (get-register
machine (stack-inst-reg-name inst)))) (lambda () (push stack
(get-contents reg)) (advance-pc pc)))) (define (make-restore inst
machine stack pc) (let ((reg (get-register machine (stack-inst-reg-name
inst)))) (lambda () (set-contents! reg (pop stack)) (advance-pc pc))))
(define (stack-inst-reg-name stack-instruction) (cadr
stack-instruction))
:::
The final instruction type, handled by `make-perform`, generates an
execution procedure for the action to be performed. At simulation time,
the action procedure is executed and the `pc` advanced.
::: scheme
(define (make-perform inst machine labels operations pc) (let ((action
(perform-action inst))) (if (operation-exp? action) (let ((action-proc
(make-operation-exp action machine labels operations))) (lambda ()
(action-proc) (advance-pc pc))) (error \"Bad PERFORM instruction:
ASSEMBLE\" inst)))) (define (perform-action inst) (cdr inst))
:::
#### Execution procedures for subexpressions {#execution-procedures-for-subexpressions .unnumbered}
The value of a `reg`, `label`, or `const` expression may be needed for
assignment to a register (`make-assign`) or for input to an operation
(`make-operation-exp`, below). The following procedure generates
execution procedures to produce values for these expressions during the
simulation:
::: scheme
(define (make-primitive-exp exp machine labels) (cond ((constant-exp?
exp) (let ((c (constant-exp-value exp))) (lambda () c))) ((label-exp?
exp) (let ((insts (lookup-label labels (label-exp-label exp)))) (lambda
() insts))) ((register-exp? exp) (let ((r (get-register machine
(register-exp-reg exp)))) (lambda () (get-contents r)))) (else (error
\"Unknown expression type: ASSEMBLE\" exp))))
:::
The syntax of `reg`, `label`, and `const` expressions is determined by
::: scheme
(define (register-exp? exp) (tagged-list? exp 'reg)) (define
(register-exp-reg exp) (cadr exp)) (define (constant-exp? exp)
(tagged-list? exp 'const)) (define (constant-exp-value exp) (cadr exp))
(define (label-exp? exp) (tagged-list? exp 'label)) (define
(label-exp-label exp) (cadr exp))
:::
`assign`, `perform`, and `test` instructions may include the application
of a machine operation (specified by an `op` expression) to some
operands (specified by `reg` and `const` expressions). The following
procedure produces an execution procedure for an "operation
expression"---a list containing the operation and operand expressions
from the instruction:
::: scheme
(define (make-operation-exp exp machine labels operations) (let ((op
(lookup-prim (operation-exp-op exp) operations)) (aprocs (map (lambda
(e) (make-primitive-exp e machine labels)) (operation-exp-operands
exp)))) (lambda () (apply op (map (lambda (p) (p)) aprocs)))))
:::
The syntax of operation expressions is determined by
::: scheme
(define (operation-exp? exp) (and (pair? exp) (tagged-list? (car exp)
'op))) (define (operation-exp-op operation-exp) (cadr (car
operation-exp))) (define (operation-exp-operands operation-exp) (cdr
operation-exp))
:::
Observe that the treatment of operation expressions is very much like
the treatment of procedure applications by the `analyze-application`
procedure in the evaluator of [Section 4.1.7](#Section 4.1.7) in that we
generate an execution procedure for each operand. At simulation time, we
call the operand procedures and apply the Scheme procedure that
simulates the operation to the resulting values. The simulation
procedure is found by looking up the operation name in the operation
table for the machine:
::: scheme
(define (lookup-prim symbol operations) (let ((val (assoc symbol
operations))) (if val (cadr val) (error \"Unknown operation: ASSEMBLE\"
symbol))))
:::
> **[]{#Exercise 5.9 label="Exercise 5.9"}Exercise 5.9:** The treatment
> of machine operations above permits them to operate on labels as well
> as on constants and the contents of registers. Modify the
> expression-processing procedures to enforce the condition that
> operations can be used only with registers and constants.
> **[]{#Exercise 5.10 label="Exercise 5.10"}Exercise 5.10:** Design a
> new syntax for register-machine instructions and modify the simulator
> to use your new syntax. Can you implement your new syntax without
> changing any part of the simulator except the syntax procedures in
> this section?
> **[]{#Exercise 5.11 label="Exercise 5.11"}Exercise 5.11:** When we
> introduced `save` and `restore` in [Section 5.1.4](#Section 5.1.4), we
> didn't specify what would happen if you tried to restore a register
> that was not the last one saved, as in the sequence
>
> ::: scheme
> (save y) (save x) (restore y)
> :::
>
> There are several reasonable possibilities for the meaning of
> `restore`:
>
> a. `(restore y)` puts into `y` the last value saved on the stack,
> regardless of what register that value came from. This is the way
> our simulator behaves. Show how to take advantage of this behavior
> to eliminate one instruction from the Fibonacci machine of
> [Section 5.1.4](#Section 5.1.4) ([Figure 5.12](#Figure 5.12)).
>
> b. `(restore y)` puts into `y` the last value saved on the stack, but
> only if that value was saved from `y`; otherwise, it signals an
> error. Modify the simulator to behave this way. You will have to
> change `save` to put the register name on the stack along with the
> value.
>
> c. `(restore y)` puts into `y` the last value saved from `y`
> regardless of what other registers were saved after `y` and not
> restored. Modify the simulator to behave this way. You will have
> to associate a separate stack with each register. You should make
> the `initialize-stack` operation initialize all the register
> stacks.
> **[]{#Exercise 5.12 label="Exercise 5.12"}Exercise 5.12:** The
> simulator can be used to help determine the data paths required for
> implementing a machine with a given controller. Extend the assembler
> to store the following information in the machine model:
>
> - a list of all instructions, with duplicates removed, sorted by
> instruction type (`assign`, `goto`, and so on);
>
> - a list (without duplicates) of the registers used to hold entry
> points (these are the registers referenced by `goto`
> instructions);
>
> - a list (without duplicates) of the registers that are `save`d or
> `restore`d;
>
> - for each register, a list (without duplicates) of the sources from
> which it is assigned (for example, the sources for register `val`
> in the factorial machine of [Figure 5.11](#Figure 5.11) are
> `(const 1)` and `((op *) (reg n) (reg val))`).
>
> Extend the message-passing interface to the machine to provide access
> to this new information. To test your analyzer, define the Fibonacci
> machine from [Figure 5.12](#Figure 5.12) and examine the lists you
> constructed.
> **[]{#Exercise 5.13 label="Exercise 5.13"}Exercise 5.13:** Modify the
> simulator so that it uses the controller sequence to determine what
> registers the machine has rather than requiring a list of registers as
> an argument to `make-machine`. Instead of pre-allocating the registers
> in `make-machine`, you can allocate them one at a time when they are
> first seen during assembly of the instructions.
### Monitoring Machine Performance {#Section 5.2.4}
Simulation is useful not only for verifying the correctness of a
proposed machine design but also for measuring the machine's
performance. For example, we can install in our simulation program a
"meter" that measures the number of stack operations used in a
computation. To do this, we modify our simulated stack to keep track of
the number of times registers are saved on the stack and the maximum
depth reached by the stack, and add a message to the stack's interface
that prints the statistics, as shown below. We also add an operation to
the basic machine model to print the stack statistics, by initializing
`the-ops` in `make-new-machine` to
::: scheme
(list (list 'initialize-stack (lambda () (stack 'initialize))) (list
'print-stack-statistics (lambda () (stack 'print-statistics))))
:::
Here is the new version of `make-stack`:
::: scheme
(define (make-stack) (let ((s '()) (number-pushes 0) (max-depth 0)
(current-depth 0)) (define (push x) (set! s (cons x s)) (set!
number-pushes (+ 1 number-pushes)) (set! current-depth (+ 1
current-depth)) (set! max-depth (max current-depth max-depth))) (define
(pop) (if (null? s) (error \"Empty stack: POP\") (let ((top (car s)))
(set! s (cdr s)) (set! current-depth (- current-depth 1)) top))) (define
(initialize) (set! s '()) (set! number-pushes 0) (set! max-depth 0)
(set! current-depth 0) 'done) (define (print-statistics) (newline)
(display (list 'total-pushes '= number-pushes 'maximum-depth '=
max-depth))) (define (dispatch message) (cond ((eq? message 'push) push)
((eq? message 'pop) (pop)) ((eq? message 'initialize) (initialize))
((eq? message 'print-statistics) (print-statistics)) (else (error
\"Unknown request: STACK\" message)))) dispatch))
:::
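As a hedged usage sketch (it assumes the `gcd-machine` defined at the beginning of this section is rebuilt after the monitored `make-stack` above has been loaded):

::: scheme
(set-register-contents! gcd-machine 'a 206)
(set-register-contents! gcd-machine 'b 40)
(start gcd-machine)
((gcd-machine 'stack) 'print-statistics)
;; prints (total-pushes = 0 maximum-depth = 0), since the gcd machine
;; never saves a register; a machine that uses save and restore, such
;; as the factorial machine of figure 5.11, would show nonzero counts.
:::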
[Exercise 5.15](#Exercise 5.15) through [Exercise 5.19](#Exercise 5.19)
describe other useful monitoring and debugging features that can be
added to the register-machine simulator.
> **[]{#Exercise 5.14 label="Exercise 5.14"}Exercise 5.14:** Measure the
> number of pushes and the maximum stack depth required to compute $n!$
> for various small values of $n$ using the factorial machine shown in
> [Figure 5.11](#Figure 5.11). From your data determine formulas in
> terms of $n$ for the total number of push operations and the maximum
> stack depth used in computing $n!$ for any $n > 1$. Note that each of
> these is a linear function of $n$ and is thus determined by two
> constants. In order to get the statistics printed, you will have to
> augment the factorial machine with instructions to initialize the
> stack and print the statistics. You may want to also modify the
> machine so that it repeatedly reads a value for $n$, computes the
> factorial, and prints the result (as we did for the gcd
> machine in [Figure 5.4](#Figure 5.4)), so that you will not have to
> repeatedly invoke `get-register-contents`, `set-register-contents!`,
> and `start`.
> **[]{#Exercise 5.15 label="Exercise 5.15"}Exercise 5.15:** Add
> *instruction counting* to the register machine simulation. That is,
> have the machine model keep track of the number of instructions
> executed. Extend the machine model's interface to accept a new message
> that prints the value of the instruction count and resets the count to
> zero.
> **[]{#Exercise 5.16 label="Exercise 5.16"}Exercise 5.16:** Augment the
> simulator to provide for *instruction tracing*. That is, before each
> instruction is executed, the simulator should print the text of the
> instruction. Make the machine model accept `trace-on` and `trace-off`
> messages to turn tracing on and off.
> **[]{#Exercise 5.17 label="Exercise 5.17"}Exercise 5.17:** Extend the
> instruction tracing of [Exercise 5.16](#Exercise 5.16) so that before
> printing an instruction, the simulator prints any labels that
> immediately precede that instruction in the controller sequence. Be
> careful to do this in a way that does not interfere with instruction
> counting ([Exercise 5.15](#Exercise 5.15)). You will have to make the
> simulator retain the necessary label information.
> **[]{#Exercise 5.18 label="Exercise 5.18"}Exercise 5.18:** Modify the
> `make-register` procedure of [Section 5.2.1](#Section 5.2.1) so that
> registers can be traced. Registers should accept messages that turn
> tracing on and off. When a register is traced, assigning a value to
> the register should print the name of the register, the old contents
> of the register, and the new contents being assigned. Extend the
> interface to the machine model to permit you to turn tracing on and
> off for designated machine registers.
> **[]{#Exercise 5.19 label="Exercise 5.19"}Exercise 5.19:** Alyssa P.
> Hacker wants a *breakpoint* feature in the simulator to help her debug
> her machine designs. You have been hired to install this feature for
> her. She wants to be able to specify a place in the controller
> sequence where the simulator will stop and allow her to examine the
> state of the machine. You are to implement a procedure
>
> ::: scheme
> (set-breakpoint
> $\color{SchemeDark}\langle\kern0.08em$ *machine* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *label* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *n* $\color{SchemeDark}\rangle$ )
> :::
>
> that sets a breakpoint just before the $n^{\mathrm{th}}$ instruction
> after the given label. For example,
>
> ::: scheme
> (set-breakpoint gcd-machine 'test-b 4)
> :::
>
> installs a breakpoint in `gcd-machine` just before the assignment to
> register `a`. When the simulator reaches the breakpoint it should
> print the label and the offset of the breakpoint and stop executing
> instructions. Alyssa can then use `get-register-contents` and
> `set-register-contents!` to manipulate the state of the simulated
> machine. She should then be able to continue execution by saying
>
> ::: scheme
> (proceed-machine
> $\color{SchemeDark}\langle\kern0.08em$ *machine* $\color{SchemeDark}\rangle$ )
> :::
>
> She should also be able to remove a specific breakpoint by means of
>
> ::: scheme
> (cancel-breakpoint
> $\color{SchemeDark}\langle\kern0.08em$ *machine* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *label* $\color{SchemeDark}\rangle$
> $\color{SchemeDark}\langle$ *n* $\color{SchemeDark}\rangle$ )
> :::
>
> or to remove all breakpoints by means of
>
> ::: scheme
> (cancel-all-breakpoints
> $\color{SchemeDark}\langle\kern0.08em$ *machine* $\color{SchemeDark}\rangle$ )
> :::
## Storage Allocation and Garbage Collection {#Section 5.3}
In [Section 5.4](#Section 5.4), we will show how to implement a Scheme
evaluator as a register machine. In order to simplify the discussion, we
will assume that our register machines can be equipped with a
*list-structured memory*, in which the basic operations for manipulating
list-structured data are primitive. Postulating the existence of such a
memory is a useful abstraction when one is focusing on the mechanisms of
control in a Scheme interpreter, but this does not reflect a realistic
view of the actual primitive data operations of contemporary computers.
To obtain a more complete picture of how a Lisp system operates, we must
investigate how list structure can be represented in a way that is
compatible with conventional computer memories.
There are two considerations in implementing list structure. The first
is purely an issue of representation: how to represent the
"box-and-pointer" structure of Lisp pairs, using only the storage and
addressing capabilities of typical computer memories. The second issue
concerns the management of memory as a computation proceeds. The
operation of a Lisp system depends crucially on the ability to
continually create new data objects. These include objects that are
explicitly created by the Lisp procedures being interpreted as well as
structures created by the interpreter itself, such as environments and
argument lists. Although the constant creation of new data objects would
pose no problem on a computer with an infinite amount of rapidly
addressable memory, computer memories are available only in finite sizes
(more's the pity). Lisp systems thus provide an *automatic storage
allocation* facility to support the illusion of an infinite memory. When
a data object is no longer needed, the memory allocated to it is
automatically recycled and used to construct new data objects. There are
various techniques for providing such automatic storage allocation. The
method we shall discuss in this section is called *garbage collection*.
### Memory as Vectors {#Section 5.3.1}
A conventional computer memory can be thought of as an array of
cubbyholes, each of which can contain a piece of information. Each
cubbyhole has a unique name, called its *address* or *location*. Typical
memory systems provide two primitive operations: one that fetches the
data stored in a specified location and one that assigns new data to a
specified location. Memory addresses can be incremented to support
sequential access to some set of the cubbyholes. More generally, many
important data operations require that memory addresses be treated as
data, which can be stored in memory locations and manipulated in machine
registers. The representation of list structure is one application of
such *address arithmetic*.
To model computer memory, we use a new kind of data structure called a
*vector*. Abstractly, a vector is a compound data object whose
individual elements can be accessed by means of an integer index in an
amount of time that is independent of the index.[^290] In order to
describe memory operations, we use two primitive Scheme procedures for
manipulating vectors:
- `(vector-ref `$\langle$*`vector`*$\rangle$` `$\langle$*`n`*$\rangle$`)`
returns the $n^{\mathrm{th}}$ element of the vector.
- `(vector-set! `$\langle$*`vector`*$\rangle$` `$\langle$*`n`*$\rangle$` `$\langle$*`value`*$\rangle$`)`
sets the $n^{\mathrm{th}}$ element of the vector to the designated
value.
For example, if `v` is a vector, then `(vector-ref v 5)` gets the fifth
entry in the vector `v` and `(vector-set! v 5 7)` changes the value of
the fifth entry of the vector `v` to 7.[^291] For computer memory, this
access can be implemented through the use of address arithmetic to
combine a *base address* that specifies the beginning location of a
vector in memory with an *index* that specifies the offset of a
particular element of the vector.
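For concreteness, here is a minimal sketch of this idea in Scheme, assuming the entire memory is modeled as one large Scheme vector and that a stored vector is identified simply by its base address; the names `memory`, `fetch`, `store!`, `vref`, and `vset!` are illustrative and do not come from the text.
::: scheme
;; Model memory as one large Scheme vector; an address is just an index.
(define memory (make-vector 1024 0))
(define (fetch address) (vector-ref memory address))
(define (store! address value) (vector-set! memory address value))
;; A vector stored in memory is identified by its base address; element n
;; lives at address base + n, which is the address arithmetic described above.
(define (vref base n) (fetch (+ base n)))
(define (vset! base n value) (store! (+ base n) value))
:::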
#### Representing Lisp data {#representing-lisp-data .unnumbered}
We can use vectors to implement the basic pair structures required for a
list-structured memory. Let us imagine that computer memory is divided
into two vectors: `the/cars` and `the/cdrs`. We will represent list
structure as follows: A pointer to a pair is an index into the two
vectors. The `car` of the pair is the entry in `the/cars` with the
designated index, and the `cdr` of the pair is the entry in `the/cdrs`
with the designated index. We also need a representation for objects
other than pairs (such as numbers and symbols) and a way to distinguish
one kind of data from another. There are many methods of accomplishing
this, but they all reduce to using *typed pointers*, that is, to
extending the notion of "pointer" to include information on data
type.[^292] The data type enables the system to distinguish a pointer to
a pair (which consists of the "pair" data type and an index into the
memory vectors) from pointers to other kinds of data (which consist of
some other data type and whatever is being used to represent data of
that type). Two data objects are considered to be the same (`eq?`) if
their pointers are identical.[^293] [Figure 5.14](#Figure 5.14)
illustrates the use of this method to represent the list `((1 2) 3 4)`,
whose box-and-pointer diagram is also shown. We use letter prefixes to
denote the data-type information. Thus, a pointer to the pair with index
5 is denoted `p5`, the empty list is denoted by the pointer `e0`, and a
pointer to the number 4 is denoted `n4`. In the box-and-pointer diagram,
we have indicated at the lower left of each pair the vector index that
specifies where the `car` and `cdr` of the pair are stored. The blank
locations in `the/cars` and `the/cdrs` may contain parts of other list
structures (not of interest here).
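As a rough illustration (not part of the text's register-machine formulation), the pair operations might be modeled in Scheme as follows, ignoring type tags and treating a pair pointer as a bare index into the two vectors; the `free` index used here anticipates the allocation pointer discussed below, and all names are illustrative.
::: scheme
;; the-cars and the-cdrs modeled as ordinary Scheme vectors; free is the
;; index of the next unused location (type information omitted in this sketch).
(define memory-size 1000)
(define the-cars (make-vector memory-size))
(define the-cdrs (make-vector memory-size))
(define free 0)
(define (memory-car p) (vector-ref the-cars p))
(define (memory-cdr p) (vector-ref the-cdrs p))
(define (memory-cons x y)
  (let ((p free))
    (vector-set! the-cars p x)
    (vector-set! the-cdrs p y)
    (set! free (+ free 1))
    p))   ; the new pair pointer is the allocated index
:::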
A pointer to a number, such as `n4`, might consist of a type indicating
numeric data together with the actual representation of the number
4.[^294] To deal with numbers that are too large to be represented in
the fixed amount of space allocated for a single pointer, we could use a
distinct *bignum* data type, for which the pointer designates a list in
which the parts of the number are stored.[^295]
[]{#Figure 5.14 label="Figure 5.14"}
![image](fig/chap5/Fig5.14a.pdf){width="91mm"}
> **Figure 5.14:** Box-and-pointer and memory-vector representations of
> the list `((1 2) 3 4)`.
A symbol might be represented as a typed pointer that designates a
sequence of the characters that form the symbol's printed
representation. This sequence is constructed by the Lisp reader when the
character string is initially encountered in input. Since we want two
instances of a symbol to be recognized as the "same" symbol by `eq?` and
we want `eq?` to be a simple test for equality of pointers, we must
ensure that if the reader sees the same character string twice, it will
use the same pointer (to the same sequence of characters) to represent
both occurrences. To accomplish this, the reader maintains a table,
traditionally called the *obarray*, of all the symbols it has ever
encountered. When the reader encounters a character string and is about
to construct a symbol, it checks the obarray to see if it has ever
before seen the same character string. If it has not, it uses the
characters to construct a new symbol (a typed pointer to a new character
sequence) and enters this pointer in the obarray. If the reader has seen
the string before, it returns the symbol pointer stored in the obarray.
This process of replacing character strings by unique pointers is called
*interning* symbols.
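A minimal sketch of interning, assuming the obarray is modeled as an association list from character strings to symbol objects and that a "symbol object" is just a tagged copy of its characters; both assumptions are illustrative rather than taken from the text.
::: scheme
(define obarray '())
(define (make-symbol-object chars)
  (list 'symbol chars))   ; stands in for a typed pointer to the characters
(define (intern chars)
  (let ((entry (assoc chars obarray)))
    (if entry
        (cdr entry)                              ; seen before: reuse pointer
        (let ((sym (make-symbol-object chars)))  ; first time: make new symbol
          (set! obarray (cons (cons chars sym) obarray))
          sym))))
:::
Calling such an `intern` procedure twice on the same character string then yields pointers that are `eq?`, which is exactly the property the reader must guarantee.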
#### Implementing the primitive list operations {#implementing-the-primitive-list-operations .unnumbered}
Given the above representation scheme, we can replace each "primitive"
list operation of a register machine with one or more primitive vector
operations. We will use two registers, `the/cars` and `the/cdrs`, to
identify the memory vectors, and will assume that `vector/ref` and
`vector/set!` are available as primitive operations. We also assume that
numeric operations on pointers (such as incrementing a pointer, using a
pair pointer to index a vector, or adding two numbers) use only the
index portion of the typed pointer.
For example, we can make a register machine support the instructions
::: scheme
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(op car) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(op cdr) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
:::
if we implement these, respectively, as
::: scheme
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(op vector-ref) (reg the-cars) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(op vector-ref) (reg the-cdrs) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
:::
The instructions
::: scheme
(perform (op set-car!) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
(perform (op set-cdr!) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
:::
are implemented as
::: scheme
(perform (op vector-set!) (reg the-cars) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
(perform (op vector-set!) (reg the-cdrs) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
:::
`cons` is performed by allocating an unused index and storing the
arguments to `cons` in `the/cars` and `the/cdrs` at that indexed vector
position. We presume that there is a special register, `free`, that
always holds a pair pointer containing the next available index, and
that we can increment the index part of that pointer to find the next
free location.[^296] For example, the instruction
::: scheme
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(op cons) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 3}}\rangle$ ))
:::
is implemented as the following sequence of vector operations:[^297]
::: scheme
(perform (op vector-set!) (reg the-cars) (reg free) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ ))
(perform (op vector-set!) (reg the-cdrs) (reg free) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 3}}\rangle$ ))
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
(reg free)) (assign free (op +) (reg free) (const 1))
:::
The `eq?` operation
::: scheme
(op eq?) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$ )
(reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
:::
simply tests the equality of all fields in the registers, and predicates
such as `pair?`, `null?`, `symbol?`, and `number?` need only check the
type field.
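As an informal illustration of how such tests might behave, here is a sketch in which a typed pointer is modeled as a (type, index) record; this representation is purely hypothetical and is chosen only to make the type predicates and the `eq?` test concrete.
::: scheme
;; Hypothetical model: a typed pointer is a two-element list (type index).
(define (make-pointer type index) (list type index))
(define (pointer-type p) (car p))
(define (pointer-index p) (cadr p))
;; Type predicates need only examine the type field.
(define (pair-pointer? p)   (eq? (pointer-type p) 'pair))
(define (number-pointer? p) (eq? (pointer-type p) 'number))
(define (null-pointer? p)   (eq? (pointer-type p) 'empty-list))
;; eq? requires that all fields match.
(define (pointer-eq? p1 p2)
  (and (eq? (pointer-type p1) (pointer-type p2))
       (equal? (pointer-index p1) (pointer-index p2))))
:::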
#### Implementing stacks {#implementing-stacks .unnumbered}
Although our register machines use stacks, we need do nothing special
here, since stacks can be modeled in terms of lists. The stack can be a
list of the saved values, pointed to by a special register `the/stack`.
Thus, `(save `$\langle$*`reg`*$\rangle$`)` can be implemented as
::: scheme
(assign the-stack (op cons) (reg
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}\rangle$ ) (reg
the-stack))
:::
Similarly, `(restore `$\langle$*`reg`*$\rangle$`)` can be implemented as
::: scheme
(assign
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}\rangle$ (op
car) (reg the-stack)) (assign the-stack (op cdr) (reg the-stack))
:::
and `(perform (op initialize/stack))` can be implemented as
::: scheme
(assign the-stack (const ()))
:::
These operations can be further expanded in terms of the vector
operations given above. In conventional computer architectures, however,
it is usually advantageous to allocate the stack as a separate vector.
Then pushing and popping the stack can be accomplished by incrementing
or decrementing an index into that vector.
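A minimal sketch of that alternative, assuming the stack is a dedicated Scheme vector with an index serving as the stack pointer; the names below are illustrative only.
::: scheme
;; Stack allocated as its own vector; stack-pointer indexes the next free slot.
(define stack-size 256)
(define the-stack-vector (make-vector stack-size))
(define stack-pointer 0)
(define (push! value)
  (vector-set! the-stack-vector stack-pointer value)
  (set! stack-pointer (+ stack-pointer 1)))
(define (pop!)
  (set! stack-pointer (- stack-pointer 1))
  (vector-ref the-stack-vector stack-pointer))
(define (initialize-stack!)
  (set! stack-pointer 0))
:::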
> **[]{#Exercise 5.20 label="Exercise 5.20"}Exercise 5.20:** Draw the
> box-and-pointer representation and the memory-vector representation
> (as in [Figure 5.14](#Figure 5.14)) of the list structure produced by
>
> ::: scheme
> (define x (cons 1 2)) (define y (list x x))
> :::
>
> with the `free` pointer initially `p1`. What is the final value of
> `free`? What pointers represent the values of `x` and `y`?
> **[]{#Exercise 5.21 label="Exercise 5.21"}Exercise 5.21:** Implement
> register machines for the following procedures. Assume that the
> list-structure memory operations are available as machine primitives.
>
> a. Recursive `count/leaves`:
>
> ::: scheme
> (define (count-leaves tree)
>   (cond ((null? tree) 0)
>         ((not (pair? tree)) 1)
>         (else (+ (count-leaves (car tree))
>                  (count-leaves (cdr tree))))))
> :::
>
> b. Recursive `count/leaves` with explicit counter:
>
> ::: scheme
> (define (count-leaves tree)
>   (define (count-iter tree n)
>     (cond ((null? tree) n)
>           ((not (pair? tree)) (+ n 1))
>           (else (count-iter (cdr tree)
>                             (count-iter (car tree) n)))))
>   (count-iter tree 0))
> :::
> **[]{#Exercise 5.22 label="Exercise 5.22"}Exercise 5.22:** [Exercise
> 3.12](#Exercise 3.12) of [Section 3.3.1](#Section 3.3.1) presented an
> `append` procedure that appends two lists to form a new list and an
> `append!` procedure that splices two lists together. Design a register
> machine to implement each of these procedures. Assume that the
> list-structure memory operations are available as primitive
> operations.
### Maintaining the Illusion of Infinite Memory {#Section 5.3.2}
The representation method outlined in [Section 5.3.1](#Section 5.3.1)
solves the problem of implementing list structure, provided that we have
an infinite amount of memory. With a real computer we will eventually
run out of free space in which to construct new pairs.[^298] However,
most of the pairs generated in a typical computation are used only to
hold intermediate results. After these results are accessed, the pairs
are no longer needed---they are *garbage*. For instance, the computation
::: scheme
(accumulate + 0 (filter odd? (enumerate-interval 0 n)))
:::
constructs two lists: the enumeration and the result of filtering the
enumeration. When the accumulation is complete, these lists are no
longer needed, and the allocated memory can be reclaimed. If we can
arrange to collect all the garbage periodically, and if this turns out
to recycle memory at about the same rate at which we construct new
pairs, we will have preserved the illusion that there is an infinite
amount of memory.
In order to recycle pairs, we must have a way to determine which
allocated pairs are not needed (in the sense that their contents can no
longer influence the future of the computation). The method we shall
examine for accomplishing this is known as *garbage collection*. Garbage
collection is based on the observation that, at any moment in a Lisp
interpretation, the only objects that can affect the future of the
computation are those that can be reached by some succession of `car`
and `cdr` operations starting from the pointers that are currently in
the machine registers.[^299] Any memory cell that is not so accessible
may be recycled.
There are many ways to perform garbage collection. The method we shall
examine here is called *stop-and-copy*. The basic idea is to divide
memory into two halves: "working memory" and "free memory." When `cons`
constructs pairs, it allocates these in working memory. When working
memory is full, we perform garbage collection by locating all the useful
pairs in working memory and copying these into consecutive locations in
free memory. (The useful pairs are located by tracing all the `car` and
`cdr` pointers, starting with the machine registers.) Since we do not
copy the garbage, there will presumably be additional free memory that
we can use to allocate new pairs. In addition, nothing in the working
memory is needed, since all the useful pairs in it have been copied.
Thus, if we interchange the roles of working memory and free memory, we
can continue processing; new pairs will be allocated in the new working
memory (which was the old free memory). When this is full, we can copy
the useful pairs into the new free memory (which was the old working
memory).[^300]
#### Implementation of a stop-and-copy garbage collector {#implementation-of-a-stop-and-copy-garbage-collector .unnumbered}
We now use our register-machine language to describe the stop-and-copy
algorithm in more detail. We will assume that there is a register called
`root` that contains a pointer to a structure that eventually points at
all accessible data. This can be arranged by storing the contents of all
the machine registers in a pre-allocated list pointed at by `root` just
before starting garbage collection.[^301] We also assume that, in
addition to the current working memory, there is free memory available
into which we can copy the useful data. The current working memory
consists of vectors whose base addresses are in registers called
`the/cars` and `the/cdrs`, and the free memory is in registers called
`new/cars` and `new/cdrs`.
Garbage collection is triggered when we exhaust the free cells in the
current working memory, that is, when a `cons` operation attempts to
increment the `free` pointer beyond the end of the memory vector. When
the garbage-collection process is complete, the `root` pointer will
point into the new memory, all objects accessible from the `root` will
have been moved to the new memory, and the `free` pointer will indicate
the next place in the new memory where a new pair can be allocated. In
addition, the roles of working memory and new memory will have been
interchanged---new pairs will be constructed in the new memory,
beginning at the place indicated by `free`, and the (previous) working
memory will be available as the new memory for the next garbage
collection. [Figure 5.15](#Figure 5.15) shows the arrangement of memory
just before and just after garbage collection.
[]{#Figure 5.15 label="Figure 5.15"}
![image](fig/chap5/Fig5.15a.pdf){width="91mm"}
> **Figure 5.15:** Reconfiguration of memory by the garbage-collection
> process.
The state of the garbage-collection process is controlled by maintaining
two pointers: `free` and `scan`. These are initialized to point to the
beginning of the new memory. The algorithm begins by relocating the pair
pointed at by `root` to the beginning of the new memory. The pair is
copied, the `root` pointer is adjusted to point to the new location, and
the `free` pointer is incremented. In addition, the old location of the
pair is marked to show that its contents have been moved. This marking
is done as follows: In the `car` position, we place a special tag that
signals that this is an already-moved object. (Such an object is
traditionally called a *broken heart*.)[^302] In the `cdr` position we
place a *forwarding address* that points at the location to which the
object has been moved.
After relocating the root, the garbage collector enters its basic cycle.
At each step in the algorithm, the `scan` pointer (initially pointing at
the relocated root) points at a pair that has been moved to the new
memory but whose `car` and `cdr` pointers still refer to objects in the
old memory. These objects are each relocated, and the `scan` pointer is
incremented. To relocate an object (for example, the object indicated by
the `car` pointer of the pair we are scanning) we check to see if the
object has already been moved (as indicated by the presence of a
broken-heart tag in the `car` position of the object). If the object has
not already been moved, we copy it to the place indicated by `free`,
update `free`, set up a broken heart at the object's old location, and
update the pointer to the object (in this example, the `car` pointer of
the pair we are scanning) to point to the new location. If the object
has already been moved, its forwarding address (found in the `cdr`
position of the broken heart) is substituted for the pointer in the pair
being scanned. Eventually, all accessible objects will have been moved
and scanned, at which point the `scan` pointer will overtake the `free`
pointer and the process will terminate.
We can specify the stop-and-copy algorithm as a sequence of instructions
for a register machine. The basic step of relocating an object is
accomplished by a subroutine called `relocate/old/result/in/new`. This
subroutine gets its argument, a pointer to the object to be relocated,
from a register named `old`. It relocates the designated object
(incrementing `free` in the process), puts a pointer to the relocated
object into a register called `new`, and returns by branching to the
entry point stored in the register `relocate/continue`. To begin garbage
collection, we invoke this subroutine to relocate the `root` pointer,
after initializing `free` and `scan`. When the relocation of `root` has
been accomplished, we install the new pointer as the new `root` and
enter the main loop of the garbage collector.
::: scheme
begin-garbage-collection (assign free (const 0)) (assign scan (const 0))
(assign old (reg root)) (assign relocate-continue (label reassign-root))
(goto (label relocate-old-result-in-new)) reassign-root (assign root
(reg new)) (goto (label gc-loop))
:::
In the main loop of the garbage collector we must determine whether
there are any more objects to be scanned. We do this by testing whether
the `scan` pointer is coincident with the `free` pointer. If the
pointers are equal, then all accessible objects have been relocated, and
we branch to `gc/flip`, which cleans things up so that we can continue
the interrupted computation. If there are still pairs to be scanned, we
call the relocate subroutine to relocate the `car` of the next pair (by
placing the `car` pointer in `old`). The `relocate/continue` register is
set up so that the subroutine will return to update the `car` pointer.
::: scheme
gc-loop (test (op =) (reg scan) (reg free)) (branch (label gc-flip))
(assign old (op vector-ref) (reg new-cars) (reg scan)) (assign
relocate-continue (label update-car)) (goto (label
relocate-old-result-in-new))
:::
At `update/car`, we modify the `car` pointer of the pair being scanned,
then proceed to relocate the `cdr` of the pair. We return to
`update/cdr` when that relocation has been accomplished. After
relocating and updating the `cdr`, we are finished scanning that pair,
so we continue with the main loop.
::: scheme
update-car (perform (op vector-set!) (reg new-cars) (reg scan) (reg
new)) (assign old (op vector-ref) (reg new-cdrs) (reg scan)) (assign
relocate-continue (label update-cdr)) (goto (label
relocate-old-result-in-new)) update-cdr (perform (op vector-set!) (reg
new-cdrs) (reg scan) (reg new)) (assign scan (op +) (reg scan) (const
1)) (goto (label gc-loop))
:::
The subroutine `relocate/old/result/in/new` relocates objects as
follows: If the object to be relocated (pointed at by `old`) is not a
pair, then we return the same pointer to the object unchanged (in
`new`). (For example, we may be scanning a pair whose `car` is the
number 4. If we represent the `car` by `n4`, as described in [Section
5.3.1](#Section 5.3.1), then we want the "relocated" `car` pointer to
still be `n4`.) Otherwise, we must perform the relocation. If the `car`
position of the pair to be relocated contains a broken-heart tag, then
the pair has in fact already been moved, so we retrieve the forwarding
address (from the `cdr` position of the broken heart) and return this in
`new`. If the pointer in `old` points at a yet-unmoved pair, then we
move the pair to the first free cell in new memory (pointed at by
`free`) and set up the broken heart by storing a broken-heart tag and
forwarding address at the old location. `relocate/old/result/in/new`
uses a register `oldcr` to hold the `car` or the `cdr` of the object
pointed at by `old`.[^303]
::: scheme
relocate-old-result-in-new (test (op pointer-to-pair?) (reg old))
(branch (label pair)) (assign new (reg old)) (goto (reg
relocate-continue)) pair (assign oldcr (op vector-ref) (reg the-cars)
(reg old)) (test (op broken-heart?) (reg oldcr)) (branch (label
already-moved)) (assign new (reg free)) [; new location for
pair]{.roman} [;; Update `free` pointer.]{.roman} (assign free (op +)
(reg free) (const 1)) [;; Copy the `car` and `cdr` to new
memory.]{.roman} (perform (op vector-set!) (reg new-cars) (reg new)
(reg oldcr)) (assign oldcr (op vector-ref) (reg the-cdrs) (reg old))
(perform (op vector-set!) (reg new-cdrs) (reg new) (reg oldcr)) [;;
Construct the broken heart.]{.roman} (perform (op vector-set!) (reg
the-cars) (reg old) (const broken-heart)) (perform (op vector-set!) (reg
the-cdrs) (reg old) (reg new)) (goto (reg relocate-continue))
already-moved (assign new (op vector-ref) (reg the-cdrs) (reg old))
(goto (reg relocate-continue))
:::
At the very end of the garbage-collection process, we interchange the
role of old and new memories by interchanging pointers: interchanging
`the/cars` with `new/cars`, and `the/cdrs` with `new/cdrs`. We will then
be ready to perform another garbage collection the next time memory runs
out.
::: scheme
gc-flip (assign temp (reg the-cdrs)) (assign the-cdrs (reg new-cdrs))
(assign new-cdrs (reg temp)) (assign temp (reg the-cars)) (assign
the-cars (reg new-cars)) (assign new-cars (reg temp))
:::
## The Explicit-Control Evaluator {#Section 5.4}
In [Section 5.1](#Section 5.1) we saw how to transform simple Scheme
programs into descriptions of register machines. We will now perform
this transformation on a more complex program, the metacircular
evaluator of [Section 4.1.1](#Section 4.1.1)--[Section
4.1.4](#Section 4.1.4), which shows how the behavior of a Scheme
interpreter can be described in terms of the procedures `eval` and
`apply`. The *explicit-control evaluator* that we develop in this
section shows how the underlying procedure-calling and argument-passing
mechanisms used in the evaluation process can be described in terms of
operations on registers and stacks. In addition, the explicit-control
evaluator can serve as an implementation of a Scheme interpreter,
written in a language that is very similar to the native machine
language of conventional computers. The evaluator can be executed by the
register-machine simulator of [Section 5.2](#Section 5.2).
Alternatively, it can be used as a starting point for building a
machine-language implementation of a Scheme evaluator, or even a
special-purpose machine for evaluating Scheme expressions. [Figure
5.16](#Figure 5.16) shows such a hardware implementation: a silicon chip
that acts as an evaluator for Scheme. The chip designers started with
the data-path and controller specifications for a register machine
similar to the evaluator described in this section and used design
automation programs to construct the integrated-circuit layout.[^304]
[]{#Figure 5.16 label="Figure 5.16"}
![image](fig/chap5/chip.jpg){width="91mm"}
> **Figure 5.16:** A silicon-chip implementation of an evaluator for
> Scheme.
#### Registers and operations {#registers-and-operations .unnumbered}
In designing the explicit-control evaluator, we must specify the
operations to be used in our register machine. We described the
metacircular evaluator in terms of abstract syntax, using procedures
such as `quoted?` and `make/procedure`. In implementing the register
machine, we could expand these procedures into sequences of elementary
list-structure memory operations, and implement these operations on our
register machine. However, this would make our evaluator very long,
obscuring the basic structure with details. To clarify the presentation,
we will include as primitive operations of the register machine the
syntax procedures given in [Section 4.1.2](#Section 4.1.2) and the
procedures for representing environments and other run-time data given
in [Section 4.1.3](#Section 4.1.3) and [Section
4.1.4](#Section 4.1.4). In order to completely specify an evaluator that
could be programmed in a low-level machine language or implemented in
hardware, we would replace these operations by more elementary
operations, using the list-structure implementation we described in
[Section 5.3](#Section 5.3).
Our Scheme evaluator register machine includes a stack and seven
registers: `exp`, `env`, `val`, `continue`, `proc`, `argl`, and `unev`.
`exp` is used to hold the expression to be evaluated, and `env` contains
the environment in which the evaluation is to be performed. At the end
of an evaluation, `val` contains the value obtained by evaluating the
expression in the designated environment. The `continue` register is
used to implement recursion, as explained in [Section
5.1.4](#Section 5.1.4). (The evaluator needs to call itself recursively,
since evaluating an expression requires evaluating its subexpressions.)
The registers `proc`, `argl`, and `unev` are used in evaluating
combinations.
We will not provide a data-path diagram to show how the registers and
operations of the evaluator are connected, nor will we give the complete
list of machine operations. These are implicit in the evaluator's
controller, which will be presented in detail.
### The Core of the Explicit-Control Evaluator {#Section 5.4.1}
The central element in the evaluator is the sequence of instructions
beginning at `eval/dispatch`. This corresponds to the `eval` procedure
of the metacircular evaluator described in [Section
4.1.1](#Section 4.1.1). When the controller starts at `eval/dispatch`,
it evaluates the expression specified by `exp` in the environment
specified by `env`. When evaluation is complete, the controller will go
to the entry point stored in `continue`, and the `val` register will
hold the value of the expression. As with the metacircular `eval`, the
structure of `eval/dispatch` is a case analysis on the syntactic type of
the expression to be evaluated.[^305]
::: scheme
eval-dispatch (test (op self-evaluating?) (reg exp)) (branch (label
ev-self-eval)) (test (op variable?) (reg exp)) (branch (label
ev-variable)) (test (op quoted?) (reg exp)) (branch (label ev-quoted))
(test (op assignment?) (reg exp)) (branch (label ev-assignment)) (test
(op definition?) (reg exp)) (branch (label ev-definition)) (test (op
if?) (reg exp)) (branch (label ev-if)) (test (op lambda?) (reg exp))
(branch (label ev-lambda)) (test (op begin?) (reg exp)) (branch (label
ev-begin)) (test (op application?) (reg exp)) (branch (label
ev-application)) (goto (label unknown-expression-type))
:::
#### Evaluating simple expressions {#evaluating-simple-expressions .unnumbered}
Numbers and strings (which are self-evaluating), variables, quotations,
and `lambda` expressions have no subexpressions to be evaluated. For
these, the evaluator simply places the correct value in the `val`
register and continues execution at the entry point specified by
`continue`. Evaluation of simple expressions is performed by the
following controller code:
::: scheme
ev-self-eval (assign val (reg exp)) (goto (reg continue)) ev-variable
(assign val (op lookup-variable-value) (reg exp) (reg env)) (goto (reg
continue)) ev-quoted (assign val (op text-of-quotation) (reg exp)) (goto
(reg continue)) ev-lambda (assign unev (op lambda-parameters) (reg exp))
(assign exp (op lambda-body) (reg exp)) (assign val (op make-procedure)
(reg unev) (reg exp) (reg env)) (goto (reg continue))
:::
Observe how `ev/lambda` uses the `unev` and `exp` registers to hold the
parameters and body of the lambda expression so that they can be passed
to the `make/procedure` operation, along with the environment in `env`.
#### Evaluating procedure applications {#evaluating-procedure-applications .unnumbered}
A procedure application is specified by a combination containing an
operator and operands. The operator is a subexpression whose value is a
procedure, and the operands are subexpressions whose values are the
arguments to which the procedure should be applied. The metacircular
`eval` handles applications by calling itself recursively to evaluate
each element of the combination, and then passing the results to
`apply`, which performs the actual procedure application. The
explicit-control evaluator does the same thing; these recursive calls
are implemented by `goto` instructions, together with use of the stack
to save registers that will be restored after the recursive call
returns. Before each call we will be careful to identify which registers
must be saved (because their values will be needed later).[^306]
We begin the evaluation of an application by evaluating the operator to
produce a procedure, which will later be applied to the evaluated
operands. To evaluate the operator, we move it to the `exp` register and
go to `eval/dispatch`. The environment in the `env` register is already
the correct one in which to evaluate the operator. However, we save
`env` because we will need it later to evaluate the operands. We also
extract the operands into `unev` and save this on the stack. We set up
`continue` so that `eval/dispatch` will resume at `ev/appl/did/operator`
after the operator has been evaluated. First, however, we save the old
value of `continue`, which tells the controller where to continue after
the application.
::: scheme
ev-application (save continue) (save env) (assign unev (op operands)
(reg exp)) (save unev) (assign exp (op operator) (reg exp)) (assign
continue (label ev-appl-did-operator)) (goto (label eval-dispatch))
:::
Upon returning from evaluating the operator subexpression, we proceed to
evaluate the operands of the combination and to accumulate the resulting
arguments in a list, held in `argl`. First we restore the unevaluated
operands and the environment. We initialize `argl` to an empty list.
Then we assign to the `proc` register the procedure that was produced by
evaluating the operator. If there are no operands, we go directly to
`apply/dispatch`. Otherwise we save `proc` on the stack and start the
argument-evaluation loop:[^307]
::: scheme
ev-appl-did-operator (restore unev) [; the operands]{.roman} (restore
env) (assign argl (op empty-arglist)) (assign proc (reg val)) [; the
operator]{.roman} (test (op no-operands?) (reg unev)) (branch (label
apply-dispatch)) (save proc)
:::
Each cycle of the argument-evaluation loop evaluates an operand from the
list in `unev` and accumulates the result into `argl`. To evaluate an
operand, we place it in the `exp` register and go to `eval/dispatch`,
after setting `continue` so that execution will resume with the
argument-accumulation phase. But first we save the arguments accumulated
so far (held in `argl`), the environment (held in `env`), and the
remaining operands to be evaluated (held in `unev`). A special case is
made for the evaluation of the last operand, which is handled at
`ev/appl/last/arg`.
::: scheme
ev-appl-operand-loop (save argl) (assign exp (op first-operand) (reg
unev)) (test (op last-operand?) (reg unev)) (branch (label
ev-appl-last-arg)) (save env) (save unev) (assign continue (label
ev-appl-accumulate-arg)) (goto (label eval-dispatch))
:::
When an operand has been evaluated, the value is accumulated into the
list held in `argl`. The operand is then removed from the list of
unevaluated operands in `unev`, and the argument-evaluation continues.
::: scheme
ev-appl-accumulate-arg (restore unev) (restore env) (restore argl)
(assign argl (op adjoin-arg) (reg val) (reg argl)) (assign unev (op
rest-operands) (reg unev)) (goto (label ev-appl-operand-loop))
:::
Evaluation of the last argument is handled differently. There is no need
to save the environment or the list of unevaluated operands before going
to `eval/dispatch`, since they will not be required after the last
operand is evaluated. Thus, we return from the evaluation to a special
entry point `ev/appl/accum/last/arg`, which restores the argument list,
accumulates the new argument, restores the saved procedure, and goes off
to perform the application.[^308]
::: scheme
ev-appl-last-arg (assign continue (label ev-appl-accum-last-arg)) (goto
(label eval-dispatch)) ev-appl-accum-last-arg (restore argl) (assign
argl (op adjoin-arg) (reg val) (reg argl)) (restore proc) (goto (label
apply-dispatch))
:::
The details of the argument-evaluation loop determine the order in which
the interpreter evaluates the operands of a combination (e.g., left to
right or right to left---see [Exercise 3.8](#Exercise 3.8)). This order
is not determined by the metacircular evaluator, which inherits its
control structure from the underlying Scheme in which it is
implemented.[^309] Because the `first/operand` selector (used in
`ev/appl/operand/loop` to extract successive operands from `unev`) is
implemented as `car` and the `rest/operands` selector is implemented as
`cdr`, the explicit-control evaluator will evaluate the operands of a
combination in left-to-right order.
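The argument-list operations used above (`empty/arglist`, `adjoin/arg`, `first/operand`, `rest/operands`, `no/operands?`, and `last/operand?`) are assumed to be available as machine operations. Under the obvious representation of the operand list and the argument list as ordinary Scheme lists, they could be defined roughly as follows; this is a plausible reconstruction, not code quoted from the text.
::: scheme
(define (empty-arglist) '())
(define (adjoin-arg arg arglist)
  (append arglist (list arg)))   ; keep arguments in left-to-right order
(define (first-operand ops) (car ops))
(define (rest-operands ops) (cdr ops))
(define (no-operands? ops) (null? ops))
(define (last-operand? ops) (null? (cdr ops)))
:::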
#### Procedure application {#procedure-application .unnumbered}
The entry point `apply/dispatch` corresponds to the `apply` procedure of
the metacircular evaluator. By the time we get to `apply/dispatch`, the
`proc` register contains the procedure to apply and `argl` contains the
list of evaluated arguments to which it must be applied. The saved value
of `continue` (originally passed to `eval/dispatch` and saved at
`ev/application`), which tells where to return with the result of the
procedure application, is on the stack. When the application is
complete, the controller transfers to the entry point specified by the
saved `continue`, with the result of the application in `val`. As with
the metacircular `apply`, there are two cases to consider. Either the
procedure to be applied is a primitive or it is a compound procedure.
::: scheme
apply-dispatch (test (op primitive-procedure?) (reg proc)) (branch
(label primitive-apply)) (test (op compound-procedure?) (reg proc))
(branch (label compound-apply)) (goto (label unknown-procedure-type))
:::
We assume that each primitive is implemented so as to obtain its
arguments from `argl` and place its result in `val`. To specify how the
machine handles primitives, we would have to provide a sequence of
controller instructions to implement each primitive and arrange for
`primitive/apply` to dispatch to the instructions for the primitive
identified by the contents of `proc`. Since we are interested in the
structure of the evaluation process rather than the details of the
primitives, we will instead just use an `apply/primitive/procedure`
operation that applies the procedure in `proc` to the arguments in
`argl`. For the purpose of simulating the evaluator with the simulator
of [Section 5.2](#Section 5.2) we use the procedure
`apply/primitive/procedure`, which calls on the underlying Scheme system
to perform the application, just as we did for the metacircular
evaluator in [Section 4.1.4](#Section 4.1.4). After computing the value
of the primitive application, we restore `continue` and go to the
designated entry point.
::: scheme
primitive-apply (assign val (op apply-primitive-procedure) (reg proc)
(reg argl)) (restore continue) (goto (reg continue))
:::
To apply a compound procedure, we proceed just as with the metacircular
evaluator. We construct a frame that binds the procedure's parameters to
the arguments, use this frame to extend the environment carried by the
procedure, and evaluate in this extended environment the sequence of
expressions that forms the body of the procedure. `ev/sequence`,
described below in [Section 5.4.2](#Section 5.4.2), handles the
evaluation of the sequence.
::: scheme
compound-apply (assign unev (op procedure-parameters) (reg proc))
(assign env (op procedure-environment) (reg proc)) (assign env (op
extend-environment) (reg unev) (reg argl) (reg env)) (assign unev (op
procedure-body) (reg proc)) (goto (label ev-sequence))
:::
`compound/apply` is the only place in the interpreter where the `env`
register is ever assigned a new value. Just as in the metacircular
evaluator, the new environment is constructed from the environment
carried by the procedure, together with the argument list and the
corresponding list of variables to be bound.
### Sequence Evaluation and Tail Recursion {#Section 5.4.2}
The portion of the explicit-control evaluator at `ev/sequence` is
analogous to the metacircular evaluator's `eval/sequence` procedure. It
handles sequences of expressions in procedure bodies or in explicit
`begin` expressions.
Explicit `begin` expressions are evaluated by placing the sequence of
expressions to be evaluated in `unev`, saving `continue` on the stack,
and jumping to `ev/sequence`.
::: scheme
ev-begin (assign unev (op begin-actions) (reg exp)) (save continue)
(goto (label ev-sequence))
:::
The implicit sequences in procedure bodies are handled by jumping to
`ev/sequence` from `compound/apply`, at which point `continue` is
already on the stack, having been saved at `ev/application`.
The entries at `ev/sequence` and `ev/sequence/continue` form a loop that
successively evaluates each expression in a sequence. The list of
unevaluated expressions is kept in `unev`. Before evaluating each
expression, we check to see if there are additional expressions to be
evaluated in the sequence. If so, we save the rest of the unevaluated
expressions (held in `unev`) and the environment in which these must be
evaluated (held in `env`) and call `eval/dispatch` to evaluate the
expression. The two saved registers are restored upon the return from
this evaluation, at `ev/sequence/continue`.
The final expression in the sequence is handled differently, at the
entry point `ev/sequence/last/exp`. Since there are no more expressions
to be evaluated after this one, we need not save `unev` or `env` before
going to `eval/dispatch`. The value of the whole sequence is the value
of the last expression, so after the evaluation of the last expression
there is nothing left to do except continue at the entry point currently
held on the stack (which was saved by `ev/application` or `ev/begin`.)
Rather than setting up `continue` to arrange for `eval/dispatch` to
return here and then restoring `continue` from the stack and continuing
at that entry point, we restore `continue` from the stack before going
to `eval/dispatch`, so that `eval/dispatch` will continue at that entry
point after evaluating the expression.
::: scheme
ev-sequence (assign exp (op first-exp) (reg unev)) (test (op last-exp?)
(reg unev)) (branch (label ev-sequence-last-exp)) (save unev) (save env)
(assign continue (label ev-sequence-continue)) (goto (label
eval-dispatch)) ev-sequence-continue (restore env) (restore unev)
(assign unev (op rest-exps) (reg unev)) (goto (label ev-sequence))
ev-sequence-last-exp (restore continue) (goto (label eval-dispatch))
:::
#### Tail recursion {#tail-recursion .unnumbered}
In [Chapter 1](#Chapter 1) we said that the process described by a
procedure such as
::: scheme
(define (sqrt-iter guess x) (if (good-enough? guess x) guess (sqrt-iter
(improve guess x) x)))
:::
is an iterative process. Even though the procedure is syntactically
recursive (defined in terms of itself), it is not logically necessary
for an evaluator to save information in passing from one call to
`sqrt/iter` to the next.[^310] An evaluator that can execute a procedure
such as `sqrt/iter` without requiring increasing storage as the
procedure continues to call itself is called a *tail-recursive*
evaluator. The metacircular implementation of the evaluator in [Chapter
4](#Chapter 4) does not specify whether the evaluator is tail-recursive,
because that evaluator inherits its mechanism for saving state from the
underlying Scheme. With the explicit-control evaluator, however, we can
trace through the evaluation process to see when procedure calls cause a
net accumulation of information on the stack.
Our evaluator is tail-recursive, because in order to evaluate the final
expression of a sequence we transfer directly to `eval/dispatch` without
saving any information on the stack. Hence, evaluating the final
expression in a sequence---even if it is a procedure call (as in
`sqrt/iter`, where the `if` expression, which is the last expression in
the procedure body, reduces to a call to `sqrt/iter`)---will not cause
any information to be accumulated on the stack.[^311]
If we did not think to take advantage of the fact that it was
unnecessary to save information in this case, we might have implemented
`eval/sequence` by treating all the expressions in a sequence in the
same way---saving the registers, evaluating the expression, returning to
restore the registers, and repeating this until all the expressions have
been evaluated:[^312]
::: scheme
ev-sequence (test (op no-more-exps?) (reg unev)) (branch (label
ev-sequence-end)) (assign exp (op first-exp) (reg unev)) (save unev)
(save env) (assign continue (label ev-sequence-continue)) (goto (label
eval-dispatch)) ev-sequence-continue (restore env) (restore unev)
(assign unev (op rest-exps) (reg unev)) (goto (label ev-sequence))
ev-sequence-end (restore continue) (goto (reg continue))
:::
This may seem like a minor change to our previous code for evaluation of
a sequence: The only difference is that we go through the save-restore
cycle for the last expression in a sequence as well as for the others.
The interpreter will still give the same value for any expression. But
this change is fatal to the tail-recursive implementation, because we
must now return after evaluating the final expression in a sequence in
order to undo the (useless) register saves. These extra saves will
accumulate during a nest of procedure calls. Consequently, processes
such as `sqrt/iter` will require space proportional to the number of
iterations rather than requiring constant space. This difference can be
significant. For example, with tail recursion, an infinite loop can be
expressed using only the procedure-call mechanism:
::: scheme
(define (count n) (newline) (display n) (count (+ n 1)))
:::
Without tail recursion, such a procedure would eventually run out of
stack space, and expressing a true iteration would require some control
mechanism other than procedure call.
### Conditionals, Assignments, and Definitions {#Section 5.4.3}
As with the metacircular evaluator, special forms are handled by
selectively evaluating fragments of the expression. For an `if`
expression, we must evaluate the predicate and decide, based on the
value of the predicate, whether to evaluate the consequent or the
alternative.
Before evaluating the predicate, we save the `if` expression itself so
that we can later extract the consequent or alternative. We also save
the environment, which we will need later in order to evaluate the
consequent or the alternative, and we save `continue`, which we will
need later in order to return to the evaluation of the expression that
is waiting for the value of the `if`.
::: scheme
ev-if (save exp) [; save expression for later]{.roman} (save env)
(save continue) (assign continue (label ev-if-decide)) (assign exp (op
if-predicate) (reg exp)) (goto (label eval-dispatch)) [; evaluate the
predicate]{.roman}
:::
When we return from evaluating the predicate, we test whether it was
true or false and, depending on the result, place either the consequent
or the alternative in `exp` before going to `eval/dispatch`. Notice that
restoring `env` and `continue` here sets up `eval/dispatch` to have the
correct environment and to continue at the right place to receive the
value of the `if` expression.
::: scheme
ev-if-decide (restore continue) (restore env) (restore exp) (test (op
true?) (reg val)) (branch (label ev-if-consequent)) ev-if-alternative
(assign exp (op if-alternative) (reg exp)) (goto (label eval-dispatch))
ev-if-consequent (assign exp (op if-consequent) (reg exp)) (goto (label
eval-dispatch))
:::
#### Assignments and definitions {#assignments-and-definitions-1 .unnumbered}
Assignments are handled by `ev/assignment`, which is reached from
`eval/dispatch` with the assignment expression in `exp`. The code at
`ev/assignment` first evaluates the value part of the expression and
then installs the new value in the environment. `set/variable/value!` is
assumed to be available as a machine operation.
::: scheme
ev-assignment (assign unev (op assignment-variable) (reg exp)) (save
unev) [; save variable for later]{.roman} (assign exp (op
assignment-value) (reg exp)) (save env) (save continue) (assign continue
(label ev-assignment-1)) (goto (label eval-dispatch)) [; evaluate the
assignment value]{.roman} ev-assignment-1 (restore continue) (restore
env) (restore unev) (perform (op set-variable-value!) (reg unev) (reg
val) (reg env)) (assign val (const ok)) (goto (reg continue))
:::
Definitions are handled in a similar way:
::: scheme
ev-definition (assign unev (op definition-variable) (reg exp)) (save
unev) [; save variable for later]{.roman} (assign exp (op
definition-value) (reg exp)) (save env) (save continue) (assign continue
(label ev-definition-1)) (goto (label eval-dispatch)) [; evaluate the
definition value]{.roman} ev-definition-1 (restore continue) (restore
env) (restore unev) (perform (op define-variable!) (reg unev) (reg val)
(reg env)) (assign val (const ok)) (goto (reg continue))
:::
> **[]{#Exercise 5.23 label="Exercise 5.23"}Exercise 5.23:** Extend the
> evaluator to handle derived expressions such as `cond`, `let`, and so
> on ([Section 4.1.2](#Section 4.1.2)). You may "cheat" and assume that
> the syntax transformers such as `cond/>if` are available as machine
> operations.[^313]
> **[]{#Exercise 5.24 label="Exercise 5.24"}Exercise 5.24:** Implement
> `cond` as a new basic special form without reducing it to `if`. You
> will have to construct a loop that tests the predicates of successive
> `cond` clauses until you find one that is true, and then use
> `ev/sequence` to evaluate the actions of the clause.
> **[]{#Exercise 5.25 label="Exercise 5.25"}Exercise 5.25:** Modify the
> evaluator so that it uses normal-order evaluation, based on the lazy
> evaluator of [Section 4.2](#Section 4.2).
### Running the Evaluator {#Section 5.4.4}
With the implementation of the explicit-control evaluator we come to the
end of a development, begun in [Chapter 1](#Chapter 1), in which we have
explored successively more precise models of the evaluation process. We
started with the relatively informal substitution model, then extended
this in [Chapter 3](#Chapter 3) to the environment model, which enabled
us to deal with state and change. In the metacircular evaluator of
[Chapter 4](#Chapter 4), we used Scheme itself as a language for making
more explicit the environment structure constructed during evaluation of
an expression. Now, with register machines, we have taken a close look
at the evaluator's mechanisms for storage management, argument passing,
and control. At each new level of description, we have had to raise
issues and resolve ambiguities that were not apparent at the previous,
less precise treatment of evaluation. To understand the behavior of the
explicit-control evaluator, we can simulate it and monitor its
performance.
We will install a driver loop in our evaluator machine. This plays the
role of the `driver/loop` procedure of [Section 4.1.4](#Section 4.1.4).
The evaluator will repeatedly print a prompt, read an expression,
evaluate the expression by going to `eval/dispatch`, and print the
result. The following instructions form the beginning of the
explicit-control evaluator's controller sequence:[^314]
::: scheme
read-eval-print-loop (perform (op initialize-stack)) (perform (op
prompt-for-input) (const \";;EC-Eval input:\")) (assign exp (op read))
(assign env (op get-global-environment)) (assign continue (label
print-result)) (goto (label eval-dispatch)) print-result (perform (op
announce-output) (const \";;EC-Eval value:\")) (perform (op user-print)
(reg val)) (goto (label read-eval-print-loop))
:::
When we encounter an error in the evaluator (such as the "unknown
procedure type error" indicated at `apply/dispatch`), we print an error
message and return to the driver loop.[^315]
::: scheme
unknown-expression-type (assign val (const
unknown-expression-type-error)) (goto (label signal-error))
unknown-procedure-type (restore continue) [; clean up stack (from
`apply/dispatch`)]{.roman} (assign val (const
unknown-procedure-type-error)) (goto (label signal-error)) signal-error
(perform (op user-print) (reg val)) (goto (label read-eval-print-loop))
:::
For the purposes of the simulation, we initialize the stack each time
through the driver loop, since it might not be empty after an error
(such as an undefined variable) interrupts an evaluation.[^316]
If we combine all the code fragments presented in [Section
5.4.1](#Section 5.4.1)--[Section 5.4.4](#Section 5.4.4), we can create
an evaluator machine model that we can run using the register-machine
simulator of [Section 5.2](#Section 5.2).
::: scheme
(define eceval (make-machine '(exp env val proc argl continue unev)
eceval-operations '(read-eval-print-loop
$\color{SchemeDark}\langle$ *entire machine controller as given
above* $\color{SchemeDark}\rangle$ )))
:::
We must define Scheme procedures to simulate the operations used as
primitives by the evaluator. These are the same procedures we used for
the metacircular evaluator in [Section 4.1](#Section 4.1), together with
the few additional ones defined in footnotes throughout [Section
5.4](#Section 5.4).
::: scheme
(define eceval-operations (list (list 'self-evaluating? self-evaluating?)
$\color{SchemeDark}\langle$ *complete list of operations for eceval
machine* $\color{SchemeDark}\rangle$ ))
:::
Finally, we can initialize the global environment and run the evaluator:
::: scheme
(define the-global-environment (setup-environment)) (start eceval) *;;;
EC-Eval input:* (define (append x y) (if (null? x) y (cons (car x)
(append (cdr x) y)))) *;;; EC-Eval value:* *ok* *;;; EC-Eval
input:* (append '(a b c) '(d e f)) *;;; EC-Eval value:* *(a b c d e
f)*
:::
Of course, evaluating expressions in this way will take much longer than
if we had directly typed them into Scheme, because of the multiple
levels of simulation involved. Our expressions are evaluated by the
explicit-control-evaluator machine, which is being simulated by a Scheme
program, which is itself being evaluated by the Scheme interpreter.
#### Monitoring the performance of the evaluator {#monitoring-the-performance-of-the-evaluator .unnumbered}
Simulation can be a powerful tool to guide the implementation of
evaluators. Simulations make it easy not only to explore variations of
the register-machine design but also to monitor the performance of the
simulated evaluator. For example, one important factor in performance is
how efficiently the evaluator uses the stack. We can observe the number
of stack operations required to evaluate various expressions by defining
the evaluator register machine with the version of the simulator that
collects statistics on stack use ([Section 5.2.4](#Section 5.2.4)), and
adding an instruction at the evaluator's `print/result` entry point to
print the statistics:
::: scheme
print-result (perform (op print-stack-statistics)) [; added
instruction]{.roman} (perform (op announce-output) (const \";;; EC-Eval
value:\")) $\dots$ [; same as before]{.roman}
:::
Interactions with the evaluator now look like this:
::: scheme
*;;; EC-Eval input:* (define (factorial n) (if (= n 1) 1 (\*
(factorial (- n 1)) n))) *(total-pushes = 3 maximum-depth = 3)* *;;;
EC-Eval value:* *ok* *;;; EC-Eval input:* (factorial 5)
*(total-pushes = 144 maximum-depth = 28)* *;;; EC-Eval value:*
*120*
:::
Note that the driver loop of the evaluator reinitializes the stack at
the start of each interaction, so that the statistics printed will refer
only to stack operations used to evaluate the previous expression.
> **[]{#Exercise 5.26 label="Exercise 5.26"}Exercise 5.26:** Use the
> monitored stack to explore the tail-recursive property of the
> evaluator ([Section 5.4.2](#Section 5.4.2)). Start the evaluator and
> define the iterative `factorial` procedure from [Section
> 1.2.1](#Section 1.2.1):
>
> ::: scheme
> (define (factorial n)
>   (define (iter product counter)
>     (if (\> counter n)
>         product
>         (iter (\* counter product) (+ counter 1))))
>   (iter 1 1))
> :::
>
> Run the procedure with some small values of $n$. Record the maximum
> stack depth and the number of pushes required to compute $n!$ for each
> of these values.
>
> a. You will find that the maximum depth required to evaluate $n!$ is
> independent of $n$. What is that depth?
>
> b. Determine from your data a formula in terms of $n$ for the total
> number of push operations used in evaluating $n!$ for any
> $n \ge 1$. Note that the number of operations used is a linear
> function of $n$ and is thus determined by two constants.
> **[]{#Exercise 5.27 label="Exercise 5.27"}Exercise 5.27:** For
> comparison with [Exercise 5.26](#Exercise 5.26), explore the behavior
> of the following procedure for computing factorials recursively:
>
> ::: scheme
> (define (factorial n) (if (= n 1) 1 (\* (factorial (- n 1)) n)))
> :::
>
> By running this procedure with the monitored stack, determine, as a
> function of $n$, the maximum depth of the stack and the total number
> of pushes used in evaluating $n!$ for $n \ge 1$. (Again, these
> functions will be linear.) Summarize your experiments by filling in
> the following table with the appropriate expressions in terms of $n$:
>
> |                     | Maximum depth | Number of pushes |
> |---------------------|---------------|------------------|
> | Recursive factorial |               |                  |
> | Iterative factorial |               |                  |
>
> The maximum depth is a measure of the amount of space used by the
> evaluator in carrying out the computation, and the number of pushes
> correlates well with the time required.
> **[]{#Exercise 5.28 label="Exercise 5.28"}Exercise 5.28:** Modify the
> definition of the evaluator by changing `eval/sequence` as described
> in [Section 5.4.2](#Section 5.4.2) so that the evaluator is no longer
> tail-recursive. Rerun your experiments from [Exercise
> 5.26](#Exercise 5.26) and [Exercise 5.27](#Exercise 5.27) to
> demonstrate that both versions of the `factorial` procedure now
> require space that grows linearly with their input.
> **[]{#Exercise 5.29 label="Exercise 5.29"}Exercise 5.29:** Monitor the
> stack operations in the tree-recursive Fibonacci computation:
>
> ::: scheme
> (define (fib n) (if (\< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
> :::
>
> a. Give a formula in terms of $n$ for the maximum depth of the stack
> required to compute ${\rm Fib}(n)$ for $n \ge 2$. Hint: In
> [Section 1.2.2](#Section 1.2.2) we argued that the space used by
> this process grows linearly with $n$.
>
> b. Give a formula for the total number of pushes used to compute
> ${\rm Fib}(n)$ for $n \ge 2$. You should find that the number of
> pushes (which correlates well with the time used) grows
> exponentially with $n$. Hint: Let $S(n)$ be the number of pushes
> used in computing ${\rm Fib}(n)$. You should be able to argue that
> there is a formula that expresses $S(n)$ in terms of $S(n - 1)$,
> $S(n - 2)$, and some fixed "overhead" constant $k$ that is
> independent of $n$. Give the formula, and say what $k$ is. Then
> show that $S(n)$ can be expressed as $a\cdot{\rm Fib}(n + 1) + b$
> and give the values of $a$ and $b$.
> **[]{#Exercise 5.30 label="Exercise 5.30"}Exercise 5.30:** Our
> evaluator currently catches and signals only two kinds of
> errors---unknown expression types and unknown procedure types. Other
> errors will take us out of the evaluator read-eval-print loop. When we
> run the evaluator using the register-machine simulator, these errors
> are caught by the underlying Scheme system. This is analogous to the
> computer crashing when a user program makes an error.[^317] It is a
> large project to make a real error system work, but it is well worth
> the effort to understand what is involved here.
>
> a. Errors that occur in the evaluation process, such as an attempt to
> access an unbound variable, could be caught by changing the lookup
> operation to make it return a distinguished condition code, which
> cannot be a possible value of any user variable. The evaluator can
> test for this condition code and then do what is necessary to go
> to `signal-error`. Find all of the places in the evaluator where
> such a change is necessary and fix them. This is lots of work.
>
> b. Much worse is the problem of handling errors that are signaled by
> applying primitive procedures, such as an attempt to divide by
> zero or an attempt to extract the `car` of a symbol. In a
> professionally written high-quality system, each primitive
> application is checked for safety as part of the primitive. For
> example, every call to `car` could first check that the argument
> is a pair. If the argument is not a pair, the application would
> return a distinguished condition code to the evaluator, which
> would then report the failure. We could arrange for this in our
> register-machine simulator by making each primitive procedure
> check for applicability and return an appropriate distinguished
> condition code on failure. Then the `primitive-apply` code in the
> evaluator can check for the condition code and go to
> `signal-error` if necessary. Build this structure and make it
> work. This is a major project.
## Compilation {#Section 5.5}
The explicit-control evaluator of [Section 5.4](#Section 5.4) is a
register machine whose controller interprets Scheme programs. In this
section we will see how to run Scheme programs on a register machine
whose controller is not a Scheme interpreter.
The explicit-control evaluator machine is universal---it can carry out
any computational process that can be described in Scheme. The
evaluator's controller orchestrates the use of its data paths to perform
the desired computation. Thus, the evaluator's data paths are universal:
They are sufficient to perform any computation we desire, given an
appropriate controller.[^318]
Commercial general-purpose computers are register machines organized
around a collection of registers and operations that constitute an
efficient and convenient universal set of data paths. The controller for
a general-purpose machine is an interpreter for a register-machine
language like the one we have been using. This language is called the
*native language* of the machine, or simply *machine language*. Programs
written in machine language are sequences of instructions that use the
machine's data paths. For example, the explicit-control evaluator's
instruction sequence can be thought of as a machine-language program for
a general-purpose computer rather than as the controller for a
specialized interpreter machine.
There are two common strategies for bridging the gap between
higher-level languages and register-machine languages. The
explicit-control evaluator illustrates the strategy of interpretation.
An interpreter written in the native language of a machine configures
the machine to execute programs written in a language (called the
*source language*) that may differ from the native language of the
machine performing the evaluation. The primitive procedures of the
source language are implemented as a library of subroutines written in
the native language of the given machine. A program to be interpreted
(called the *source program*) is represented as a data structure. The
interpreter traverses this data structure, analyzing the source program.
As it does so, it simulates the intended behavior of the source program
by calling appropriate primitive subroutines from the library.
In this section, we explore the alternative strategy of *compilation*. A
compiler for a given source language and machine translates a source
program into an equivalent program (called the *object program*) written
in the machine's native language. The compiler that we implement in this
section translates programs written in Scheme into sequences of
instructions to be executed using the explicit-control evaluator
machine's data paths.[^319]
Compared with interpretation, compilation can provide a great increase
in the efficiency of program execution, as we will explain below in the
overview of the compiler. On the other hand, an interpreter provides a
more powerful environment for interactive program development and
debugging, because the source program being executed is available at run
time to be examined and modified. In addition, because the entire
library of primitives is present, new programs can be constructed and
added to the system during debugging.
In view of the complementary advantages of compilation and
interpretation, modern program-development environments pursue a mixed
strategy. Lisp interpreters are generally organized so that interpreted
procedures and compiled procedures can call each other. This enables a
programmer to compile those parts of a program that are assumed to be
debugged, thus gaining the efficiency advantage of compilation, while
retaining the interpretive mode of execution for those parts of the
program that are in the flux of interactive development and debugging.
In [Section 5.5.7](#Section 5.5.7), after we have implemented the
compiler, we will show how to interface it with our interpreter to
produce an integrated interpreter-compiler development system.
#### An overview of the compiler {#an-overview-of-the-compiler .unnumbered}
Our compiler is much like our interpreter, both in its structure and in
the function it performs. Accordingly, the mechanisms used by the
compiler for analyzing expressions will be similar to those used by the
interpreter. Moreover, to make it easy to interface compiled and
interpreted code, we will design the compiler to generate code that
obeys the same conventions of register usage as the interpreter: The
environment will be kept in the `env` register, argument lists will be
accumulated in `argl`, a procedure to be applied will be in `proc`,
procedures will return their answers in `val`, and the location to which
a procedure should return will be kept in `continue`. In general, the
compiler translates a source program into an object program that
performs essentially the same register operations as would the
interpreter in evaluating the same source program.
This description suggests a strategy for implementing a rudimentary
compiler: We traverse the expression in the same way the interpreter
does. When we encounter a register instruction that the interpreter
would perform in evaluating the expression, we do not execute the
instruction but instead accumulate it into a sequence. The resulting
sequence of instructions will be the object code. Observe the efficiency
advantage of compilation over interpretation. Each time the interpreter
evaluates an expression---for example, `(f 84 96)`---it performs the
work of classifying the expression (discovering that this is a procedure
application) and testing for the end of the operand list (discovering
that there are two operands). With a compiler, the expression is
analyzed only once, when the instruction sequence is generated at
compile time. The object code produced by the compiler contains only the
instructions that evaluate the operator and the two operands, assemble
the argument list, and apply the procedure (in `proc`) to the arguments
(in `argl`).
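As a rough sketch (not yet using the compiler developed below, but
following the register conventions it obeys and the last-operand-first
argument order of [Section 5.5.3](#Section 5.5.3)), the object code for
`(f 84 96)` amounts to little more than
::: scheme
(assign proc (op lookup-variable-value) (const f) (reg env))
(assign val (const 96))    ; last operand first
(assign argl (op list) (reg val))
(assign val (const 84))
(assign argl (op cons) (reg val) (reg argl))
;; followed by the primitive/compiled dispatch produced by
;; compile-procedure-call in Section 5.5.3
:::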
This is the same kind of optimization we implemented in the analyzing
evaluator of [Section 4.1.7](#Section 4.1.7). But there are further
opportunities to gain efficiency in compiled code. As the interpreter
runs, it follows a process that must be applicable to any expression in
the language. In contrast, a given segment of compiled code is meant to
execute some particular expression. This can make a big difference, for
example in the use of the stack to save registers. When the interpreter
evaluates an expression, it must be prepared for any contingency. Before
evaluating a subexpression, the interpreter saves all registers that
will be needed later, because the subexpression might require an
arbitrary evaluation. A compiler, on the other hand, can exploit the
structure of the particular expression it is processing to generate code
that avoids unnecessary stack operations.
As a case in point, consider the combination `(f 84 96)`. Before the
interpreter evaluates the operator of the combination, it prepares for
this evaluation by saving the registers containing the operands and the
environment, whose values will be needed later. The interpreter then
evaluates the operator to obtain the result in `val`, restores the saved
registers, and finally moves the result from `val` to `proc`. However,
in the particular expression we are dealing with, the operator is the
symbol `f`, whose evaluation is accomplished by the machine operation
`lookup-variable-value`, which does not alter any registers. The
compiler that we implement in this section will take advantage of this
fact and generate code that evaluates the operator using the instruction
::: scheme
(assign proc (op lookup-variable-value) (const f) (reg env))
:::
This code not only avoids the unnecessary saves and restores but also
assigns the value of the lookup directly to `proc`, whereas the
interpreter would obtain the result in `val` and then move this to
`proc`.
A compiler can also optimize access to the environment. Having analyzed
the code, the compiler can in many cases know in which frame a
particular variable will be located and access that frame directly,
rather than performing the `lookup-variable-value` search. We will
discuss how to implement such variable access in [Section
5.5.6](#Section 5.5.6). Until then, however, we will focus on the kind
of register and stack optimizations described above. There are many
other optimizations that can be performed by a compiler, such as coding
primitive operations "in line" instead of using a general `apply`
mechanism (see [Exercise 5.38](#Exercise 5.38)); but we will not
emphasize these here. Our main goal in this section is to illustrate the
compilation process in a simplified (but still interesting) context.
### Structure of the Compiler {#Section 5.5.1}
In [Section 4.1.7](#Section 4.1.7) we modified our original metacircular
interpreter to separate analysis from execution. We analyzed each
expression to produce an execution procedure that took an environment as
argument and performed the required operations. In our compiler, we will
do essentially the same analysis. Instead of producing execution
procedures, however, we will generate sequences of instructions to be
run by our register machine.
The procedure `compile` is the top-level dispatch in the compiler. It
corresponds to the `eval` procedure of [Section 4.1.1](#Section 4.1.1),
the `analyze` procedure of [Section 4.1.7](#Section 4.1.7), and the
`eval-dispatch` entry point of the explicit-control evaluator in
[Section 5.4.1](#Section 5.4.1). The compiler, like the interpreters,
uses the expression-syntax procedures defined in [Section
4.1.2](#Section 4.1.2).[^320] `compile` performs a case analysis on the
syntactic type of the expression to be compiled. For each type of
expression, it dispatches to a specialized *code generator*:
::: scheme
(define (compile exp target linkage)
  (cond ((self-evaluating? exp)
         (compile-self-evaluating exp target linkage))
        ((quoted? exp)
         (compile-quoted exp target linkage))
        ((variable? exp)
         (compile-variable exp target linkage))
        ((assignment? exp)
         (compile-assignment exp target linkage))
        ((definition? exp)
         (compile-definition exp target linkage))
        ((if? exp)
         (compile-if exp target linkage))
        ((lambda? exp)
         (compile-lambda exp target linkage))
        ((begin? exp)
         (compile-sequence (begin-actions exp) target linkage))
        ((cond? exp)
         (compile (cond->if exp) target linkage))
        ((application? exp)
         (compile-application exp target linkage))
        (else
         (error "Unknown expression type: COMPILE" exp))))
:::
#### Targets and linkages {#targets-and-linkages .unnumbered}
`compile` and the code generators that it calls take two arguments in
addition to the expression to compile. There is a *target*, which
specifies the register in which the compiled code is to return the value
of the expression. There is also a *linkage descriptor*, which describes
how the code resulting from the compilation of the expression should
proceed when it has finished its execution. The linkage descriptor can
require that the code do one of the following three things:
- continue at the next instruction in sequence (this is specified by
the linkage descriptor `next`),
- return from the procedure being compiled (this is specified by the
linkage descriptor `return`), or
- jump to a named entry point (this is specified by using the
designated label as the linkage descriptor).
For example, compiling the expression `5` (which is self-evaluating)
with a target of the `val` register and a linkage of `next` should
produce the instruction
::: scheme
(assign val (const 5))
:::
Compiling the same expression with a linkage of `return` should produce
the instructions
::: scheme
(assign val (const 5))
(goto (reg continue))
:::
In the first case, execution will continue with the next instruction in
the sequence. In the second case, we will return from a procedure call.
In both cases, the value of the expression will be placed into the
target `val` register.
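Compiling the same expression with the third kind of linkage, a label
(here a hypothetical label `after-call`), should produce
::: scheme
(assign val (const 5))
(goto (label after-call))
:::
so that execution jumps to the named entry point once the value is in
`val`.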
#### Instruction sequences and stack usage {#instruction-sequences-and-stack-usage .unnumbered}
Each code generator returns an *instruction sequence* containing the
object code it has generated for the expression. Code generation for a
compound expression is accomplished by combining the output from simpler
code generators for component expressions, just as evaluation of a
compound expression is accomplished by evaluating the component
expressions.
The simplest method for combining instruction sequences is a procedure
called `append-instruction-sequences`. It takes as arguments any number
of instruction sequences that are to be executed sequentially; it
appends them and returns the combined sequence. That is, if
$\langle$*seq*$_1\rangle$ and $\langle$*seq*$_2\rangle$ are sequences of
instructions, then evaluating
::: scheme
(append-instruction-sequences
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
:::
produces the sequence
::: scheme
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$
:::
Whenever registers might need to be saved, the compiler's code
generators use `preserving`, which is a more subtle method for combining
instruction sequences. `preserving` takes three arguments: a set of
registers and two instruction sequences that are to be executed
sequentially. It appends the sequences in such a way that the contents
of each register in the set is preserved over the execution of the first
sequence, if this is needed for the execution of the second sequence.
That is, if the first sequence modifies the register and the second
sequence actually needs the register's original contents, then
`preserving` wraps a `save` and a `restore` of the register around the
first sequence before appending the sequences. Otherwise, `preserving`
simply returns the appended instruction sequences. Thus, for example,
::: scheme
(preserving (list
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *reg* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 1}}\rangle$
$\color{SchemeDark}\langle$ *seq* $\color{SchemeDark}_{\hbox{\ttfamily\scriptsize 2}}\rangle$ )
:::
produces one of the following four sequences of instructions, depending
on how $\langle$*seq*$_1\rangle$ and $\langle$*seq*$_2\rangle$ use
$\langle$*reg*$_1\rangle$ and $\langle$*reg*$_2\rangle$:
$$\vbox{
\offinterlineskip
\halign{
\strut \kern0.8em # \kern0.4em \hfil & \vrule
\kern0.8em # \kern0.4em \hfil & \vrule
\kern0.8em # \kern0.4em \hfil & \vrule
\kern0.8em # \kern0.4em \hfil \cr
$\langle{\mathit{seq}_1}\rangle$
& \hbox{\tt (save} $\langle{\mathit{reg}_1}\rangle${\tt)}
& \hbox{\tt (save} $\langle{\mathit{reg}_2}\rangle${\tt)}
& \hbox{\tt (save} $\langle{\mathit{reg}_2}\rangle${\tt)} \cr
$\langle{\mathit{seq}_2}\rangle$
& $\langle{\mathit{seq}_1}\rangle$
& $\langle{\mathit{seq}_1}\rangle$
& \hbox{\tt (save} $\langle{\mathit{reg}_1}\rangle${\tt)} \cr
& \hbox{\tt (restore} $\langle{\mathit{reg}_1}\rangle${\tt)}
& \hbox{\tt (restore} $\langle{\mathit{reg}_2}\rangle${\tt)}
& $\langle{\mathit{seq}_1}\rangle$ \cr
& $\langle{\mathit{seq}_2}\rangle$
& $\langle{\mathit{seq}_2}\rangle$
& \hbox{\tt (restore} $\langle{\mathit{reg}_1}\rangle${\tt)} \cr
& &
& \hbox{\tt (restore} $\langle{\mathit{reg}_2}\rangle${\tt)} \cr
& &
& $\langle{\mathit{seq}_2}\rangle$ \cr
}
}$$
By using `preserving` to combine instruction sequences the compiler
avoids unnecessary stack operations. This also isolates the details of
whether or not to generate `save` and `restore` instructions within the
`preserving` procedure, separating them from the concerns that arise in
writing each of the individual code generators. In fact no `save` or
`restore` instructions are explicitly produced by the code generators.
In principle, we could represent an instruction sequence simply as a
list of instructions. `append-instruction-sequences` could then combine
instruction sequences by performing an ordinary list `append`. However,
`preserving` would then be a complex operation, because it would have to
analyze each instruction sequence to determine how the sequence uses its
registers. `preserving` would be inefficient as well as complex, because
it would have to analyze each of its instruction sequence arguments,
even though these sequences might themselves have been constructed by
calls to `preserving`, in which case their parts would have already been
analyzed. To avoid such repetitious analysis we will associate with each
instruction sequence some information about its register use. When we
construct a basic instruction sequence we will provide this information
explicitly, and the procedures that combine instruction sequences will
derive register-use information for the combined sequence from the
information associated with the component sequences.
An instruction sequence will contain three pieces of information:
- the set of registers that must be initialized before the
instructions in the sequence are executed (these registers are said
to be *needed* by the sequence),
- the set of registers whose values are modified by the instructions
in the sequence, and
- the actual instructions (also called *statements*) in the sequence.
We will represent an instruction sequence as a list of its three parts.
The constructor for instruction sequences is thus
::: scheme
(define (make-instruction-sequence needs modifies statements)
  (list needs modifies statements))
:::
For example, the two-instruction sequence that looks up the value of the
variable `x` in the current environment, assigns the result to `val`,
and then returns, requires registers `env` and `continue` to have been
initialized, and modifies register `val`. This sequence would therefore
be constructed as
::: scheme
(make-instruction-sequence
 '(env continue)
 '(val)
 '((assign val (op lookup-variable-value) (const x) (reg env))
   (goto (reg continue))))
:::
We sometimes need to construct an instruction sequence with no
statements:
::: scheme
(define (empty-instruction-sequence)
  (make-instruction-sequence '() '() '()))
:::
The procedures for combining instruction sequences are shown in [Section
5.5.4](#Section 5.5.4).
> **[]{#Exercise 5.31 label="Exercise 5.31"}Exercise 5.31:** In
> evaluating a procedure application, the explicit-control evaluator
> always saves and restores the `env` register around the evaluation of
> the operator, saves and restores `env` around the evaluation of each
> operand (except the final one), saves and restores `argl` around the
> evaluation of each operand, and saves and restores `proc` around the
> evaluation of the operand sequence. For each of the following
> combinations, say which of these `save` and `restore` operations are
> superfluous and thus could be eliminated by the compiler's
> `preserving` mechanism:
>
> ::: scheme
> (f 'x 'y)
> ((f) 'x 'y)
> (f (g 'x) y)
> (f (g 'x) 'y)
> :::
> **[]{#Exercise 5.32 label="Exercise 5.32"}Exercise 5.32:** Using the
> `preserving` mechanism, the compiler will avoid saving and restoring
> `env` around the evaluation of the operator of a combination in the
> case where the operator is a symbol. We could also build such
> optimizations into the evaluator. Indeed, the explicit-control
> evaluator of [Section 5.4](#Section 5.4) already performs a similar
> optimization, by treating combinations with no operands as a special
> case.
>
> a. Extend the explicit-control evaluator to recognize as a separate
> class of expressions combinations whose operator is a symbol, and
> to take advantage of this fact in evaluating such expressions.
>
> b. Alyssa P. Hacker suggests that by extending the evaluator to
> recognize more and more special cases we could incorporate all the
> compiler's optimizations, and that this would eliminate the
> advantage of compilation altogether. What do you think of this
> idea?
### Compiling Expressions {#Section 5.5.2}
In this section and the next we implement the code generators to which
the `compile` procedure dispatches.
#### Compiling linkage code {#compiling-linkage-code .unnumbered}
In general, the output of each code generator will end with
instructions---generated by the procedure `compile-linkage`---that
implement the required linkage. If the linkage is `return` then we must
generate the instruction `(goto (reg continue))`. This needs the
`continue` register and does not modify any registers. If the linkage is
`next`, then we needn't include any additional instructions. Otherwise,
the linkage is a label, and we generate a `goto` to that label, an
instruction that does not need or modify any registers.[^321]
::: scheme
(define (compile-linkage linkage)
  (cond ((eq? linkage 'return)
         (make-instruction-sequence '(continue) '()
          '((goto (reg continue)))))
        ((eq? linkage 'next)
         (empty-instruction-sequence))
        (else
         (make-instruction-sequence '() '()
          `((goto (label ,linkage)))))))
:::
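To make the three-part instruction-sequence representation concrete,
here is roughly what `compile-linkage` returns for a `return` linkage
and for a hypothetical label linkage `branch25`:
::: scheme
(compile-linkage 'return)
; ((continue) () ((goto (reg continue))))

(compile-linkage 'branch25)
; (() () ((goto (label branch25))))
:::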
The linkage code is appended to an instruction sequence by `preserving`
the `continue` register, since a `return` linkage will require the
`continue` register: If the given instruction sequence modifies
`continue` and the linkage code needs it, `continue` will be saved and
restored.
::: scheme
(define (end-with-linkage linkage instruction-sequence)
  (preserving '(continue)
   instruction-sequence
   (compile-linkage linkage)))
:::
#### Compiling simple expressions {#compiling-simple-expressions .unnumbered}
The code generators for self-evaluating expressions, quotations, and
variables construct instruction sequences that assign the required value
to the target register and then proceed as specified by the linkage
descriptor.
::: scheme
(define (compile-self-evaluating exp target linkage)
  (end-with-linkage linkage
   (make-instruction-sequence '() (list target)
    `((assign ,target (const ,exp))))))

(define (compile-quoted exp target linkage)
  (end-with-linkage linkage
   (make-instruction-sequence '() (list target)
    `((assign ,target (const ,(text-of-quotation exp)))))))

(define (compile-variable exp target linkage)
  (end-with-linkage linkage
   (make-instruction-sequence '(env) (list target)
    `((assign ,target
              (op lookup-variable-value)
              (const ,exp)
              (reg env))))))
:::
All these assignment instructions modify the target register, and the
one that looks up a variable needs the `env` register.
Assignments and definitions are handled much as they are in the
interpreter. We recursively generate code that computes the value to be
assigned to the variable, and append to it a two-instruction sequence
that actually sets or defines the variable and assigns the value of the
whole expression (the symbol `ok`) to the target register. The recursive
compilation has target `val` and linkage `next` so that the code will
put its result into `val` and continue with the code that is appended
after it. The appending is done preserving `env`, since the environment
is needed for setting or defining the variable and the code for the
variable value could be the compilation of a complex expression that
might modify the registers in arbitrary ways.
::: scheme
(define (compile-assignment exp target linkage)
  (let ((var (assignment-variable exp))
        (get-value-code
         (compile (assignment-value exp) 'val 'next)))
    (end-with-linkage linkage
     (preserving '(env)
      get-value-code
      (make-instruction-sequence '(env val) (list target)
       `((perform (op set-variable-value!)
                  (const ,var)
                  (reg val)
                  (reg env))
         (assign ,target (const ok))))))))

(define (compile-definition exp target linkage)
  (let ((var (definition-variable exp))
        (get-value-code
         (compile (definition-value exp) 'val 'next)))
    (end-with-linkage linkage
     (preserving '(env)
      get-value-code
      (make-instruction-sequence '(env val) (list target)
       `((perform (op define-variable!)
                  (const ,var)
                  (reg val)
                  (reg env))
         (assign ,target (const ok))))))))
:::
The appended two-instruction sequence requires `env` and `val` and
modifies the target. Note that although we preserve `env` for this
sequence, we do not preserve `val`, because the `get-value-code` is
designed to explicitly place its result in `val` for use by this
sequence. (In fact, if we did preserve `val`, we would have a bug,
because this would cause the previous contents of `val` to be restored
right after the `get-value-code` is run.)
#### Compiling conditional expressions {#compiling-conditional-expressions .unnumbered}
The code for an `if` expression compiled with a given target and linkage
has the form
::: scheme
$\color{SchemeDark}\langle$ *compilation of predicate, target *val*,
linkage *next** $\color{SchemeDark}\rangle$ (test (op false?) (reg
val)) (branch (label false-branch)) true-branch
$\color{SchemeDark}\langle$ *compilation of consequent with given
target* *and given linkage or
*after-if** $\color{SchemeDark}\rangle$ false-branch
$\color{SchemeDark}\langle$ *compilation of alternative with given
target and linkage* $\color{SchemeDark}\rangle$ after-if
:::
To generate this code, we compile the predicate, consequent, and
alternative, and combine the resulting code with instructions to test
the predicate result and with newly generated labels to mark the true
and false branches and the end of the conditional.[^322] In this
arrangement of code, we must branch around the true branch if the test
is false. The only slight complication is in how the linkage for the
true branch should be handled. If the linkage for the conditional is
`return` or a label, then the true and false branches will both use this
same linkage. If the linkage is `next`, the true branch ends with a jump
around the code for the false branch to the label at the end of the
conditional.
::: scheme
(define (compile-if exp target linkage)
  (let ((t-branch (make-label 'true-branch))
        (f-branch (make-label 'false-branch))
        (after-if (make-label 'after-if)))
    (let ((consequent-linkage
           (if (eq? linkage 'next) after-if linkage)))
      (let ((p-code (compile (if-predicate exp) 'val 'next))
            (c-code (compile (if-consequent exp)
                             target
                             consequent-linkage))
            (a-code (compile (if-alternative exp) target linkage)))
        (preserving '(env continue)
         p-code
         (append-instruction-sequences
          (make-instruction-sequence '(val) '()
           `((test (op false?) (reg val))
             (branch (label ,f-branch))))
          (parallel-instruction-sequences
           (append-instruction-sequences t-branch c-code)
           (append-instruction-sequences f-branch a-code))
          after-if))))))
:::
`env` is preserved around the predicate code because it could be needed
by the true and false branches, and `continue` is preserved because it
could be needed by the linkage code in those branches. The code for the
true and false branches (which are not executed sequentially) is
appended using a special combiner `parallel-instruction-sequences`
described in [Section 5.5.4](#Section 5.5.4).
Note that `cond` is a derived expression, so all that the compiler needs
to do to handle it is apply the `cond->if` transformer (from [Section
4.1.2](#Section 4.1.2)) and compile the resulting `if` expression.
#### Compiling sequences {#compiling-sequences .unnumbered}
The compilation of sequences (from procedure bodies or explicit `begin`
expressions) parallels their evaluation. Each expression of the sequence
is compiled---the last expression with the linkage specified for the
sequence, and the other expressions with linkage `next` (to execute the
rest of the sequence). The instruction sequences for the individual
expressions are appended to form a single instruction sequence, such
that `env` (needed for the rest of the sequence) and `continue`
(possibly needed for the linkage at the end of the sequence) are
preserved.
::: scheme
(define (compile-sequence seq target linkage)
  (if (last-exp? seq)
      (compile (first-exp seq) target linkage)
      (preserving '(env continue)
       (compile (first-exp seq) target 'next)
       (compile-sequence (rest-exps seq) target linkage))))
:::
#### Compiling `lambda` expressions {#compiling-lambda-expressions .unnumbered}
`lambda` expressions construct procedures. The object code for a
`lambda` expression must have the form
::: scheme
$\color{SchemeDark}\langle$ *construct procedure object and assign it
to target register* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *linkage* $\color{SchemeDark}\rangle$
:::
When we compile the `lambda` expression, we also generate the code for
the procedure body. Although the body won't be executed at the time of
procedure construction, it is convenient to insert it into the object
code right after the code for the `lambda`. If the linkage for the
`lambda` expression is a label or `return`, this is fine. But if the
linkage is `next`, we will need to skip around the code for the
procedure body by using a linkage that jumps to a label that is inserted
after the body. The object code thus has the form
::: scheme
$\color{SchemeDark}\langle$ *construct procedure object and assign it
to target register* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *code for given
linkage* $\color{SchemeDark}\rangle$ *or*
`(goto (label after/lambda))`
$\color{SchemeDark}\langle$ *compilation of procedure
body* $\color{SchemeDark}\rangle$ after-lambda
:::
`compile-lambda` generates the code for constructing the procedure
object followed by the code for the procedure body. The procedure object
will be constructed at run time by combining the current environment
(the environment at the point of definition) with the entry point to the
compiled procedure body (a newly generated label).[^323]
::: scheme
(define (compile-lambda exp target linkage)
  (let ((proc-entry (make-label 'entry))
        (after-lambda (make-label 'after-lambda)))
    (let ((lambda-linkage
           (if (eq? linkage 'next) after-lambda linkage)))
      (append-instruction-sequences
       (tack-on-instruction-sequence
        (end-with-linkage lambda-linkage
         (make-instruction-sequence '(env) (list target)
          `((assign ,target
                    (op make-compiled-procedure)
                    (label ,proc-entry)
                    (reg env)))))
        (compile-lambda-body exp proc-entry))
       after-lambda))))
:::
`compile-lambda` uses the special combiner
`tack-on-instruction-sequence` rather than
`append-instruction-sequences` ([Section 5.5.4](#Section 5.5.4)) to
append the procedure body to the `lambda` expression code, because the
body is not part of the sequence of instructions that will be executed
when the combined sequence is entered; rather, it is in the sequence
only because that was a convenient place to put it.
`compile-lambda-body` constructs the code for the body of the procedure.
This code begins with a label for the entry point. Next come
instructions that will cause the run-time evaluation environment to
switch to the correct environment for evaluating the procedure
body---namely, the definition environment of the procedure, extended to
include the bindings of the formal parameters to the arguments with
which the procedure is called. After this comes the code for the
sequence of expressions that makes up the procedure body. The sequence
is compiled with linkage `return` and target `val` so that it will end
by returning from the procedure with the procedure result in `val`.
::: scheme
(define (compile-lambda-body exp proc-entry)
  (let ((formals (lambda-parameters exp)))
    (append-instruction-sequences
     (make-instruction-sequence '(env proc argl) '(env)
      `(,proc-entry
        (assign env
                (op compiled-procedure-env)
                (reg proc))
        (assign env
                (op extend-environment)
                (const ,formals)
                (reg argl)
                (reg env))))
     (compile-sequence (lambda-body exp) 'val 'return))))
:::
### Compiling Combinations {#Section 5.5.3}
The essence of the compilation process is the compilation of procedure
applications. The code for a combination compiled with a given target
and linkage has the form
::: scheme
$\color{SchemeDark}\langle$ *compilation of operator, target *proc*,
linkage *next** $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *evaluate operands and construct argument
list in *argl** $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *compilation of procedure call with given
target and linkage* $\color{SchemeDark}\rangle$
:::
The registers `env`, `proc`, and `argl` may have to be saved and
restored during evaluation of the operator and operands. Note that this
is the only place in the compiler where a target other than `val` is
specified.
The required code is generated by `compile-application`. This
recursively compiles the operator, to produce code that puts the
procedure to be applied into `proc`, and compiles the operands, to
produce code that evaluates the individual operands of the application.
The instruction sequences for the operands are combined (by
`construct-arglist`) with code that constructs the list of arguments in
`argl`, and the resulting argument-list code is combined with the
procedure code and the code that performs the procedure call (produced
by `compile-procedure-call`). In appending the code sequences, the `env`
register must be preserved around the evaluation of the operator (since
evaluating the operator might modify `env`, which will be needed to
evaluate the operands), and the `proc` register must be preserved around
the construction of the argument list (since evaluating the operands
might modify `proc`, which will be needed for the actual procedure
application). `continue` must also be preserved throughout, since it is
needed for the linkage in the procedure call.
::: scheme
(define (compile-application exp target linkage)
  (let ((proc-code (compile (operator exp) 'proc 'next))
        (operand-codes
         (map (lambda (operand) (compile operand 'val 'next))
              (operands exp))))
    (preserving '(env continue)
     proc-code
     (preserving '(proc continue)
      (construct-arglist operand-codes)
      (compile-procedure-call target linkage)))))
:::
The code to construct the argument list will evaluate each operand into
`val` and then `cons` that value onto the argument list being
accumulated in `argl`. Since we `cons` the arguments onto `argl` in
sequence, we must start with the last argument and end with the first,
so that the arguments will appear in order from first to last in the
resulting list. Rather than waste an instruction by initializing `argl`
to the empty list to set up for this sequence of evaluations, we make
the first code sequence construct the initial `argl`. The general form
of the argument-list construction is thus as follows:
::: scheme
$\color{SchemeDark}\langle$ *compilation of last operand, targeted to
*val** $\color{SchemeDark}\rangle$ (assign argl (op list) (reg val))
$\color{SchemeDark}\langle$ *compilation of next operand, targeted to
*val** $\color{SchemeDark}\rangle$ (assign argl (op cons) (reg val)
(reg argl)) $\dots$ $\color{SchemeDark}\langle$ *compilation of
first operand, targeted to *val** $\color{SchemeDark}\rangle$ (assign
argl (op cons) (reg val) (reg argl))
:::
`argl` must be preserved around each operand evaluation except the first
(so that arguments accumulated so far won't be lost), and `env` must be
preserved around each operand evaluation except the last (for use by
subsequent operand evaluations).
Compiling this argument code is a bit tricky, because of the special
treatment of the first operand to be evaluated and the need to preserve
`argl` and `env` in different places. The `construct-arglist` procedure
takes as arguments the code that evaluates the individual operands. If
there are no operands at all, it simply emits the instruction
::: scheme
(assign argl (const ()))
:::
Otherwise, `construct-arglist` creates code that initializes `argl` with
the last argument, and appends code that evaluates the rest of the
arguments and adjoins them to `argl` in succession. In order to process
the arguments from last to first, we must reverse the list of operand
code sequences from the order supplied by `compile-application`.
::: scheme
(define (construct-arglist operand-codes)
  (let ((operand-codes (reverse operand-codes)))
    (if (null? operand-codes)
        (make-instruction-sequence '() '(argl)
         '((assign argl (const ()))))
        (let ((code-to-get-last-arg
               (append-instruction-sequences
                (car operand-codes)
                (make-instruction-sequence '(val) '(argl)
                 '((assign argl (op list) (reg val)))))))
          (if (null? (cdr operand-codes))
              code-to-get-last-arg
              (preserving '(env)
               code-to-get-last-arg
               (code-to-get-rest-args
                (cdr operand-codes))))))))

(define (code-to-get-rest-args operand-codes)
  (let ((code-for-next-arg
         (preserving '(argl)
          (car operand-codes)
          (make-instruction-sequence '(val argl) '(argl)
           '((assign argl (op cons) (reg val) (reg argl)))))))
    (if (null? (cdr operand-codes))
        code-for-next-arg
        (preserving '(env)
         code-for-next-arg
         (code-to-get-rest-args (cdr operand-codes))))))
:::
#### Applying procedures {#applying-procedures .unnumbered}
After evaluating the elements of a combination, the compiled code must
apply the procedure in `proc` to the arguments in `argl`. The code
performs essentially the same dispatch as the `apply` procedure in the
metacircular evaluator of [Section 4.1.1](#Section 4.1.1) or the
`apply-dispatch` entry point in the explicit-control evaluator of
[Section 5.4.1](#Section 5.4.1). It checks whether the procedure to be
applied is a primitive procedure or a compiled procedure. For a
primitive procedure, it uses `apply-primitive-procedure`; we will see
shortly how it handles compiled procedures. The procedure-application
code has the following form:
::: scheme
(test (op primitive-procedure?) (reg proc)) (branch (label
primitive-branch)) compiled-branch $\color{SchemeDark}\langle$ *code
to apply compiled procedure with given target* *and appropriate
linkage* $\color{SchemeDark}\rangle$ primitive-branch (assign
$\color{SchemeDark}\langle$ *target* $\color{SchemeDark}\rangle$ (op
apply-primitive-procedure) (reg proc) (reg argl))
$\color{SchemeDark}\langle$ *linkage* $\color{SchemeDark}\rangle$
after-call
:::
Observe that the compiled branch must skip around the primitive branch.
Therefore, if the linkage for the original procedure call was `next`,
the compound branch must use a linkage that jumps to a label that is
inserted after the primitive branch. (This is similar to the linkage
used for the true branch in `compile-if`.)
::: scheme
(define (compile-procedure-call target linkage)
  (let ((primitive-branch (make-label 'primitive-branch))
        (compiled-branch (make-label 'compiled-branch))
        (after-call (make-label 'after-call)))
    (let ((compiled-linkage
           (if (eq? linkage 'next) after-call linkage)))
      (append-instruction-sequences
       (make-instruction-sequence '(proc) '()
        `((test (op primitive-procedure?) (reg proc))
          (branch (label ,primitive-branch))))
       (parallel-instruction-sequences
        (append-instruction-sequences
         compiled-branch
         (compile-proc-appl target compiled-linkage))
        (append-instruction-sequences
         primitive-branch
         (end-with-linkage linkage
          (make-instruction-sequence '(proc argl) (list target)
           `((assign ,target
                     (op apply-primitive-procedure)
                     (reg proc)
                     (reg argl)))))))
       after-call))))
:::
The primitive and compound branches, like the true and false branches in
`compile-if`, are appended using `parallel-instruction-sequences` rather
than the ordinary `append-instruction-sequences`, because they will not
be executed sequentially.
#### Applying compiled procedures {#applying-compiled-procedures .unnumbered}
The code that handles procedure application is the most subtle part of
the compiler, even though the instruction sequences it generates are
very short. A compiled procedure (as constructed by `compile-lambda`)
has an entry point, which is a label that designates where the code for
the procedure starts. The code at this entry point computes a result in
`val` and returns by executing the instruction `(goto (reg continue))`.
Thus, we might expect the code for a compiled-procedure application (to
be generated by `compile-proc-appl`) with a given target and linkage to
look like this if the linkage is a label
::: scheme
(assign continue (label proc-return)) (assign val (op
compiled-procedure-entry) (reg proc)) (goto (reg val)) proc-return
(assign
$\color{SchemeDark}\langle$ *target* $\color{SchemeDark}\rangle$
(reg val)) [; included if target is not `val`]{.roman} (goto (label
$\color{SchemeDark}\langle$ *linkage* $\color{SchemeDark}\rangle$ ))
[; linkage code]{.roman}
:::
or like this if the linkage is `return`.
::: scheme
(save continue) (assign continue (label proc-return)) (assign val (op
compiled-procedure-entry) (reg proc)) (goto (reg val)) proc-return
(assign
$\color{SchemeDark}\langle$ *target* $\color{SchemeDark}\rangle$
(reg val)) [; included if target is not `val`]{.roman} (restore
continue) (goto (reg continue)) [; linkage code]{.roman}
:::
This code sets up `continue` so that the procedure will return to a
label `proc-return` and jumps to the procedure's entry point. The code
at `proc-return` transfers the procedure's result from `val` to the
target register (if necessary) and then jumps to the location specified
by the linkage. (The linkage is always `return` or a label, because
`compile-procedure-call` replaces a `next` linkage for the
compound-procedure branch by an `after-call` label.)
In fact, if the target is not `val`, that is exactly the code our
compiler will generate.[^324] Usually, however, the target is `val` (the
only time the compiler specifies a different register is when targeting
the evaluation of an operator to `proc`), so the procedure result is put
directly into the target register and there is no need to return to a
special location that copies it. Instead, we simplify the code by
setting up `continue` so that the procedure will "return" directly to
the place specified by the caller's linkage:
::: scheme
$\color{SchemeDark}\langle$ *set up *continue* for
linkage* $\color{SchemeDark}\rangle$ (assign val (op
compiled-procedure-entry) (reg proc)) (goto (reg val))
:::
If the linkage is a label, we set up `continue` so that the procedure
will return to that label. (That is, the `(goto (reg continue))` the
procedure ends with becomes equivalent to the
`(goto (label `$\langle$*`linkage`*$\rangle$`))` at `proc-return`
above.)
::: scheme
(assign continue (label
$\color{SchemeDark}\langle$ *linkage* $\color{SchemeDark}\rangle$ ))
(assign val (op compiled-procedure-entry) (reg proc)) (goto (reg val))
:::
If the linkage is `return`, we don't need to set up `continue` at all:
It already holds the desired location. (That is, the
`(goto (reg continue))` the procedure ends with goes directly to the
place where the `(goto (reg continue))` at `proc-return` would have
gone.)
::: scheme
(assign val (op compiled-procedure-entry) (reg proc)) (goto (reg val))
:::
With this implementation of the `return` linkage, the compiler generates
tail-recursive code. Calling a procedure as the final step in a
procedure body does a direct transfer, without saving any information on
the stack.
Suppose instead that we had handled the case of a procedure call with a
linkage of `return` and a target of `val` as shown above for a non-`val`
target. This would destroy tail recursion. Our system would still give
the same value for any expression. But each time we called a procedure,
we would save `continue` and return after the call to undo the (useless)
save. These extra saves would accumulate during a nest of procedure
calls.[^325]
`compile-proc-appl` generates the above procedure-application code by
considering four cases, depending on whether the target for the call is
`val` and whether the linkage is `return`. Observe that the instruction
sequences are declared to modify all the registers, since executing the
procedure body can change the registers in arbitrary ways.[^326] Also
note that the code sequence for the case with target `val` and linkage
`return` is declared to need `continue`: Even though `continue` is not
explicitly used in the two-instruction sequence, we must be sure that
`continue` will have the correct value when we enter the compiled
procedure.
::: scheme
(define (compile-proc-appl target linkage)
  (cond ((and (eq? target 'val) (not (eq? linkage 'return)))
         (make-instruction-sequence '(proc) all-regs
          `((assign continue (label ,linkage))
            (assign val (op compiled-procedure-entry) (reg proc))
            (goto (reg val)))))
        ((and (not (eq? target 'val))
              (not (eq? linkage 'return)))
         (let ((proc-return (make-label 'proc-return)))
           (make-instruction-sequence '(proc) all-regs
            `((assign continue (label ,proc-return))
              (assign val (op compiled-procedure-entry) (reg proc))
              (goto (reg val))
              ,proc-return
              (assign ,target (reg val))
              (goto (label ,linkage))))))
        ((and (eq? target 'val) (eq? linkage 'return))
         (make-instruction-sequence '(proc continue) all-regs
          '((assign val (op compiled-procedure-entry) (reg proc))
            (goto (reg val)))))
        ((and (not (eq? target 'val)) (eq? linkage 'return))
         (error "return linkage, target not val: COMPILE" target))))
:::
### Combining Instruction Sequences {#Section 5.5.4}
This section describes the details of how instruction sequences are
represented and combined. Recall from [Section 5.5.1](#Section 5.5.1)
that an instruction sequence is represented as a list of the registers
needed, the registers modified, and the actual instructions. We will
also consider a label (symbol) to be a degenerate case of an instruction
sequence, which doesn't need or modify any registers. So to determine
the registers needed and modified by instruction sequences we use the
selectors
::: scheme
(define (registers-needed s)
  (if (symbol? s) '() (car s)))

(define (registers-modified s)
  (if (symbol? s) '() (cadr s)))

(define (statements s)
  (if (symbol? s) (list s) (caddr s)))
:::
and to determine whether a given sequence needs or modifies a given
register we use the predicates
::: scheme
(define (needs-register? seq reg)
  (memq reg (registers-needed seq)))

(define (modifies-register? seq reg)
  (memq reg (registers-modified seq)))
:::
In terms of these predicates and selectors, we can implement the various
instruction sequence combiners used throughout the compiler.
The basic combiner is `append-instruction-sequences`. This takes as
arguments an arbitrary number of instruction sequences that are to be
executed sequentially and returns an instruction sequence whose
statements are the statements of all the sequences appended together.
The subtle point is to determine the registers that are needed and
modified by the resulting sequence. It modifies those registers that are
modified by any of the sequences; it needs those registers that must be
initialized before the first sequence can be run (the registers needed
by the first sequence), together with those registers needed by any of
the other sequences that are not initialized (modified) by sequences
preceding it.
The sequences are appended two at a time by `append-2-sequences`. This
takes two instruction sequences `seq1` and `seq2` and returns the
instruction sequence whose statements are the statements of `seq1`
followed by the statements of `seq2`, whose modified registers are those
registers that are modified by either `seq1` or `seq2`, and whose needed
registers are the registers needed by `seq1` together with those
registers needed by `seq2` that are not modified by `seq1`. (In terms of
set operations, the new set of needed registers is the union of the set
of registers needed by `seq1` with the set difference of the registers
needed by `seq2` and the registers modified by `seq1`.) Thus,
`append-instruction-sequences` is implemented as follows:
::: scheme
(define (append-instruction-sequences . seqs)
  (define (append-2-sequences seq1 seq2)
    (make-instruction-sequence
     (list-union (registers-needed seq1)
                 (list-difference (registers-needed seq2)
                                  (registers-modified seq1)))
     (list-union (registers-modified seq1)
                 (registers-modified seq2))
     (append (statements seq1) (statements seq2))))
  (define (append-seq-list seqs)
    (if (null? seqs)
        (empty-instruction-sequence)
        (append-2-sequences (car seqs)
                            (append-seq-list (cdr seqs)))))
  (append-seq-list seqs))
:::
This procedure uses some simple operations for manipulating sets
represented as lists, similar to the (unordered) set representation
described in [Section 2.3.3](#Section 2.3.3):
::: scheme
(define (list-union s1 s2)
  (cond ((null? s1) s2)
        ((memq (car s1) s2) (list-union (cdr s1) s2))
        (else (cons (car s1) (list-union (cdr s1) s2)))))

(define (list-difference s1 s2)
  (cond ((null? s1) '())
        ((memq (car s1) s2) (list-difference (cdr s1) s2))
        (else (cons (car s1)
                    (list-difference (cdr s1) s2)))))
:::
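As a small illustration of how the needed and modified sets combine (a
sketch using two hand-built sequences), appending a sequence that needs
`env` and modifies `val` to one that needs `val` and modifies `argl`
yields a sequence that needs only `env`, because `val` is supplied by
the first sequence:
::: scheme
(append-instruction-sequences
 (make-instruction-sequence '(env) '(val)
  '((assign val (op lookup-variable-value) (const x) (reg env))))
 (make-instruction-sequence '(val) '(argl)
  '((assign argl (op list) (reg val)))))
; => ((env)          ; needed: env, but not val
;     (val argl)     ; modified
;     ((assign val (op lookup-variable-value) (const x) (reg env))
;      (assign argl (op list) (reg val))))
:::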
`preserving`, the second major instruction sequence combiner, takes a
list of registers `regs` and two instruction sequences `seq1` and `seq2`
that are to be executed sequentially. It returns an instruction sequence
whose statements are the statements of `seq1` followed by the statements
of `seq2`, with appropriate `save` and `restore` instructions around
`seq1` to protect the registers in `regs` that are modified by `seq1`
but needed by `seq2`. To accomplish this, `preserving` first creates a
sequence that has the required `save`s followed by the statements of
`seq1` followed by the required `restore`s. This sequence needs the
registers being saved and restored in addition to the registers needed
by `seq1`, and modifies the registers modified by `seq1` except for the
ones being saved and restored. This augmented sequence and `seq2` are
then appended in the usual way. The following procedure implements this
strategy recursively, walking down the list of registers to be
preserved:[^327]
::: scheme
(define (preserving regs seq1 seq2)
  (if (null? regs)
      (append-instruction-sequences seq1 seq2)
      (let ((first-reg (car regs)))
        (if (and (needs-register? seq2 first-reg)
                 (modifies-register? seq1 first-reg))
            (preserving (cdr regs)
             (make-instruction-sequence
              (list-union (list first-reg)
                          (registers-needed seq1))
              (list-difference (registers-modified seq1)
                               (list first-reg))
              (append `((save ,first-reg))
                      (statements seq1)
                      `((restore ,first-reg))))
             seq2)
            (preserving (cdr regs) seq1 seq2)))))
:::
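To see the wrapping at work, here is a contrived sketch with hand-built
sequences (the operation `get-global-environment` is borrowed from the
evaluator machine of [Section 5.4](#Section 5.4)): the first sequence
clobbers `env` and the second needs it, so `preserving` saves and
restores `env` around the first sequence.
::: scheme
(preserving '(env)
 (make-instruction-sequence '() '(env)
  '((assign env (op get-global-environment))))
 (make-instruction-sequence '(env) '(val)
  '((assign val (op lookup-variable-value) (const x) (reg env)))))
; => ((env)          ; env must hold a value before the save
;     (val)          ; env no longer counts as modified
;     ((save env)
;      (assign env (op get-global-environment))
;      (restore env)
;      (assign val (op lookup-variable-value) (const x) (reg env))))
:::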
Another sequence combiner, `tack-on-instruction-sequence`, is used by
`compile-lambda` to append a procedure body to another sequence. Because
the procedure body is not "in line" to be executed as part of the
combined sequence, its register use has no impact on the register use of
the sequence in which it is embedded. We thus ignore the procedure
body's sets of needed and modified registers when we tack it onto the
other sequence.
::: scheme
(define (tack-on-instruction-sequence seq body-seq)
  (make-instruction-sequence
   (registers-needed seq)
   (registers-modified seq)
   (append (statements seq) (statements body-seq))))
:::
`compile-if` and `compile-procedure-call` use a special combiner called
`parallel-instruction-sequences` to append the two alternative branches
that follow a test. The two branches will never be executed
sequentially; for any particular evaluation of the test, one branch or
the other will be entered. Because of this, the registers needed by the
second branch are still needed by the combined sequence, even if these
are modified by the first branch.
::: scheme
(define (parallel-instruction-sequences seq1 seq2)
  (make-instruction-sequence
   (list-union (registers-needed seq1)
               (registers-needed seq2))
   (list-union (registers-modified seq1)
               (registers-modified seq2))
   (append (statements seq1) (statements seq2))))
:::
### An Example of Compiled Code {#Section 5.5.5}
Now that we have seen all the elements of the compiler, let us examine
an example of compiled code to see how things fit together. We will
compile the definition of a recursive `factorial` procedure by calling
`compile`:
::: scheme
(compile
 '(define (factorial n)
    (if (= n 1)
        1
        (* (factorial (- n 1)) n)))
 'val
 'next)
:::
We have specified that the value of the `define` expression should be
placed in the `val` register. We don't care what the compiled code does
after executing the `define`, so our choice of `next` as the linkage
descriptor is arbitrary.
`compile` determines that the expression is a definition, so it calls
`compile-definition` to compile code to compute the value to be assigned
(targeted to `val`), followed by code to install the definition,
followed by code to put the value of the `define` (which is the symbol
`ok`) into the target register, followed finally by the linkage code.
`env` is preserved around the computation of the value, because it is
needed in order to install the definition. Because the linkage is
`next`, there is no linkage code in this case. The skeleton of the
compiled code is thus
::: scheme
$\color{SchemeDark}\langle$ *save *env* if modified by code to compute
value* $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *compilation of definition value, target
*val*, linkage *next** $\color{SchemeDark}\rangle$
$\color{SchemeDark}\langle$ *restore *env* if saved
above* $\color{SchemeDark}\rangle$ (perform (op define-variable!)
(const factorial) (reg val) (reg env)) (assign val (const ok))
:::
The expression that is to be compiled to produce the value for the
variable `factorial` is a `lambda` expression whose value is the
procedure that computes factorials. `compile` handles this by calling
`compile-lambda`, which compiles the procedure body, labels it as a new
entry point, and generates the instruction that will combine the
procedure body at the new entry point with the run-time environment and
assign the result to `val`. The sequence then skips around the compiled
procedure code, which is inserted at this point. The procedure code
itself begins by extending the procedure's definition environment by a
frame that binds the formal parameter `n` to the procedure argument.
Then comes the actual procedure body. Since this code for the value of
the variable doesn't modify the `env` register, the optional `save` and
`restore` shown above aren't generated. (The procedure code at `entry2`
isn't executed at this point, so its use of `env` is irrelevant.)
Therefore, the skeleton for the compiled code becomes
::: scheme
(assign val (op make-compiled-procedure) (label entry2) (reg env)) (goto
(label after-lambda1)) entry2 (assign env (op compiled-procedure-env)
(reg proc)) (assign env (op extend-environment) (const (n)) (reg argl)
(reg env)) $\color{SchemeDark}\langle$ *compilation of procedure
body* $\color{SchemeDark}\rangle$ after-lambda1 (perform (op
define-variable!) (const factorial) (reg val) (reg env)) (assign val
(const ok))
:::
A procedure body is always compiled (by `compile-lambda-body`) as a
sequence with target `val` and linkage `return`. The sequence in this
case consists of a single `if` expression:
::: scheme
(if (= n 1) 1 (* (factorial (- n 1)) n))
:::
`compile-if` generates code that first computes the predicate (targeted
to `val`), then checks the result and branches around the true branch if
the predicate is false. `env` and `continue` are preserved around the
predicate code, since they may be needed for the rest of the `if`
expression. Since the `if` expression is the final expression (and only
expression) in the sequence making up the procedure body, its target is
`val` and its linkage is `return`, so the true and false branches are
both compiled with target `val` and linkage `return`. (That is, the
value of the conditional, which is the value computed by either of its
branches, is the value of the procedure.)
::: scheme
⟨save continue, env if modified by predicate and needed by branches⟩
⟨compilation of predicate, target val, linkage next⟩
⟨restore continue, env if saved above⟩
(test (op false?) (reg val))
(branch (label false-branch4))
true-branch5
⟨compilation of true branch, target val, linkage return⟩
false-branch4
⟨compilation of false branch, target val, linkage return⟩
after-if3
:::
The predicate `(= n 1)` is a procedure call. This looks up the operator
(the symbol `=`) and places this value in `proc`. It then assembles the
arguments `1` and the value of `n` into `argl`. Then it tests whether
`proc` contains a primitive or a compound procedure, and dispatches to a
primitive branch or a compound branch accordingly. Both branches resume
at the `after-call` label. The requirements to preserve registers around
the evaluation of the operator and operands don't result in any saving
of registers, because in this case those evaluations don't modify the
registers in question.
::: scheme
(assign proc (op lookup-variable-value) (const =) (reg env))
(assign val (const 1))
(assign argl (op list) (reg val))
(assign val (op lookup-variable-value) (const n) (reg env))
(assign argl (op cons) (reg val) (reg argl))
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch17))
compiled-branch16
(assign continue (label after-call15))
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch17
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
after-call15
:::
The true branch, which is the constant 1, compiles (with target `val`
and linkage `return`) to
::: scheme
(assign val (const 1))
(goto (reg continue))
:::
The code for the false branch is another procedure call, where the
procedure is the value of the symbol `*`, and the arguments are `n` and
the result of another procedure call (a call to `factorial`). Each of
these calls sets up `proc` and `argl` and its own primitive and compound
branches. [Figure 5.17](#Figure 5.17) shows the complete compilation of
the definition of the `factorial` procedure. Notice that the possible
`save` and `restore` of `continue` and `env` around the predicate, shown
above, are in fact generated, because these registers are modified by
the procedure call in the predicate and needed for the procedure call
and the `return` linkage in the branches.
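Readers who want to reproduce a listing like the one in [Figure
5.17](#Figure 5.17) can print the compiler's output directly. A sketch,
assuming the `compile` and `statements` procedures described earlier in
this chapter:
::: scheme
;; Sketch: compile the definition with target val and linkage next,
;; then print each generated instruction on its own line.
(for-each (lambda (instruction)
            (display instruction)
            (newline))
          (statements
           (compile '(define (factorial n)
                       (if (= n 1) 1 (* (factorial (- n 1)) n)))
                    'val
                    'next)))
:::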
> **[]{#Exercise 5.33 label="Exercise 5.33"}Exercise 5.33:** Consider
> the following definition of a factorial procedure, which is slightly
> different from the one given above:
>
> ::: scheme
> (define (factorial-alt n)
>   (if (= n 1)
>       1
>       (* n (factorial-alt (- n 1)))))
> :::
>
> Compile this procedure and compare the resulting code with that
> produced for `factorial`. Explain any differences you find. Does
> either program execute more efficiently than the other?
> **[]{#Exercise 5.34 label="Exercise 5.34"}Exercise 5.34:** Compile the
> iterative factorial procedure
>
> ::: scheme
> (define (factorial n)
>   (define (iter product counter)
>     (if (> counter n)
>         product
>         (iter (* counter product) (+ counter 1))))
>   (iter 1 1))
> :::
>
> Annotate the resulting code, showing the essential difference between
> the code for iterative and recursive versions of `factorial` that
> makes one process build up stack space and the other run in constant
> stack space.
> **[]{#Figure 5.17 label="Figure 5.17"}Figure 5.17:** $\downarrow$
> Compilation of the definition of the `factorial` procedure.
>
> ::: smallscheme
> ;; construct the procedure and skip over code for the procedure body
> (assign val (op make-compiled-procedure) (label entry2) (reg env))
> (goto (label after-lambda1))
> entry2                       ; calls to factorial will enter here
> (assign env (op compiled-procedure-env) (reg proc))
> (assign env (op extend-environment) (const (n)) (reg argl) (reg env))
> ;; begin actual procedure body
> (save continue)
> (save env)
> ;; compute (= n 1)
> (assign proc (op lookup-variable-value) (const =) (reg env))
> (assign val (const 1))
> (assign argl (op list) (reg val))
> (assign val (op lookup-variable-value) (const n) (reg env))
> (assign argl (op cons) (reg val) (reg argl))
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch17))
> compiled-branch16
> (assign continue (label after-call15))
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch17
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> after-call15                 ; val now contains result of (= n 1)
> (restore env)
> (restore continue)
> (test (op false?) (reg val))
> (branch (label false-branch4))
> true-branch5                 ; return 1
> (assign val (const 1))
> (goto (reg continue))
> false-branch4
> ;; compute and return (* (factorial (- n 1)) n)
> (assign proc (op lookup-variable-value) (const *) (reg env))
> (save continue)
> (save proc)                  ; save * procedure
> (assign val (op lookup-variable-value) (const n) (reg env))
> (assign argl (op list) (reg val))
> (save argl)                  ; save partial argument list for *
> ;; compute (factorial (- n 1)), which is the other argument for *
> (assign proc (op lookup-variable-value) (const factorial) (reg env))
> (save proc)                  ; save factorial procedure
> ;; compute (- n 1), which is the argument for factorial
> (assign proc (op lookup-variable-value) (const -) (reg env))
> (assign val (const 1))
> (assign argl (op list) (reg val))
> (assign val (op lookup-variable-value) (const n) (reg env))
> (assign argl (op cons) (reg val) (reg argl))
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch8))
> compiled-branch7
> (assign continue (label after-call6))
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch8
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> after-call6                  ; val now contains result of (- n 1)
> (assign argl (op list) (reg val))
> (restore proc)               ; restore factorial
> ;; apply factorial
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch11))
> compiled-branch10
> (assign continue (label after-call9))
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch11
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> after-call9                  ; val now contains result of (factorial (- n 1))
> (restore argl)               ; restore partial argument list for *
> (assign argl (op cons) (reg val) (reg argl))
> (restore proc)               ; restore *
> (restore continue)
> ;; apply * and return its value
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch14))
> compiled-branch13
> ;; note that a compound procedure here is called tail-recursively
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch14
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> (goto (reg continue))
> after-call12
> after-if3
> after-lambda1
> ;; assign the procedure to the variable factorial
> (perform (op define-variable!) (const factorial) (reg val) (reg env))
> (assign val (const ok))
> :::
> **[]{#Exercise 5.35 label="Exercise 5.35"}Exercise 5.35:** What
> expression was compiled to produce the code shown in [Figure
> 5.18](#Figure 5.18)?
> **[]{#Figure 5.18 label="Figure 5.18"}Figure 5.18:** $\downarrow$ An
> example of compiler output. See [Exercise 5.35](#Exercise 5.35).
>
> ::: smallscheme
> (assign val (op make-compiled-procedure) (label entry16) (reg env))
> (goto (label after-lambda15))
> entry16
> (assign env (op compiled-procedure-env) (reg proc))
> (assign env (op extend-environment) (const (x)) (reg argl) (reg env))
> (assign proc (op lookup-variable-value) (const +) (reg env))
> (save continue)
> (save proc)
> (save env)
> (assign proc (op lookup-variable-value) (const g) (reg env))
> (save proc)
> (assign proc (op lookup-variable-value) (const +) (reg env))
> (assign val (const 2))
> (assign argl (op list) (reg val))
> (assign val (op lookup-variable-value) (const x) (reg env))
> (assign argl (op cons) (reg val) (reg argl))
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch19))
> compiled-branch18
> (assign continue (label after-call17))
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch19
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> after-call17
> (assign argl (op list) (reg val))
> (restore proc)
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch22))
> compiled-branch21
> (assign continue (label after-call20))
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch22
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> after-call20
> (assign argl (op list) (reg val))
> (restore env)
> (assign val (op lookup-variable-value) (const x) (reg env))
> (assign argl (op cons) (reg val) (reg argl))
> (restore proc)
> (restore continue)
> (test (op primitive-procedure?) (reg proc))
> (branch (label primitive-branch25))
> compiled-branch24
> (assign val (op compiled-procedure-entry) (reg proc))
> (goto (reg val))
> primitive-branch25
> (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
> (goto (reg continue))
> after-call23
> after-lambda15
> (perform (op define-variable!) (const f) (reg val) (reg env))
> (assign val (const ok))
> :::
> **[]{#Exercise 5.36 label="Exercise 5.36"}Exercise 5.36:** What order
> of evaluation does our compiler produce for operands of a combination?
> Is it left-to-right, right-to-left, or some other order? Where in the
> compiler is this order determined? Modify the compiler so that it
> produces some other order of evaluation. (See the discussion of order
> of evaluation for the explicit-control evaluator in [Section
> 5.4.1](#Section 5.4.1).) How does changing the order of operand
> evaluation affect the efficiency of the code that constructs the
> argument list?
> **[]{#Exercise 5.37 label="Exercise 5.37"}Exercise 5.37:** One way to
> understand the compiler's `preserving` mechanism for optimizing stack
> usage is to see what extra operations would be generated if we did not
> use this idea. Modify `preserving` so that it always generates the
> `save` and `restore` operations. Compile some simple expressions and
> identify the unnecessary stack operations that are generated. Compare
> the code to that generated with the `preserving` mechanism intact.
> **[]{#Exercise 5.38 label="Exercise 5.38"}Exercise 5.38:** Our
> compiler is clever about avoiding unnecessary stack operations, but it
> is not clever at all when it comes to compiling calls to the primitive
> procedures of the language in terms of the primitive operations
> supplied by the machine. For example, consider how much code is
> compiled to compute `(+ a 1)`: The code sets up an argument list in
> `argl`, puts the primitive addition procedure (which it finds by
> looking up the symbol `+` in the environment) into `proc`, and tests
> whether the procedure is primitive or compound. The compiler always
> generates code to perform the test, as well as code for primitive and
> compound branches (only one of which will be executed). We have not
> shown the part of the controller that implements primitives, but we
> presume that these instructions make use of primitive arithmetic
> operations in the machine's data paths. Consider how much less code
> would be generated if the compiler could *open-code* primitives---that
> is, if it could generate code to directly use these primitive machine
> operations. The expression `(+ a 1)` might be compiled into something
> as simple as[^328]
>
> ::: scheme
> (assign val (op lookup-variable-value) (const a) (reg env))
> (assign val (op +) (reg val) (const 1))
> :::
>
> In this exercise we will extend our compiler to support open coding of
> selected primitives. Special-purpose code will be generated for calls
> to these primitive procedures instead of the general
> procedure-application code. In order to support this, we will augment
> our machine with special argument registers `arg1` and `arg2`. The
> primitive arithmetic operations of the machine will take their inputs
> from `arg1` and `arg2`. The results may be put into `val`, `arg1`, or
> `arg2`.
>
> The compiler must be able to recognize the application of an
> open-coded primitive in the source program. We will augment the
> dispatch in the `compile` procedure to recognize the names of these
> primitives in addition to the reserved words (the special forms) it
> currently recognizes.[^329] For each special form our compiler has a
> code generator. In this exercise we will construct a family of code
> generators for the open-coded primitives.
>
> a. The open-coded primitives, unlike the special forms, all need
> their operands evaluated. Write a code generator
> `spread/arguments` for use by all the open-coding code generators.
> `spread/arguments` should take an operand list and compile the
> given operands targeted to successive argument registers. Note
> that an operand may contain a call to an open-coded primitive, so
> argument registers will have to be preserved during operand
> evaluation.
>
> b. For each of the primitive procedures `=`, `*`, `-`, and `+`, write
> a code generator that takes a combination with that operator,
> together with a target and a linkage descriptor, and produces code
> to spread the arguments into the registers and then perform the
> operation targeted to the given target with the given linkage. You
> need only handle expressions with two operands. Make `compile`
> dispatch to these code generators.
>
> c. Try your new compiler on the `factorial` example. Compare the
> resulting code with the result produced without open coding.
>
> d. Extend your code generators for `+` and `*` so that they can
> handle expressions with arbitrary numbers of operands. An
> expression with more than two operands will have to be compiled
> into a sequence of operations, each with only two inputs.
### Lexical Addressing {#Section 5.5.6}
One of the most common optimizations performed by compilers is the
optimization of variable lookup. Our compiler, as we have implemented it
so far, generates code that uses the `lookup-variable-value` operation
of the evaluator machine. This searches for a variable by comparing it
with each variable that is currently bound, working frame by frame
outward through the run-time environment. This search can be expensive
if the frames are deeply nested or if there are many variables. For
example, consider the problem of looking up the value of `x` while
evaluating the expression `(* x y z)` in an application of the procedure
that is returned by
::: scheme
(let ((x 3) (y 4))
  (lambda (a b c d e)
    (let ((y (* a b x)) (z (+ c d x)))
      (* x y z))))
:::
Since a `let` expression is just syntactic sugar for a `lambda`
combination, this expression is equivalent to
::: scheme
((lambda (x y)
   (lambda (a b c d e)
     ((lambda (y z) (* x y z))
      (* a b x)
      (+ c d x))))
 3 4)
:::
Each time `lookup-variable-value` searches for `x`, it must determine
that the symbol `x` is not `eq?` to `y` or `z` (in the first frame), nor
to `a`, `b`, `c`, `d`, or `e` (in the second frame). We will assume, for
the moment, that our programs do not use `define`---that variables are
bound only with `lambda`. Because our language is lexically scoped, the
run-time environment for any expression will have a structure that
parallels the lexical structure of the program in which the expression
appears.[^330] Thus, the compiler can know, when it analyzes the above
expression, that each time the procedure is applied the variable `x` in
`(* x y z)` will be found two frames out from the current frame and will
be the first variable in that frame.
We can exploit this fact by inventing a new kind of variable-lookup
operation, `lexical-address-lookup`, that takes as arguments an
environment and a *lexical address* that consists of two numbers: a
*frame number*, which specifies how many frames to pass over, and a
*displacement number*, which specifies how many variables to pass over
in that frame. `lexical-address-lookup` will produce the value of the
variable stored at that lexical address relative to the current
environment. If we add the `lexical-address-lookup` operation to our
machine, we can make the compiler generate code that references
variables using this operation, rather than `lookup-variable-value`.
Similarly, our compiled code can use a new `lexical-address-set!`
operation instead of `set-variable-value!`.
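To make the new operation concrete, here is one possible sketch (it
anticipates [Exercise 5.39](#Exercise 5.39) below and assumes the
metacircular evaluator's environment representation from [Section
4.1](#Section 4.1), in which a frame pairs a list of variables with a
list of values and a lexical address is a two-element list):
::: scheme
;; Sketch only: assumes each frame is a (variables . values) pair and a
;; lexical address is a list of the form (frame-number displacement).
(define (lexical-address-lookup address env)
  (let ((frame (list-ref env (car address))))
    (let ((value (list-ref (cdr frame) (cadr address))))
      (if (eq? value '*unassigned*)
          (error "Unassigned variable at" address)
          value))))
:::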
In order to generate such code, the compiler must be able to determine
the lexical address of a variable it is about to compile a reference to.
The lexical address of a variable in a program depends on where one is
in the code. For example, in the following program, the address of `x`
in expression ⟨e1⟩ is (2, 0)---two frames back and the first variable in
the frame. At that point `y` is at address (0, 0) and `c` is at address
(1, 2). In expression ⟨e2⟩, `x` is at (1, 0), `y` is at (1, 1), and `c`
is at (0, 2).
::: scheme
((lambda (x y)
   (lambda (a b c d e)
     ((lambda (y z) ⟨e1⟩)
      ⟨e2⟩
      (+ c d x))))
 3 4)
:::
One way for the compiler to produce code that uses lexical addressing is
to maintain a data structure called a *compile-time environment*. This
keeps track of which variables will be at which positions in which
frames in the run-time environment when a particular variable-access
operation is executed. The compile-time environment is a list of frames,
each containing a list of variables. (There will of course be no values
bound to the variables, since values are not computed at compile time.)
The compile-time environment becomes an additional argument to `compile`
and is passed along to each code generator. The top-level call to
`compile` uses an empty compile-time environment. When a `lambda` body
is compiled, `compile-lambda-body` extends the compile-time environment
by a frame containing the procedure's parameters, so that the sequence
making up the body is compiled with that extended environment. At each
point in the compilation, `compile-variable` and `compile-assignment`
use the compile-time environment in order to generate the appropriate
lexical addresses.
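As a small illustration of this bookkeeping (the helper names below are
hypothetical, chosen only for this sketch), the compile-time environment
can start out empty and grow by one frame of parameter names each time a
`lambda` body is compiled:
::: scheme
;; Hypothetical helpers: the compile-time environment is a list of
;; frames, and each frame is simply a list of variable names.
(define the-empty-ctenv '())
(define (extend-ctenv formals ctenv) (cons formals ctenv))
;; For the nested lambdas shown earlier, compiling the innermost body
;; would use
;;   (extend-ctenv '(y z)
;;    (extend-ctenv '(a b c d e)
;;     (extend-ctenv '(x y) the-empty-ctenv)))
;; which is ((y z) (a b c d e) (x y)).
:::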
[Exercise 5.39](#Exercise 5.39) through [Exercise 5.43](#Exercise 5.43)
describe how to complete this sketch of the lexical-addressing strategy
in order to incorporate lexical lookup into the compiler. [Exercise
5.44](#Exercise 5.44) describes another use for the compile-time
environment.
> **[]{#Exercise 5.39 label="Exercise 5.39"}Exercise 5.39:** Write a
> procedure `lexical-address-lookup` that implements the new lookup
> operation. It should take two arguments---a lexical address and a
> run-time environment---and return the value of the variable stored at
> the specified lexical address. `lexical-address-lookup` should signal
> an error if the value of the variable is the symbol
> `*unassigned*`.[^331] Also write a procedure `lexical-address-set!`
> that implements the operation that changes the value of the variable
> at a specified lexical address.
> **[]{#Exercise 5.40 label="Exercise 5.40"}Exercise 5.40:** Modify the
> compiler to maintain the compile-time environment as described above.
> That is, add a compile-time-environment argument to `compile` and the
> various code generators, and extend it in `compile-lambda-body`.
> **[]{#Exercise 5.41 label="Exercise 5.41"}Exercise 5.41:** Write a
> procedure `find-variable` that takes as arguments a variable and a
> compile-time environment and returns the lexical address of the
> variable with respect to that environment. For example, in the program
> fragment that is shown above, the compile-time environment during the
> compilation of expression ⟨e1⟩ is
> `((y z) (a b c d e) (x y))`. `find-variable` should produce
>
> ::: scheme
> (find-variable 'c '((y z) (a b c d e) (x y)))
> *(1 2)*
> (find-variable 'x '((y z) (a b c d e) (x y)))
> *(2 0)*
> (find-variable 'w '((y z) (a b c d e) (x y)))
> *not-found*
> :::
> **[]{#Exercise 5.42 label="Exercise 5.42"}Exercise 5.42:** Using
> `find-variable` from [Exercise 5.41](#Exercise 5.41), rewrite
> `compile-variable` and `compile-assignment` to output lexical-address
> instructions. In cases where `find-variable` returns `not-found` (that
> is, where the variable is not in the compile-time environment), you
> should have the code generators use the evaluator operations, as
> before, to search for the binding. (The only place a variable that is
> not found at compile time can be is in the global environment, which
> is part of the run-time environment but is not part of the
> compile-time environment.[^332] Thus, if you wish, you may have the
> evaluator operations look directly in the global environment, which
> can be obtained with the operation `(op get-global-environment)`,
> instead of having them search the whole run-time environment found in
> `env`.) Test the modified compiler on a few simple cases, such as the
> nested `lambda` combination at the beginning of this section.
> **[]{#Exercise 5.43 label="Exercise 5.43"}Exercise 5.43:** We argued
> in [Section 4.1.6](#Section 4.1.6) that internal definitions for block
> structure should not be considered "real" `define`s. Rather, a
> procedure body should be interpreted as if the internal variables
> being defined were installed as ordinary `lambda` variables
> initialized to their correct values using `set!`. [Section
> 4.1.6](#Section 4.1.6) and [Exercise 4.16](#Exercise 4.16) showed how
> to modify the metacircular interpreter to accomplish this by scanning
> out internal definitions. Modify the compiler to perform the same
> transformation before it compiles a procedure body.
> **[]{#Exercise 5.44 label="Exercise 5.44"}Exercise 5.44:** In this
> section we have focused on the use of the compile-time environment to
> produce lexical addresses. But there are other uses for compile-time
> environments. For instance, in [Exercise 5.38](#Exercise 5.38) we
> increased the efficiency of compiled code by open-coding primitive
> procedures. Our implementation treated the names of open-coded
> procedures as reserved words. If a program were to rebind such a name,
> the mechanism described in [Exercise 5.38](#Exercise 5.38) would still
> open-code it as a primitive, ignoring the new binding. For example,
> consider the procedure
>
> ::: scheme
> (lambda (+ * a b x y)
>   (+ (* a x) (* b y)))
> :::
>
> which computes a linear combination of `x` and `y`. We might call it
> with arguments `+matrix`, `*matrix`, and four matrices, but the
> open-coding compiler would still open-code the `+` and the `*` in
> `(+ (* a x) (* b y))` as primitive `+` and `*`. Modify the open-coding
> compiler to consult the compile-time environment in order to compile
> the correct code for expressions involving the names of primitive
> procedures. (The code will work correctly as long as the program does
> not `define` or `set!` these names.)
### Interfacing Compiled Code to the Evaluator {#Section 5.5.7}
We have not yet explained how to load compiled code into the evaluator
machine or how to run it. We will assume that the
explicit-control-evaluator machine has been defined as in [Section
5.4.4](#Section 5.4.4), with the additional operations specified in
[Footnote 38](#Footnote 38). We will implement a procedure
`compile-and-go` that compiles a Scheme expression, loads the resulting
object code into the evaluator machine, and causes the machine to run
the code in the evaluator global environment, print the result, and
enter the evaluator's driver loop. We will also modify the evaluator so
that interpreted expressions can call compiled procedures as well as
interpreted ones. We can then put a compiled procedure into the machine
and use the evaluator to call it:
::: scheme
(compile-and-go '(define (factorial n)
                   (if (= n 1) 1 (* (factorial (- n 1)) n))))
*;;; EC-Eval value:*
*ok*
*;;; EC-Eval input:*
(factorial 5)
*;;; EC-Eval value:*
*120*
:::
To allow the evaluator to handle compiled procedures (for example, to
evaluate the call to `factorial` above), we need to change the code at
`apply-dispatch` ([Section 5.4.1](#Section 5.4.1)) so that it recognizes
compiled procedures (as distinct from compound or primitive procedures)
and transfers control directly to the entry point of the compiled
code:[^333]
::: scheme
apply-dispatch
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-apply))
(test (op compound-procedure?) (reg proc))
(branch (label compound-apply))
(test (op compiled-procedure?) (reg proc))
(branch (label compiled-apply))
(goto (label unknown-procedure-type))
compiled-apply
(restore continue)
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
:::
Note the restore of `continue` at `compiled-apply`. Recall that the
evaluator was arranged so that at `apply-dispatch`, the continuation
would be at the top of the stack. The compiled code entry point, on the
other hand, expects the continuation to be in `continue`, so `continue`
must be restored before the compiled code is executed.
To enable us to run some compiled code when we start the evaluator
machine, we add a `branch` instruction at the beginning of the evaluator
machine, which causes the machine to go to a new entry point if the
`flag` register is set.[^334]
::: scheme
(branch (label external-entry))   ; branches if flag is set
read-eval-print-loop
(perform (op initialize-stack))
…
:::
`external-entry` assumes that the machine is started with `val`
containing the location of an instruction sequence that puts a result
into `val` and ends with `(goto (reg continue))`. Starting at this entry
point jumps to the location designated by `val`, but first assigns
`continue` so that execution will return to `print-result`, which prints
the value in `val` and then goes to the beginning of the evaluator's
read-eval-print loop.[^335]
::: scheme
external-entry
(perform (op initialize-stack))
(assign env (op get-global-environment))
(assign continue (label print-result))
(goto (reg val))
:::
Now we can use the following procedure to compile a procedure
definition, execute the compiled code, and run the read-eval-print loop
so we can try the procedure. Because we want the compiled code to return
to the location in `continue` with its result in `val`, we compile the
expression with a target of `val` and a linkage of `return`. In order to
transform the object code produced by the compiler into executable
instructions for the evaluator register machine, we use the procedure
`assemble` from the register-machine simulator ([Section
5.2.2](#Section 5.2.2)). We then initialize the `val` register to point
to the list of instructions, set the `flag` so that the evaluator will
go to `external-entry`, and start the evaluator.
::: scheme
(define (compile-and-go expression)
  (let ((instructions
         (assemble (statements
                    (compile expression 'val 'return))
                   eceval)))
    (set! the-global-environment (setup-environment))
    (set-register-contents! eceval 'val instructions)
    (set-register-contents! eceval 'flag true)
    (start eceval)))
:::
If we have set up stack monitoring, as at the end of [Section
5.4.4](#Section 5.4.4), we can examine the stack usage of compiled code:
::: scheme
(compile-and-go '(define (factorial n)
                   (if (= n 1) 1 (* (factorial (- n 1)) n))))
*(total-pushes = 0 maximum-depth = 0)*
*;;; EC-Eval value:*
*ok*
*;;; EC-Eval input:*
(factorial 5)
*(total-pushes = 31 maximum-depth = 14)*
*;;; EC-Eval value:*
*120*
:::
Compare this example with the evaluation of `(factorial 5)` using the
interpreted version of the same procedure, shown at the end of [Section
5.4.4](#Section 5.4.4). The interpreted version required 144 pushes and
a maximum stack depth of 28. This illustrates the optimization that
results from our compilation strategy.
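For this particular run the compiled code therefore uses
$31/144 \approx 0.22$ of the pushes and $14/28 = 0.5$ of the maximum
stack depth needed by the interpreted version; [Exercise
5.45](#Exercise 5.45) asks how such ratios behave as the input grows.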
#### Interpretation and compilation {#interpretation-and-compilation .unnumbered}
With the programs in this section, we can now experiment with the
alternative execution strategies of interpretation and
compilation.[^336] An interpreter raises the machine to the level of the
user program; a compiler lowers the user program to the level of the
machine language. We can regard the Scheme language (or any programming
language) as a coherent family of abstractions erected on the machine
language. Interpreters are good for interactive program development and
debugging because the steps of program execution are organized in terms
of these abstractions, and are therefore more intelligible to the
programmer. Compiled code can execute faster, because the steps of
program execution are organized in terms of the machine language, and
the compiler is free to make optimizations that cut across the
higher-level abstractions.[^337]
The alternatives of interpretation and compilation also lead to
different strategies for porting languages to new computers. Suppose
that we wish to implement Lisp for a new machine. One strategy is to
begin with the explicit-control evaluator of [Section 5.4](#Section 5.4)
and translate its instructions to instructions for the new machine. A
different strategy is to begin with the compiler and change the code
generators so that they generate code for the new machine. The second
strategy allows us to run any Lisp program on the new machine by first
compiling it with the compiler running on our original Lisp system, and
linking it with a compiled version of the run-time library.[^338] Better
yet, we can compile the compiler itself, and run this on the new machine
to compile other Lisp programs.[^339] Or we can compile one of the
interpreters of [Section 4.1](#Section 4.1) to produce an interpreter
that runs on the new machine.
> **[]{#Exercise 5.45 label="Exercise 5.45"}Exercise 5.45:** By
> comparing the stack operations used by compiled code to the stack
> operations used by the evaluator for the same computation, we can
> determine the extent to which the compiler optimizes use of the stack,
> both in speed (reducing the total number of stack operations) and in
> space (reducing the maximum stack depth). Comparing this optimized
> stack use to the performance of a special-purpose machine for the same
> computation gives some indication of the quality of the compiler.
>
> a. [Exercise 5.27](#Exercise 5.27) asked you to determine, as a
> function of $n$, the number of pushes and the maximum stack depth
> needed by the evaluator to compute $n!$ using the recursive
> factorial procedure given above. [Exercise 5.14](#Exercise 5.14)
> asked you to do the same measurements for the special-purpose
> factorial machine shown in [Figure 5.11](#Figure 5.11). Now
> perform the same analysis using the compiled `factorial`
> procedure.
>
> Take the ratio of the number of pushes in the compiled version to
> the number of pushes in the interpreted version, and do the same
> for the maximum stack depth. Since the number of operations and
> the stack depth used to compute $n!$ are linear in $n$, these
> ratios should approach constants as $n$ becomes large. What are
> these constants? Similarly, find the ratios of the stack usage in
> the special-purpose machine to the usage in the interpreted
> version.
>
> Compare the ratios for special-purpose versus interpreted code to
> the ratios for compiled versus interpreted code. You should find
> that the special-purpose machine does much better than the
> compiled code, since the hand-tailored controller code should be
> much better than what is produced by our rudimentary
> general-purpose compiler.
>
> b. Can you suggest improvements to the compiler that would help it
> generate code that would come closer in performance to the
> hand-tailored version?
> **[]{#Exercise 5.46 label="Exercise 5.46"}Exercise 5.46:** Carry out
> an analysis like the one in [Exercise 5.45](#Exercise 5.45) to
> determine the effectiveness of compiling the tree-recursive Fibonacci
> procedure
>
> ::: scheme
> (define (fib n)
>   (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
> :::
>
> compared to the effectiveness of using the special-purpose Fibonacci
> machine of [Figure 5.12](#Figure 5.12). (For measurement of the
> interpreted performance, see [Exercise 5.29](#Exercise 5.29).) For
> Fibonacci, the time resource used is not linear in $n$; hence the
> ratios of stack operations will not approach a limiting value that is
> independent of $n$.
> **[]{#Exercise 5.47 label="Exercise 5.47"}Exercise 5.47:** This
> section described how to modify the explicit-control evaluator so that
> interpreted code can call compiled procedures. Show how to modify the
> compiler so that compiled procedures can call not only primitive
> procedures and compiled procedures, but interpreted procedures as
> well. This requires modifying `compile-procedure-call` to handle the
> case of compound (interpreted) procedures. Be sure to handle all the
> same `target` and `linkage` combinations as in `compile-proc-appl`. To
> do the actual procedure application, the code needs to jump to the
> evaluator's `compound-apply` entry point. This label cannot be
> directly referenced in object code (since the assembler requires that
> all labels referenced by the code it is assembling be defined there),
> so we will add a register called `compapp` to the evaluator machine to
> hold this entry point, and add an instruction to initialize it:
>
> ::: scheme
> (assign compapp (label compound-apply))
> (branch (label external-entry))   ; branches if flag is set
> read-eval-print-loop
> …
> :::
>
> To test your code, start by defining a procedure `f` that calls a
> procedure `g`. Use `compile-and-go` to compile the definition of `f`
> and start the evaluator. Now, typing at the evaluator, define `g` and
> try to call `f`.
> **[]{#Exercise 5.48 label="Exercise 5.48"}Exercise 5.48:** The
> `compile-and-go` interface implemented in this section is awkward,
> since the compiler can be called only once (when the evaluator machine
> is started). Augment the compiler-interpreter interface by providing a
> `compile-and-run` primitive that can be called from within the
> explicit-control evaluator as follows:
>
> ::: scheme
> *;;; EC-Eval input:*
> (compile-and-run '(define (factorial n) (if (= n 1) 1 (* (factorial (- n 1)) n))))
> *;;; EC-Eval value:*
> *ok*
> *;;; EC-Eval input:*
> (factorial 5)
> *;;; EC-Eval value:*
> *120*
> :::
> **[]{#Exercise 5.49 label="Exercise 5.49"}Exercise 5.49:** As an
> alternative to using the explicit-control evaluator's read-eval-print
> loop, design a register machine that performs a
> read-compile-execute-print loop. That is, the machine should run a
> loop that reads an expression, compiles it, assembles and executes the
> resulting code, and prints the result. This is easy to run in our
> simulated setup, since we can arrange to call the procedures `compile`
> and `assemble` as "register-machine operations."
> **[]{#Exercise 5.50 label="Exercise 5.50"}Exercise 5.50:** Use the
> compiler to compile the metacircular evaluator of [Section
> 4.1](#Section 4.1) and run this program using the register-machine
> simulator. (To compile more than one definition at a time, you can
> package the definitions in a `begin`.) The resulting interpreter will
> run very slowly because of the multiple levels of interpretation, but
> getting all the details to work is an instructive exercise.
> **[]{#Exercise 5.51 label="Exercise 5.51"}Exercise 5.51:** Develop a
> rudimentary implementation of Scheme in C (or some other low-level
> language of your choice) by translating the explicit-control evaluator
> of [Section 5.4](#Section 5.4) into C. In order to run this code you
> will need to also provide appropriate storage-allocation routines and
> other run-time support.
> **[]{#Exercise 5.52 label="Exercise 5.52"}Exercise 5.52:** As a
> counterpoint to [Exercise 5.51](#Exercise 5.51), modify the compiler
> so that it compiles Scheme procedures into sequences of C
> instructions. Compile the metacircular evaluator of [Section
> 4.1](#Section 4.1) to produce a Scheme interpreter written in C.
# References {#references .unnumbered}
[]{#References label="References"}
[]{#Abelson et al. 1992 label="Abelson et al. 1992"} **Abelson**,
Harold, Andrew Berlin, Jacob Katzenelson, William McAllister, Guillermo
Rozas, Gerald Jay Sussman, and Jack Wisdom. 1992. The Supercomputer
Toolkit: A general framework for special-purpose computing.
*International Journal of High-Speed Electronics* 3(3): 337-361.
[--›](http://www.hpl.hp.com/techreports/94/HPL-94-30.html)
[]{#Allen 1978 label="Allen 1978"} **Allen**, John. 1978. *Anatomy of
Lisp*. New York: McGraw-Hill.
[]{#ANSI 1994 label="ANSI 1994"} **ansi**
x3.226-1994. *American National Standard for Information
Systems---Programming Language---Common Lisp*.
[]{#Appel 1987 label="Appel 1987"} **Appel**, Andrew W. 1987. Garbage
collection can be faster than stack allocation. *Information Processing
Letters* 25(4): 275-279.
[--›](https://www.cs.princeton.edu/~appel/papers/45.ps)
[]{#Backus 1978 label="Backus 1978"} **Backus**, John. 1978. Can
programming be liberated from the von Neumann style? *Communications of
the acm* 21(8): 613-641.
[--›](http://worrydream.com/refs/Backus-CanProgrammingBeLiberated.pdf)
[]{#Baker (1978) label="Baker (1978)"} **Baker**, Henry G., Jr. 1978.
List processing in real time on a serial computer. *Communications of
the acm* 21(4): 280-293.
[--›](http://dspace.mit.edu/handle/1721.1/41976)
[]{#Batali et al. 1982 label="Batali et al. 1982"} **Batali**, John,
Neil Mayle, Howard Shrobe, Gerald Jay Sussman, and Daniel Weise. 1982.
The Scheme-81 architecture---System and chip. In *Proceedings of the
mit Conference on Advanced Research in
vlsi*, edited by Paul Penfield, Jr. Dedham,
ma: Artech House.
[]{#Borning (1977) label="Borning (1977)"} **Borning**, Alan. 1977.
ThingLab---An object-oriented system for building simulations using
constraints. In *Proceedings of the 5th International Joint Conference
on Artificial Intelligence*.
[--›](http://ijcai.org/Past%20Proceedings/IJCAI-77-VOL1/PDF/085.pdf)
[]{#Borodin and Munro (1975) label="Borodin and Munro (1975)"}
**Borodin**, Alan, and Ian Munro. 1975. *The Computational Complexity of
Algebraic and Numeric Problems*. New York: American Elsevier.
[]{#Chaitin 1975 label="Chaitin 1975"} **Chaitin**, Gregory J. 1975.
Randomness and mathematical proof. *Scientific American* 232(5): 47-52.
[--›](https://www.cs.auckland.ac.nz/~chaitin/sciamer.html)
[]{#Church (1941) label="Church (1941)"} **Church**, Alonzo. 1941. *The
Calculi of Lambda-Conversion*. Princeton, N.J.: Princeton University
Press.
[]{#Clark (1978) label="Clark (1978)"} **Clark**, Keith L. 1978.
Negation as failure. In *Logic and Data Bases*. New York: Plenum Press,
pp. 293-322. [--›](http://www.doc.ic.ac.uk/~klc/neg.html)
[]{#Clinger (1982) label="Clinger (1982)"} **Clinger**, William. 1982.
Nondeterministic call by need is neither lazy nor by name. In
*Proceedings of the acm Symposium on Lisp and Functional
Programming*, pp. 226-234.
[]{#Clinger and Rees 1991 label="Clinger and Rees 1991"} **Clinger**,
William, and Jonathan Rees. 1991. Macros that work. In *Proceedings of
the 1991 acm Conference on Principles of Programming
Languages*, pp. 155-162.
[--›](http://mumble.net/~jar/pubs/macros_that_work.ps)
[]{#Colmerauer et al. 1973 label="Colmerauer et al. 1973"}
**Colmerauer** A., H. Kanoui, R. Pasero, and P. Roussel. 1973. Un
système de communication homme-machine en français. Technical report,
Groupe Intelligence Artificielle, Université d'Aix Marseille, Luminy.
[--›](http://alain.colmerauer.free.fr/alcol/ArchivesPublications/HommeMachineFr/HoMa.pdf)
[]{#Cormen et al. 1990 label="Cormen et al. 1990"} **Cormen**, Thomas,
Charles Leiserson, and Ronald Rivest. 1990. *Introduction to
Algorithms*. Cambridge, ma: mit Press.
[]{#Darlington et al. 1982 label="Darlington et al. 1982"}
**Darlington**, John, Peter Henderson, and David Turner. 1982.
*Functional Programming and Its Applications*. New York: Cambridge
University Press.
[]{#Dijkstra 1968a label="Dijkstra 1968a"} **Dijkstra**, Edsger W.
1968a. The structure of the "the" multiprogramming system.
*Communications of the acm* 11(5): 341-346.
[--›](http://www.cs.utexas.edu/users/EWD/ewd01xx/EWD196.PDF)
[]{#Dijkstra 1968b label="Dijkstra 1968b"} **Dijkstra**, Edsger W.
1968b. Cooperating sequential processes. In *Programming Languages*,
edited by F. Genuys. New York: Academic Press, pp. 43-112.
[--›](http://www.cs.utexas.edu/users/EWD/ewd01xx/EWD123.PDF)
[]{#Dinesman 1968 label="Dinesman 1968"} **Dinesman**, Howard P. 1968.
*Superior Mathematical Puzzles*. New York: Simon and Schuster.
[]{#deKleer et al. 1977 label="deKleer et al. 1977"} **deKleer**, Johan,
Jon Doyle, Guy Steele, and Gerald J. Sussman. 1977. amord:
Explicit control of reasoning. In *Proceedings of the acm
Symposium on Artificial Intelligence and Programming Languages*, pp.
116-125. [--›](http://dspace.mit.edu/handle/1721.1/5750)
[]{#Doyle (1979) label="Doyle (1979)"} **Doyle**, Jon. 1979. A truth
maintenance system. *Artificial Intelligence* 12: 231-272.
[--›](http://dspace.mit.edu/handle/1721.1/5733)
[]{#Feigenbaum and Shrobe 1993 label="Feigenbaum and Shrobe 1993"}
**Feigenbaum**, Edward, and Howard Shrobe. 1993. The Japanese National
Fifth Generation Project: Introduction, survey, and evaluation. In
*Future Generation Computer Systems*, vol. 9, pp. 105-117.
[--›](https://saltworks.stanford.edu/assets/kv359wz9060.pdf)
[]{#Feeley (1986) label="Feeley (1986)"} **Feeley**, Marc. 1986. Deux
approches à l'implantation du language Scheme. Masters thesis,
Université de Montréal.
[--›](http://www.iro.umontreal.ca/~feeley/papers/FeeleyMSc.pdf)
[]{#Feeley and Lapalme 1987 label="Feeley and Lapalme 1987"} **Feeley**,
Marc and Guy Lapalme. 1987. Using closures for code generation. *Journal
of Computer Languages* 12(1): 47-66.
[--›](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.6978)
**Feller**, William. 1957. *An Introduction to Probability Theory and
Its Applications*, volume 1. New York: John Wiley & Sons.
[]{#Fenichel and Yochelson (1969) label="Fenichel and Yochelson (1969)"}
**Fenichel**, R., and J. Yochelson. 1969. A Lisp garbage collector for
virtual memory computer systems. *Communications of the
acm* 12(11): 611-612.
[--›](https://www.cs.purdue.edu/homes/hosking/690M/p611-fenichel.pdf)
[]{#Floyd (1967) label="Floyd (1967)"} **Floyd**, Robert. 1967.
Nondeterministic algorithms. *jacm*, 14(4): 636-644.
[--›](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.332.36)
[]{#Forbus and deKleer 1993 label="Forbus and deKleer 1993"} **Forbus**,
Kenneth D., and Johan deKleer. 1993. *Building Problem Solvers*.
Cambridge, ma: mit Press.
[]{#Friedman and Wise (1976) label="Friedman and Wise (1976)"}
**Friedman**, Daniel P., and David S. Wise. 1976. cons
should not evaluate its arguments. In *Automata, Languages, and
Programming: Third International Colloquium*, edited by S. Michaelson
and R. Milner, pp. 257-284.
[--›](https://www.cs.indiana.edu/cgi-bin/techreports/TRNNN.cgi?trnum=TR44)
[]{#Friedman et al. 1992 label="Friedman et al. 1992"} **Friedman**,
Daniel P., Mitchell Wand, and Christopher T. Haynes. 1992. *Essentials
of Programming Languages*. Cambridge, ma:
mit Press/ McGraw-Hill.
[]{#Gabriel 1988 label="Gabriel 1988"} **Gabriel**, Richard P. 1988. The
Why of *Y*. *Lisp Pointers* 2(2): 15-25.
[--›](http://www.dreamsongs.com/Files/WhyOfY.pdf)
**Goldberg**, Adele, and David Robson. 1983. *Smalltalk-80: The Language
and Its Implementation*. Reading, ma: Addison-Wesley.
[--›](http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook.pdf)
[]{#Gordon et al. 1979 label="Gordon et al. 1979"} **Gordon**, Michael,
Robin Milner, and Christopher Wadsworth. 1979. *Edinburgh
lcf*. Lecture Notes in Computer Science, volume 78. New
York: Springer-Verlag.
[]{#Gray and Reuter 1993 label="Gray and Reuter 1993"} **Gray**, Jim,
and Andreas Reuter. 1993. *Transaction Processing: Concepts and Models*.
San Mateo, ca: Morgan-Kaufman.
[]{#Green 1969 label="Green 1969"} **Green**, Cordell. 1969. Application
of theorem proving to problem solving. In *Proceedings of the
International Joint Conference on Artificial Intelligence*, pp. 219-240.
[--›](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.81.9820)
[]{#Green and Raphael (1968) label="Green and Raphael (1968)"}
**Green**, Cordell, and Bertram Raphael. 1968. The use of
theorem-proving techniques in question-answering systems. In
*Proceedings of the acm National Conference*, pp. 169-181.
[--›](http://www.kestrel.edu/home/people/green/publications/green-raphael.pdf)
[]{#Griss 1981 label="Griss 1981"} **Griss**, Martin L. 1981. Portable
Standard Lisp, a brief overview. Utah Symbolic Computation Group
Operating Note 58, University of Utah.
[]{#Guttag 1977 label="Guttag 1977"} **Guttag**, John V. 1977. Abstract
data types and the development of data structures. *Communications of
the acm* 20(6): 396-404.
[--›](http://www.unc.edu/~stotts/comp723/guttagADT77.pdf)
[]{#Hamming 1980 label="Hamming 1980"} **Hamming**, Richard W. 1980.
*Coding and Information Theory*. Englewood Cliffs, N.J.: Prentice-Hall.
[]{#Hanson 1990 label="Hanson 1990"} **Hanson**, Christopher P. 1990.
Efficient stack allocation for tail-recursive languages. In *Proceedings
of acm Conference on Lisp and Functional Programming*, pp.
106-118.
[--›](https://groups.csail.mit.edu/mac/ftpdir/users/cph/links.ps.gz)
[]{#Hanson 1991 label="Hanson 1991"} **Hanson**, Christopher P. 1991. A
syntactic closures macro facility. *Lisp Pointers*, 4(3).
[--›](http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/synclo.ps)
[]{#Hardy 1921 label="Hardy 1921"} **Hardy**, Godfrey H. 1921. Srinivasa
Ramanujan. *Proceedings of the London Mathematical Society*
xix(2).
[]{#Hardy and Wright 1960 label="Hardy and Wright 1960"} **Hardy**,
Godfrey H., and E. M. Wright. 1960. *An Introduction to the Theory of
Numbers*. 4th edition. New York: Oxford University Press.
[--›](https://archive.org/details/AnIntroductionToTheTheoryOfNumbers-4thEd-G.h.HardyE.m.Wright)
[]{#Havender (1968) label="Havender (1968)"} **Havender**, J. 1968.
Avoiding deadlocks in multi-tasking systems. *ibm Systems
Journal* 7(2): 74-84.
[]{#Hearn 1969 label="Hearn 1969"} **Hearn**, Anthony C. 1969. Standard
Lisp. Technical report aim-90, Artificial Intelligence
Project, Stanford University.
[--›](http://www.softwarepreservation.org/projects/LISP/stanford/Hearn-StandardLisp-AIM-90.pdf)
[]{#Henderson 1980 label="Henderson 1980"} **Henderson**, Peter. 1980.
*Functional Programming: Application and Implementation*. Englewood
Cliffs, N.J.: Prentice-Hall.
[]{#Henderson 1982 label="Henderson 1982"} **Henderson**, Peter. 1982.
Functional Geometry. In *Conference Record of the 1982 acm
Symposium on Lisp and Functional Programming*, pp. 179-187.
[--›](http://pmh-systems.co.uk/phAcademic/papers/funcgeo.pdf), [2002
version --›](http://eprints.soton.ac.uk/257577/1/funcgeo2.pdf)
[]{#Hewitt (1969) label="Hewitt (1969)"} **Hewitt**, Carl E. 1969.
planner: A language for proving theorems in robots. In
*Proceedings of the International Joint Conference on Artificial
Intelligence*, pp. 295-301.
[--›](http://dspace.mit.edu/handle/1721.1/6171)
[]{#Hewitt (1977) label="Hewitt (1977)"} **Hewitt**, Carl E. 1977.
Viewing control structures as patterns of passing messages. *Journal of
Artificial Intelligence* 8(3): 323-364.
[--›](http://dspace.mit.edu/handle/1721.1/6272)
[]{#Hoare (1972) label="Hoare (1972)"} **Hoare**, C. A. R. 1972. Proof
of correctness of data representations. *Acta Informatica* 1(1).
[]{#Hodges 1983 label="Hodges 1983"} **Hodges**, Andrew. 1983. *Alan
Turing: The Enigma*. New York: Simon and Schuster.
[]{#Hofstadter 1979 label="Hofstadter 1979"} **Hofstadter**, Douglas R.
1979. *Gödel, Escher, Bach: An Eternal Golden Braid*. New York: Basic
Books.
[]{#Hughes 1990 label="Hughes 1990"} **Hughes**, R. J. M. 1990. Why
functional programming matters. In *Research Topics in Functional
Programming*, edited by David Turner. Reading, ma:
Addison-Wesley, pp. 17-42.
[--›](http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf)
[]{#IEEE 1990 label="IEEE 1990"} **ieee** Std 1178-1990.
1990. *ieee Standard for the Scheme Programming Language*.
[]{#Ingerman et al. 1960 label="Ingerman et al. 1960"} **Ingerman**,
Peter, Edgar Irons, Kirk Sattley, and Wallace Feurzeig; assisted by M.
Lind, Herbert Kanner, and Robert Floyd. 1960. thunks: A
way of compiling procedure statements, with some comments on procedure
declarations. Unpublished manuscript. (Also, private communication from
Wallace Feurzeig.)
[]{#Kaldewaij 1990 label="Kaldewaij 1990"} **Kaldewaij**, Anne. 1990.
*Programming: The Derivation of Algorithms*. New York: Prentice-Hall.
[]{#Knuth (1973) label="Knuth (1973)"} **Knuth**, Donald E. 1973.
*Fundamental Algorithms*. Volume 1 of *The Art of Computer Programming*.
2nd edition. Reading, ma: Addison-Wesley.
[]{#Knuth 1981 label="Knuth 1981"} **Knuth**, Donald E. 1981.
*Seminumerical Algorithms*. Volume 2 of *The Art of Computer
Programming*. 2nd edition. Reading, ma: Addison-Wesley.
[]{#Kohlbecker 1986 label="Kohlbecker 1986"} **Kohlbecker**, Eugene
Edmund, Jr. 1986. Syntactic extensions in the programming language Lisp.
Ph.D. thesis, Indiana University.
[--›](http://www.ccs.neu.edu/scheme/pubs/dissertation-kohlbecker.pdf)
[]{#Konopasek and Jayaraman 1984 label="Konopasek and Jayaraman 1984"}
**Konopasek**, Milos, and Sundaresan Jayaraman. 1984. *The TK!Solver
Book: A Guide to Problem-Solving in Science, Engineering, Business, and
Education*. Berkeley, ca: Osborne/McGraw-Hill.
[]{#Kowalski (1973; 1979) label="Kowalski (1973; 1979)"} **Kowalski**,
Robert. 1973. Predicate logic as a programming language. Technical
report 70, Department of Computational Logic, School of Artificial
Intelligence, University of Edinburgh.
[--›](http://www.doc.ic.ac.uk/~rak/papers/IFIP%2074.pdf)
**Kowalski**, Robert. 1979. *Logic for Problem Solving*. New York:
North-Holland.
[--›](http://www.doc.ic.ac.uk/%7Erak/papers/LogicForProblemSolving.pdf)
[]{#Lamport (1978) label="Lamport (1978)"} **Lamport**, Leslie. 1978.
Time, clocks, and the ordering of events in a distributed system.
*Communications of the acm* 21(7): 558-565.
[--›](http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf)
[]{#Lampson et al. 1981 label="Lampson et al. 1981"} **Lampson**,
Butler, J. J. Horning, R. London, J. G. Mitchell, and G. K. Popek. 1981.
Report on the programming language Euclid. Technical report, Computer
Systems Research Group, University of Toronto.
[--›](http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-81-12_Report_On_The_Programming_Language_Euclid.pdf)
[]{#Landin (1965) label="Landin (1965)"} **Landin**, Peter. 1965. A
correspondence between Algol 60 and Church's lambda notation: Part I.
*Communications of the acm* 8(2): 89-101.
[]{#Lieberman and Hewitt 1983 label="Lieberman and Hewitt 1983"}
**Lieberman**, Henry, and Carl E. Hewitt. 1983. A real-time garbage
collector based on the lifetimes of objects. *Communications of the
acm* 26(6): 419-429.
[--›](http://dspace.mit.edu/handle/1721.1/6335)
[]{#Liskov and Zilles (1975) label="Liskov and Zilles (1975)"}
**Liskov**, Barbara H., and Stephen N. Zilles. 1975. Specification
techniques for data abstractions. *ieee Transactions on
Software Engineering* 1(1): 7-19.
[--›](http://csg.csail.mit.edu/CSGArchives/memos/Memo-117.pdf)
[]{#McAllester (1978; 1980) label="McAllester (1978; 1980)"}
**McAllester**, David Allen. 1978. A three-valued truth-maintenance
system. Memo 473, mit Artificial Intelligence Laboratory.
[--›](http://dspace.mit.edu/handle/1721.1/6296)
**McAllester**, David Allen. 1980. An outlook on truth maintenance. Memo
551, mit Artificial Intelligence Laboratory.
[--›](http://dspace.mit.edu/handle/1721.1/6327)
[]{#McCarthy 1960 label="McCarthy 1960"} **McCarthy**, John. 1960.
Recursive functions of symbolic expressions and their computation by
machine. *Communications of the acm* 3(4): 184-195.
[--›](http://www-formal.stanford.edu/jmc/recursive.pdf)
[]{#McCarthy 1963 label="McCarthy 1963"} **McCarthy**, John. 1963. A
basis for a mathematical theory of computation. In *Computer Programming
and Formal Systems*, edited by P. Braffort and D. Hirschberg.
North-Holland. [--›](http://www-formal.stanford.edu/jmc/basis.html)
[]{#McCarthy 1978 label="McCarthy 1978"} **McCarthy**, John. 1978. The
history of Lisp. In *Proceedings of the acm
sigplan Conference on the History of Programming
Languages*.
[--›](http://www-formal.stanford.edu/jmc/history/lisp/lisp.html)
[]{#McCarthy et al. 1965 label="McCarthy et al. 1965"} **McCarthy**,
John, P. W. Abrahams, D. J. Edwards, T. P. Hart, and M. I. Levin. 1965.
*Lisp 1.5 Programmer's Manual*. 2nd edition. Cambridge,
ma: mit Press.
[--›](http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programmers%20Manual.pdf/view)
[]{#McDermott and Sussman (1972) label="McDermott and Sussman (1972)"}
**McDermott**, Drew, and Gerald Jay Sussman. 1972. Conniver reference
manual. Memo 259, mit Artificial Intelligence Laboratory.
[--›](http://dspace.mit.edu/handle/1721.1/6203)
[]{#Miller 1976 label="Miller 1976"} **Miller**, Gary L. 1976. Riemann's
Hypothesis and tests for primality. *Journal of Computer and System
Sciences* 13(3): 300-317.
[--›](http://www.cs.cmu.edu/~glmiller/Publications/b2hd-Mi76.html)
[]{#Miller and Rozas 1994 label="Miller and Rozas 1994"} **Miller**,
James S., and Guillermo J. Rozas. 1994. Garbage collection is fast, but
a stack is faster. Memo 1462, mit Artificial Intelligence
Laboratory. [--›](http://dspace.mit.edu/handle/1721.1/6622)
[]{#Moon 1978 label="Moon 1978"} **Moon**, David. 1978. MacLisp
reference manual, Version 0. Technical report, mit
Laboratory for Computer Science.
[--›](http://www.softwarepreservation.org/projects/LISP/MIT/Moon-MACLISP_Reference_Manual-Apr_08_1974.pdf/view)
[]{#Moon and Weinreb 1981 label="Moon and Weinreb 1981"} **Moon**,
David, and Daniel Weinreb. 1981. Lisp machine manual. Technical report,
mit Artificial Intelligence Laboratory.
[--›](http://www.unlambda.com/lmman/index.html)
[]{#Morris et al. 1980 label="Morris et al. 1980"} **Morris**, J. H.,
Eric Schmidt, and Philip Wadler. 1980. Experience with an applicative
string processing language. In *Proceedings of the 7th Annual
acm sigact/sigplan Symposium
on the Principles of Programming Languages*.
[]{#Phillips 1934 label="Phillips 1934"} **Phillips**, Hubert. 1934.
*The Sphinx Problem Book*. London: Faber and Faber.
[]{#Pitman 1983 label="Pitman 1983"} **Pitman**, Kent. 1983. The revised
MacLisp Manual (Saturday evening edition). Technical report 295,
mit Laboratory for Computer Science.
[--›](http://maclisp.info/pitmanual)
[]{#Rabin 1980 label="Rabin 1980"} **Rabin**, Michael O. 1980.
Probabilistic algorithm for testing primality. *Journal of Number
Theory* 12: 128-138.
[]{#Raymond 1993 label="Raymond 1993"} **Raymond**, Eric. 1993. *The New
Hacker's Dictionary*. 2nd edition. Cambridge, ma:
mit Press. [--›](http://www.catb.org/jargon/)
**Raynal**, Michel. 1986. *Algorithms for Mutual Exclusion*. Cambridge,
ma: mit Press.
[]{#Rees and Adams 1982 label="Rees and Adams 1982"} **Rees**, Jonathan
A., and Norman I. Adams iv. 1982. T: A dialect of Lisp or,
lambda: The ultimate software tool. In *Conference Record of the 1982
acm Symposium on Lisp and Functional Programming*, pp.
114-122. [--›](http://people.csail.mit.edu/riastradh/t/adams82t.pdf)
**Rees**, Jonathan, and William Clinger (eds). 1991. The $\rm revised^4$
report on the algorithmic language Scheme. *Lisp Pointers*, 4(3).
[--›](http://people.csail.mit.edu/jaffer/r4rs.pdf)
[]{#Rivest et al. (1977) label="Rivest et al. (1977)"} **Rivest**,
Ronald, Adi Shamir, and Leonard Adleman. 1977. A method for obtaining
digital signatures and public-key cryptosystems. Technical memo
lcs/tm82, mit Laboratory for
Computer Science. [--›](http://people.csail.mit.edu/rivest/Rsapaper.pdf)
[]{#Robinson 1965 label="Robinson 1965"} **Robinson**, J. A. 1965. A
machine-oriented logic based on the resolution principle. *Journal of
the acm* 12(1): 23.
[]{#Robinson 1983 label="Robinson 1983"} **Robinson**, J. A. 1983. Logic
programming---Past, present, and future. *New Generation Computing* 1:
107-124.
[]{#Spafford 1989 label="Spafford 1989"} **Spafford**, Eugene H. 1989.
The Internet Worm: Crisis and aftermath. *Communications of the
acm* 32(6): 678-688.
[--›](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.8503&rep=rep1&type=pdf)
[]{#Steele 1977 label="Steele 1977"} **Steele**, Guy Lewis, Jr. 1977.
Debunking the "expensive procedure call" myth. In *Proceedings of the
National Conference of the acm*, pp. 153-62.
[--›](http://dspace.mit.edu/handle/1721.1/5753)
[]{#Steele 1982 label="Steele 1982"} **Steele**, Guy Lewis, Jr. 1982. An
overview of Common Lisp. In *Proceedings of the acm
Symposium on Lisp and Functional Programming*, pp. 98-107.
[]{#Steele 1990 label="Steele 1990"} **Steele**, Guy Lewis, Jr. 1990.
*Common Lisp: The Language*. 2nd edition. Digital Press.
[--›](http://www.cs.cmu.edu/Groups/AI/html/cltl/cltl2.html)
[]{#Steele and Sussman 1975 label="Steele and Sussman 1975"} **Steele**,
Guy Lewis, Jr., and Gerald Jay Sussman. 1975. Scheme: An interpreter for
the extended lambda calculus. Memo 349, mit Artificial
Intelligence Laboratory. [--›](http://dspace.mit.edu/handle/1721.1/5794)
[]{#Steele et al. 1983 label="Steele et al. 1983"} **Steele**, Guy
Lewis, Jr., Donald R. Woods, Raphael A. Finkel, Mark R. Crispin, Richard
M. Stallman, and Geoffrey S. Goodfellow. 1983. *The Hacker's
Dictionary*. New York: Harper & Row.
[--›](http://www.dourish.com/goodies/jargon.html)
[]{#Stoy 1977 label="Stoy 1977"} **Stoy**, Joseph E. 1977. *Denotational
Semantics*. Cambridge, ma: mit Press.
[]{#Sussman and Stallman 1975 label="Sussman and Stallman 1975"}
**Sussman**, Gerald Jay, and Richard M. Stallman. 1975. Heuristic
techniques in computer-aided circuit analysis. *ieee
Transactions on Circuits and Systems* cas-22(11): 857-865.
[--›](http://dspace.mit.edu/handle/1721.1/5803)
[]{#Sussman and Steele 1980 label="Sussman and Steele 1980"}
**Sussman**, Gerald Jay, and Guy Lewis Steele Jr. 1980. Constraints---A
language for expressing almost-hierarchical descriptions. *AI Journal*
14: 1-39. [--›](http://dspace.mit.edu/handle/1721.1/6312)
[]{#Sussman and Wisdom 1992 label="Sussman and Wisdom 1992"}
**Sussman**, Gerald Jay, and Jack Wisdom. 1992. Chaotic evolution of the
solar system. *Science* 257: 256-262.
[--›](http://groups.csail.mit.edu/mac/users/wisdom/ss-chaos.pdf)
[]{#Sussman et al. (1971) label="Sussman et al. (1971)"} **Sussman**,
Gerald Jay, Terry Winograd, and Eugene Charniak. 1971. Microplanner
reference manual. Memo 203a, mit Artificial
Intelligence Laboratory. [--›](http://dspace.mit.edu/handle/1721.1/6184)
[]{#Sutherland (1963) label="Sutherland (1963)"} **Sutherland**, Ivan E.
1963. sketchpad: A man-machine graphical communication
system. Technical report 296, mit Lincoln Laboratory.
[--›](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf)
[]{#Teitelman 1974 label="Teitelman 1974"} **Teitelman**, Warren. 1974.
Interlisp reference manual. Technical report, Xerox Palo Alto Research
Center.
[--›](http://www.softwarepreservation.org/projects/LISP/interlisp/Interlisp-Oct_1974.pdf/view)
[]{#Thatcher et al. 1978 label="Thatcher et al. 1978"} **Thatcher**,
James W., Eric G. Wagner, and Jesse B. Wright. 1978. Data type
specification: Parameterization and the power of specification
techniques. In *Conference Record of the Tenth Annual acm
Symposium on Theory of Computing*, pp. 119-132.
[]{#Turner 1981 label="Turner 1981"} **Turner**, David. 1981. The future
of applicative languages. In *Proceedings of the 3rd European Conference
on Informatics*, Lecture Notes in Computer Science, volume 123. New
York: Springer-Verlag, pp. 334-348.
[]{#Wand 1980 label="Wand 1980"} **Wand**, Mitchell. 1980.
Continuation-based program transformation strategies. *Journal of the
acm* 27(1): 164-180.
[--›](http://www.diku.dk/OLD/undervisning/2005e/224/papers/Wand80.pdf)
[]{#Waters (1979) label="Waters (1979)"} **Waters**, Richard C. 1979. A
method for analyzing loop programs. *ieee Transactions on
Software Engineering* 5(3): 237-247.
**Winograd**, Terry. 1971. Procedures as a representation for data in a
computer program for understanding natural language. Technical report
ai tr-17, mit Artificial Intelligence
Laboratory. [--›](http://dspace.mit.edu/handle/1721.1/7095)
[]{#Winston 1992 label="Winston 1992"} **Winston**, Patrick. 1992.
*Artificial Intelligence*. 3rd edition. Reading, ma:
Addison-Wesley.
[]{#Zabih et al. 1987 label="Zabih et al. 1987"} **Zabih**, Ramin, David
McAllester, and David Chapman. 1987. Non-deterministic Lisp with
dependency-directed backtracking. *aaai-87*, pp. 59-64.
[--›](http://www.aaai.org/Papers/AAAI/1987/AAAI87-011.pdf)
[]{#Zippel (1979) label="Zippel (1979)"} **Zippel**, Richard. 1979.
Probabilistic algorithms for sparse polynomials. Ph.D. dissertation,
Department of Electrical Engineering and Computer Science,
mit.
[]{#Zippel 1993 label="Zippel 1993"} **Zippel**, Richard. 1993.
*Effective Polynomial Computation*. Boston, ma: Kluwer
Academic Publishers.
# List of Exercises {#list-of-exercises .unnumbered}
[]{#List of Exercises label="List of Exercises"}
# List of Figures {#list-of-figures .unnumbered}
[]{#List of Figures label="List of Figures"}
# Colophon {#colophon .unnumbered}
[]{#Colophon label="Colophon"}
On the cover page is Agostino Ramelli's
bookwheel mechanism from 1588. It could be seen as an early hypertext
navigation aid. This image of the engraving is hosted by J. E. Johnson
of [New
Gottland](http://newgottland.com/2012/02/09/before-the-ereader-there-was-the-wheelreader/ramelli_bookwheel_1032px/).
The typefaces are Linux Libertine for body text and Linux Biolinum for
headings, both by Philipp H. Poll. Typewriter face is Inconsolata
created by Raph Levien and supplemented by Dimosthenis Kaponis and
Takashi Tanigawa in the form of Inconsolata lgc. The cover
page typeface is Alegreya, designed by Juan Pablo del Peral.
Graphic design and typography are done by Andres Raba. Texinfo source is
converted to LaTeX by a Perl script and compiled to pdf by
XeLaTeX. Diagrams are drawn with Inkscape.
[^1]: The *Lisp 1 Programmer's Manual* appeared in 1960, and the *Lisp
1.5 Programmer's Manual* ([McCarthy et al.
1965](#McCarthy et al. 1965)) was published in 1962. The early
history of Lisp is described in [McCarthy 1978](#McCarthy 1978).
[^2]: The two dialects in which most major Lisp programs of the 1970s
were written are MacLisp ([Moon 1978](#Moon 1978); [Pitman
1983](#Pitman 1983)), developed at the mit Project
mac, and Interlisp ([Teitelman
1974](#Teitelman 1974)), developed at Bolt Beranek and Newman Inc.
and the Xerox Palo Alto Research Center. Portable Standard Lisp
([Hearn 1969](#Hearn 1969); [Griss 1981](#Griss 1981)) was a Lisp
dialect designed to be easily portable between different machines.
MacLisp spawned a number of subdialects, such as Franz Lisp, which
was developed at the University of California at Berkeley, and
Zetalisp ([Moon and Weinreb 1981](#Moon and Weinreb 1981)), which
was based on a special-purpose processor designed at the
mit Artificial Intelligence Laboratory to run Lisp
very efficiently. The Lisp dialect used in this book, called Scheme
([Steele and Sussman 1975](#Steele and Sussman 1975)), was invented
in 1975 by Guy Lewis Steele Jr. and Gerald Jay Sussman of the
mit Artificial Intelligence Laboratory and later
reimplemented for instructional use at mit. Scheme
became an ieee standard in 1990 ([IEEE
1990](#IEEE 1990)). The Common Lisp dialect ([Steele
1982](#Steele 1982), [Steele 1990](#Steele 1990)) was developed by
the Lisp community to combine features from the earlier Lisp
dialects to make an industrial standard for Lisp. Common Lisp became
an ansi standard in 1994 ([ANSI 1994](#ANSI 1994)).
[^3]: One such special application was a breakthrough computation of
scientific importance---an integration of the motion of the Solar
System that extended previous results by nearly two orders of
magnitude, and demonstrated that the dynamics of the Solar System is
chaotic. This computation was made possible by new integration
algorithms, a special-purpose compiler, and a special-purpose
computer all implemented with the aid of software tools written in
Lisp ([Abelson et al. 1992](#Abelson et al. 1992); [Sussman and
Wisdom 1992](#Sussman and Wisdom 1992)).
[^4]: The characterization of numbers as "simple data" is a barefaced
bluff. In fact, the treatment of numbers is one of the trickiest and
most confusing aspects of any programming language. Some typical
issues involved are these: Some computer systems distinguish
*integers*, such as 2, from *real numbers*, such as 2.71. Is the
real number 2.00 different from the integer 2? Are the arithmetic
operations used for integers the same as the operations used for
real numbers? Does 6 divided by 2 produce 3, or 3.0? How large a
number can we represent? How many decimal places of accuracy can we
represent? Is the range of integers the same as the range of real
numbers? Above and beyond these questions, of course, lies a
collection of issues concerning roundoff and truncation errors---the
entire science of numerical analysis. Since our focus in this book
is on large-scale program design rather than on numerical
techniques, we are going to ignore these problems. The numerical
examples in this chapter will exhibit the usual roundoff behavior
that one observes when using arithmetic operations that preserve a
limited number of decimal places of accuracy in noninteger
operations.
[^5]: Throughout this book, when we wish to emphasize the distinction
between the input typed by the user and the response printed by the
interpreter, we will show the latter in slanted characters.
[^6]: Lisp systems typically provide features to aid the user in
formatting expressions. Two especially useful features are one that
automatically indents to the proper pretty-print position whenever a
new line is started and one that highlights the matching left
parenthesis whenever a right parenthesis is typed.
[^7]: Lisp obeys the convention that every expression has a value. This
convention, together with the old reputation of Lisp as an
inefficient language, is the source of the quip by Alan Perlis
(paraphrasing Oscar Wilde) that "Lisp programmers know the value of
everything but the cost of nothing."
[^8]: In this book, we do not show the interpreter's response to
evaluating definitions, since this is highly
implementation-dependent.
[^9]: [Chapter 3](#Chapter 3) will show that this notion of environment
is crucial, both for understanding how the interpreter works and for
implementing interpreters.
[^10]: It may seem strange that the evaluation rule says, as part of the
first step, that we should evaluate the leftmost element of a
combination, since at this point that can only be an operator such
as `+` or `*` representing a built-in primitive procedure such as
addition or multiplication. We will see later that it is useful to
be able to work with combinations whose operators are themselves
compound expressions.
[^11]: Special syntactic forms that are simply convenient alternative
surface structures for things that can be written in more uniform
ways are sometimes called *syntactic sugar*, to use a phrase coined
by Peter Landin. In comparison with users of other languages, Lisp
programmers, as a rule, are less concerned with matters of syntax.
(By contrast, examine any Pascal manual and notice how much of it is
devoted to descriptions of syntax.) This disdain for syntax is due
partly to the flexibility of Lisp, which makes it easy to change
surface syntax, and partly to the observation that many "convenient"
syntactic constructs, which make the language less uniform, end up
causing more trouble than they are worth when programs become large
and complex. In the words of Alan Perlis, "Syntactic sugar causes
cancer of the semicolon."
[^12]: Observe that there are two different operations being combined
here: we are creating the procedure, and we are giving it the name
`square`. It is possible, indeed important, to be able to separate
these two notions---to create procedures without naming them, and to
give names to procedures that have already been created. We will see
how to do this in [Section 1.3.2](#Section 1.3.2).
[^13]: Throughout this book, we will describe the general syntax of
expressions by using italic symbols delimited by angle
brackets---e.g., $\langle$*name*$\kern0.08em\rangle$---to denote the
"slots" in the expression to be filled in when such an expression is
actually used.
[^14]: More generally, the body of the procedure can be a sequence of
expressions. In this case, the interpreter evaluates each expression
in the sequence in turn and returns the value of the final
expression as the value of the procedure application.
[^15]: Despite the simplicity of the substitution idea, it turns out to
be surprisingly complicated to give a rigorous mathematical
definition of the substitution process. The problem arises from the
possibility of confusion between the names used for the formal
parameters of a procedure and the (possibly identical) names used in
the expressions to which the procedure may be applied. Indeed, there
is a long history of erroneous definitions of *substitution* in the
literature of logic and programming semantics. See [Stoy
1977](#Stoy 1977) for a careful discussion of substitution.
[^16]: In [Chapter 3](#Chapter 3) we will introduce *stream processing*,
which is a way of handling apparently "infinite" data structures by
incorporating a limited form of normal-order evaluation. In [Section
4.2](#Section 4.2) we will modify the Scheme interpreter to produce
a normal-order variant of Scheme.
[^17]: "Interpreted as either true or false" means this: In Scheme,
there are two distinguished values that are denoted by the constants
`#t` and `#f`. When the interpreter checks a predicate's value, it
interprets `#f` as false. Any other value is treated as true. (Thus,
providing `#t` is logically unnecessary, but it is convenient.) In
this book we will use names `true` and `false`, which are associated
with the values `#t` and `#f` respectively.
[^18]: `abs` also uses the "minus" operator `-`, which, when used with a
single operand, as in `(- x)`, indicates negation.
[^19]: A minor difference between `if` and `cond` is that the
$\langle{e}\rangle$ part of each `cond` clause may be a sequence of
expressions. If the corresponding $\langle{p}\rangle$ is found to be
true, the expressions $\langle{e}\rangle$ are evaluated in sequence
and the value of the final expression in the sequence is returned as
the value of the `cond`. In an `if` expression, however, the
$\langle$*consequent*$\kern0.04em\rangle$ and
$\langle$*alternative*$\kern0.04em\rangle$ must be single
expressions.
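A minimal illustration (the procedure name `report-abs` and the use of `display` here are for this sketch only, not from the text): the first clause's consequent is a sequence of two expressions, and the value of the last one becomes the value of the `cond`.
::: smallscheme
(define (report-abs x)
  (cond ((< x 0)
         (display "negating")
         (- x))
        (else x)))
:::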
[^20]: Declarative and imperative descriptions are intimately related,
as indeed are mathematics and computer science. For instance, to say
that the answer produced by a program is "correct" is to make a
declarative statement about the program. There is a large amount of
research aimed at establishing techniques for proving that programs
are correct, and much of the technical difficulty of this subject
has to do with negotiating the transition between imperative
statements (from which programs are constructed) and declarative
statements (which can be used to deduce things). In a related vein,
an important current area in programming-language design is the
exploration of so-called very high-level languages, in which one
actually programs in terms of declarative statements. The idea is to
make interpreters sophisticated enough so that, given "what is"
knowledge specified by the programmer, they can generate "how to"
knowledge automatically. This cannot be done in general, but there
are important areas where progress has been made. We shall revisit
this idea in [Chapter 4](#Chapter 4).
[^21]: This square-root algorithm is actually a special case of Newton's
method, which is a general technique for finding roots of equations.
The square-root algorithm itself was developed by Heron of
Alexandria in the first century a.d. We will see how
to express the general Newton's method as a Lisp procedure in
[Section 1.3.4](#Section 1.3.4).
[^22]: We will usually give predicates names ending with question marks,
to help us remember that they are predicates. This is just a
stylistic convention. As far as the interpreter is concerned, the
question mark is just an ordinary character.
[^23]: Observe that we express our initial guess as 1.0 rather than 1.
This would not make any difference in many Lisp implementations.
mit Scheme, however, distinguishes between exact
integers and decimal values, and dividing two integers produces a
rational number rather than a decimal. For example, dividing 10 by 6
yields 5/3, while dividing 10.0 by 6.0 yields 1.6666666666666667.
(We will learn how to implement arithmetic on rational numbers in
[Section 2.1.1](#Section 2.1.1).) If we start with an initial guess
of 1 in our square-root program, and $x$ is an exact integer, all
subsequent values produced in the square-root computation will be
rational numbers rather than decimals. Mixed operations on rational
numbers and decimals always yield decimals, so starting with an
initial guess of 1.0 forces all subsequent values to be decimals.
[^24]: Readers who are worried about the efficiency issues involved in
using procedure calls to implement iteration should note the remarks
on "tail recursion" in [Section 1.2.1](#Section 1.2.1).
[^25]: It is not even clear which of these procedures is a more
efficient implementation. This depends upon the hardware available.
There are machines for which the "obvious" implementation is the
less efficient one. Consider a machine that has extensive tables of
logarithms and antilogarithms stored in a very efficient manner.
[^26]: The concept of consistent renaming is actually subtle and
difficult to define formally. Famous logicians have made
embarrassing errors here.
[^27]: Lexical scoping dictates that free variables in a procedure are
taken to refer to bindings made by enclosing procedure definitions;
that is, they are looked up in the environment in which the
procedure was defined. We will see how this works in detail in
chapter 3 when we study environments and the detailed behavior of
the interpreter.[]{#Footnote 28 label="Footnote 28"}
[^28]: Embedded definitions must come first in a procedure body. The
management is not responsible for the consequences of running
programs that intertwine definition and use.
[^29]: In a real program we would probably use the block structure
introduced in the last section to hide the definition of
`fact-iter`:
::: smallscheme
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))
:::
We avoided doing this here so as to minimize the number of things to
think about at once.
[^30]: When we discuss the implementation of procedures on register
machines in [Chapter 5](#Chapter 5), we will see that any iterative
process can be realized "in hardware" as a machine that has a fixed
set of registers and no auxiliary memory. In contrast, realizing a
recursive process requires a machine that uses an auxiliary data
structure known as a *stack*.
[^31]: Tail recursion has long been known as a compiler optimization
trick. A coherent semantic basis for tail recursion was provided by
Carl [Hewitt (1977)](#Hewitt (1977)), who explained it in terms of
the "message-passing" model of computation that we shall discuss in
[Chapter 3](#Chapter 3). Inspired by this, Gerald Jay Sussman and
Guy Lewis Steele Jr. (see [Steele and Sussman
1975](#Steele and Sussman 1975)) constructed a tail-recursive
interpreter for Scheme. Steele later showed how tail recursion is a
consequence of the natural way to compile procedure calls ([Steele
1977](#Steele 1977)). The ieee standard for Scheme
requires that Scheme implementations be tail-recursive.
[^32]: An example of this was hinted at in [Section
1.1.3](#Section 1.1.3). The interpreter itself evaluates expressions
using a tree-recursive process.
[^33]: For example, work through in detail how the reduction rule
applies to the problem of making change for 10 cents using pennies
and nickels.
[^34]: One approach to coping with redundant computations is to arrange
matters so that we automatically construct a table of values as they
are computed. Each time we are asked to apply the procedure to some
argument, we first look to see if the value is already stored in the
table, in which case we avoid performing the redundant computation.
This strategy, known as *tabulation* or *memoization*, can be
implemented in a straightforward way. Tabulation can sometimes be
used to transform processes that require an exponential number of
steps (such as `count-change`) into processes whose space and time
requirements grow linearly with the input. See [Exercise
3.27](#Exercise 3.27).
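As a rough sketch of the idea (the names `memoize` and `memo-fib` and the association-list table are illustrative only, and the sketch uses assignment, which is not introduced until Chapter 3; the book's own table-based version is the subject of [Exercise 3.27](#Exercise 3.27)):
::: smallscheme
;; wrap a one-argument procedure with a lookup table of previously
;; computed results
(define (memoize f)
  (let ((table '()))
    (lambda (x)
      (let ((cached (assv x table)))
        (if cached
            (cdr cached)
            (let ((result (f x)))
              (set! table (cons (cons x result) table))
              result))))))

(define memo-fib
  (memoize
   (lambda (n)
     (if (< n 2)
         n
         (+ (memo-fib (- n 1)) (memo-fib (- n 2)))))))
:::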
[^35]: The elements of Pascal's triangle are called the *binomial
coefficients*, because the $n^{\mathrm{th}}$ row consists of the
coefficients of the terms in the expansion of $(x + y)^n$. This
pattern for computing the coefficients appeared in Blaise Pascal's
1653 seminal work on probability theory, *Traité du triangle
arithmétique*. According to [Knuth (1973)](#Knuth (1973)), the same
pattern appears in the *Szu-yuen Yü-chien* ("The Precious Mirror of
the Four Elements"), published by the Chinese mathematician Chu
Shih-chieh in 1303, in the works of the twelfth-century Persian poet
and mathematician Omar Khayyam, and in the works of the
twelfth-century Hindu mathematician Bháscara Áchárya.
[^36]: These statements mask a great deal of oversimplification. For
instance, if we count process steps as "machine operations" we are
making the assumption that the number of machine operations needed
to perform, say, a multiplication is independent of the size of the
numbers to be multiplied, which is false if the numbers are
sufficiently large. Similar remarks hold for the estimates of space.
Like the design and description of a process, the analysis of a
process can be carried out at various levels of abstraction.
[^37]: More precisely, the number of multiplications required is equal
to 1 less than the log base 2 of $n$ plus the number of ones in the
binary representation of $n$. This total is always less than twice
the log base 2 of $n$. The arbitrary constants $k_1$ and $k_2$ in
the definition of order notation imply that, for a logarithmic
process, the base to which logarithms are taken does not matter, so
all such processes are described as $\Theta(\log n)$.
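As a worked check (this particular instance is not in the text): $1000 = 1111101000_2$, so $\lfloor\log_2 1000\rfloor = 9$ and the binary representation contains 6 ones, giving $9 + 6 - 1 = 14$ multiplications to compute $b^{1000}$ by successive squaring, comfortably below $2\log_2 1000 \approx 19.9$.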
[^38]: You may wonder why anyone would care about raising numbers to the
1000th power. See [Section 1.2.6](#Section 1.2.6).
[^39]: This iterative algorithm is ancient. It appears in the
*Chandah-sutra* by Áchárya Pingala, written before 200
b.c. See [Knuth 1981](#Knuth 1981), section 4.6.3, for
a full discussion and analysis of this and other methods of
exponentiation.
[^40]: This algorithm, which is sometimes known as the "Russian peasant
method" of multiplication, is ancient. Examples of its use are found
in the Rhind Papyrus, one of the two oldest mathematical documents
in existence, written about 1700 b.c. (and copied from
an even older document) by an Egyptian scribe named A$\!$'h-mose.
[^41]: This exercise was suggested to us by Joe Stoy, based on an
example in [Kaldewaij 1990](#Kaldewaij 1990).
[^42]: Euclid's Algorithm is so called because it appears in Euclid's
*Elements* (Book 7, ca. 300 b.c.). According to [Knuth
(1973)](#Knuth (1973)), it can be considered the oldest known
nontrivial algorithm. The ancient Egyptian method of multiplication
([Exercise 1.18](#Exercise 1.18)) is surely older, but, as Knuth
explains, Euclid's algorithm is the oldest known to have been
presented as a general algorithm, rather than as a set of
illustrative examples.
[^43]: This theorem was proved in 1845 by Gabriel Lamé, a French
mathematician and engineer known chiefly for his contributions to
mathematical physics. To prove the theorem, we consider pairs
($a_k, b_k$), where $a_k \ge
b_k$, for which Euclid's Algorithm terminates in $k$ steps. The
proof is based on the claim that, if $(a_{k+1}, b_{k+1}) \to
(a_k, b_k) \to (a_{k-1}, b_{k-1})$ are three successive pairs in the
reduction process, then we must have $b_{k+1} \ge
b_k + b_{k-1}$. To verify the claim, consider that a reduction step
is defined by applying the transformation $a_{k-1} = b_k,
b_{k-1} =$ remainder of $a_k$ divided by $b_k$. The second equation
means that $a_k = qb_k + b_{k-1}$ for some positive integer $q$. And
since $q$ must be at least 1 we have $a_k
= qb_k + b_{k-1} \ge b_k + b_{k-1}$. But in the previous reduction
step we have $b_{k+1} = a_k$. Therefore,
$b_{k+1} = a_k \ge b_k + b_{k-1}$. This verifies the claim. Now we
can prove the theorem by induction on $k$, the number of steps that
the algorithm requires to terminate. The result is true for $k =
1$, since this merely requires that $b$ be at least as large as
Fib(1) = 1. Now, assume that the result is true for all integers
less than or equal to $k$ and establish the result for $k + 1$. Let
$(a_{k+1},
b_{k+1}) \to (a_k, b_k) \to (a_{k-1}, b_{k-1})$ be successive pairs
in the reduction process. By our induction hypotheses, we have
$b_{k-1} \ge {\rm Fib}(k - 1)$ and $b_k \ge {\rm Fib}(k)$. Thus,
applying the claim we just proved together with the definition of
the Fibonacci numbers gives $b_{k+1} \ge
b_k + b_{k-1} \ge {\rm Fib}(k) + {\rm Fib}(k-1) =
{\rm Fib}(k+1)$, which completes the proof of Lamé's Theorem.
[^44]: If $d$ is a divisor of $n$, then so is $n / d$. But $d$ and
$n / d$ cannot both be greater than $\sqrt{n}$.
[^45]: Pierre de Fermat (1601-1665) is considered to be the founder of
modern number theory. He obtained many important number-theoretic
results, but he usually announced just the results, without
providing his proofs. Fermat's Little Theorem was stated in a letter
he wrote in 1640. The first published proof was given by Euler in
1736 (and an earlier, identical proof was discovered in the
unpublished manuscripts of Leibniz). The most famous of Fermat's
results---known as Fermat's Last Theorem---was jotted down in 1637
in his copy of the book *Arithmetic* (by the third-century Greek
mathematician Diophantus) with the remark "I have discovered a truly
remarkable proof, but this margin is too small to contain it."
Finding a proof of Fermat's Last Theorem became one of the most
famous challenges in number theory. A complete solution was finally
given in 1995 by Andrew Wiles of Princeton University.
[^46]: The reduction steps in the cases where the exponent $e$ is
greater than 1 are based on the fact that, for any integers $x$,
$y$, and $m$, we can find the remainder of $x$ times $y$ modulo $m$
by computing separately the remainders of $x$ modulo $m$ and $y$
modulo $m$, multiplying these, and then taking the remainder of the
result modulo $m$. For instance, in the case where $e$ is even, we
compute the remainder of $b^{e / 2}$ modulo $m$, square this, and
take the remainder modulo $m$. This technique is useful because it
means we can perform our computation without ever having to deal
with numbers much larger than $m$. (Compare [Exercise
1.25](#Exercise 1.25).)
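A sketch of this computation (essentially the `expmod` procedure of [Section 1.2.6](#Section 1.2.6); `square` is defined here only to keep the fragment self-contained):
::: smallscheme
(define (square x) (* x x))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
:::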
[^47]: []{#Footnote 1.47 label="Footnote 1.47"} Numbers that fool the
Fermat test are called *Carmichael numbers*, and little is known
about them other than that they are extremely rare. There are 255
Carmichael numbers below 100,000,000. The smallest few are 561,
1105, 1729, 2465, 2821, and 6601. In testing primality of very large
numbers chosen at random, the chance of stumbling upon a value that
fools the Fermat test is less than the chance that cosmic radiation
will cause the computer to make an error in carrying out a "correct"
algorithm. Considering an algorithm to be inadequate for the first
reason but not for the second illustrates the difference between
mathematics and engineering.
[^48]: One of the most striking applications of probabilistic prime
testing has been to the field of cryptography. Although it is now
computationally infeasible to factor an arbitrary 200-digit number,
the primality of such a number can be checked in a few seconds with
the Fermat test. This fact forms the basis of a technique for
constructing "unbreakable codes" suggested by [Rivest et al.
(1977)](#Rivest et al. (1977)). The resulting *RSA algorithm* has
become a widely used technique for enhancing the security of
electronic communications. Because of this and related developments,
the study of prime numbers, once considered the epitome of a topic
in "pure" mathematics to be studied only for its own sake, now turns
out to have important practical applications to cryptography,
electronic funds transfer, and information retrieval.
[^49]: This series, usually written in the equivalent form
${\pi\over4} = 1 - {1\over3} + {1\over5} - {1\over7} + \dots$, is
due to Leibniz. We'll see how to use this as the basis for some
fancy numerical tricks in [Section 3.5.3](#Section 3.5.3).
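A direct, slowly converging sketch of summing this series (the name `leibniz-pi` and the explicit iteration are illustrative only; the text's `pi-sum` procedure expresses the same sum with higher-order procedures):
::: smallscheme
(define (leibniz-pi n-terms)
  (define (iter k sum)
    (if (= k n-terms)
        (* 4 sum)
        (iter (+ k 1)
              (+ sum (/ (if (even? k) 1.0 -1.0)
                        (+ (* 2 k) 1))))))
  (iter 0 0))

(leibniz-pi 1000)   ; approximately 3.1406 (the error after n terms is roughly 1/n)
:::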
[^50]: Notice that we have used block structure ([Section
1.1.8](#Section 1.1.8)) to embed the definitions of `pi-next` and
`pi-term` within `pi-sum`, since these procedures are unlikely to be
useful for any other purpose. We will see how to get rid of them
altogether in [Section 1.3.2](#Section 1.3.2).
[^51]: The intent of [Exercise 1.31](#Exercise 1.31) through [Exercise
1.33](#Exercise 1.33) is to demonstrate the expressive power that is
attained by using an appropriate abstraction to consolidate many
seemingly disparate operations. However, though accumulation and
filtering are elegant ideas, our hands are somewhat tied in using
them at this point since we do not yet have data structures to
provide suitable means of combination for these abstractions. We
will return to these ideas in [Section 2.2.3](#Section 2.2.3) when
we show how to use *sequences* as interfaces for combining filters
and accumulators to build even more powerful abstractions. We will
see there how these methods really come into their own as a powerful
and elegant approach to designing programs.
[^52]: This formula was discovered by the seventeenth-century English
mathematician John Wallis.
[^53]: It would be clearer and less intimidating to people learning Lisp
if a name more obvious than `lambda`, such as `make-procedure`, were
used. But the convention is firmly entrenched. The notation is
adopted from the λ-calculus, a mathematical formalism introduced by
the mathematical logician Alonzo [Church (1941)](#Church (1941)).
Church developed the λ-calculus to provide a rigorous foundation for
studying the notions of function and function application. The
λ-calculus has become a basic tool for mathematical investigations
of the semantics of programming languages.
[^54]: Understanding internal definitions well enough to be sure a
program means what we intend it to mean requires a more elaborate
model of the evaluation process than we have presented in this
chapter. The subtleties do not arise with internal definitions of
procedures, however. We will return to this issue in [Section
4.1.6](#Section 4.1.6), after we learn more about evaluation.
[^55]: We have used 0.001 as a representative "small" number to indicate
a tolerance for the acceptable error in a calculation. The
appropriate tolerance for a real calculation depends upon the
problem to be solved and the limitations of the computer and the
algorithm. This is often a very subtle consideration, requiring help
from a numerical analyst or some other kind of magician.
[^56]: This can be accomplished using `error`, which takes as arguments
a number of items that are printed as error messages.
[^57]: Try this during a boring lecture: Set your calculator to radians
mode and then repeatedly press the `cos` button until you obtain the
fixed point.
[^58]: $\mapsto$ (pronounced "maps to") is the mathematician's way of
writing `lambda`. $y \mapsto x / y$ means `(lambda (y) (/ x y))`,
that is, the function whose value at $y$ is $x / y$.
[^59]: Observe that this is a combination whose operator is itself a
combination. [Exercise 1.4](#Exercise 1.4) already demonstrated the
ability to form such combinations, but that was only a toy example.
Here we begin to see the real need for such combinations---when
applying a procedure that is obtained as the value returned by a
higher-order procedure.
[^60]: See [Exercise 1.45](#Exercise 1.45) for a further generalization.
[^61]: Elementary calculus books usually describe Newton's method in
terms of the sequence of approximations $x_{n+1} = x_n -
g(x_n) / Dg(x_n)$. Having language for talking about processes and
using the idea of fixed points simplifies the description of the
method.
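For example (a standard specialization, not spelled out here): to find $\sqrt{a}$, take $g(x) = x^2 - a$, so $Dg(x) = 2x$ and the iteration becomes
$$x_{n+1} = x_n - \frac{x_n^2 - a}{2x_n} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right),$$
which is the same guess-averaging rule used in the square-root procedure of [Section 1.1.7](#Section 1.1.7).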
[^62]: Newton's method does not always converge to an answer, but it can
be shown that in favorable cases each iteration doubles the
number-of-digits accuracy of the approximation to the solution. In
such cases, Newton's method will converge much more rapidly than the
half-interval method.
[^63]: For finding square roots, Newton's method converges rapidly to
the correct solution from any starting point.
[^64]: The notion of first-class status of programming-language elements
is due to the British computer scientist Christopher Strachey
(1916-1975).
[^65]: We'll see examples of this after we introduce data structures in
[Chapter 2](#Chapter 2).
[^66]: The major implementation cost of first-class procedures is that
allowing procedures to be returned as values requires reserving
storage for a procedure's free variables even while the procedure is
not executing. In the Scheme implementation we will study in
[Section 4.1](#Section 4.1), these variables are stored in the
procedure's environment.
[^67]: The ability to directly manipulate procedures provides an
analogous increase in the expressive power of a programming
language. For example, in [Section 1.3.1](#Section 1.3.1) we
introduced the `sum` procedure, which takes a procedure `term` as an
argument and computes the sum of the values of `term` over some
specified interval. In order to define `sum`, it is crucial that we
be able to speak of a procedure such as `term` as an entity in its
own right, without regard for how `term` might be expressed with
more primitive operations. Indeed, if we did not have the notion of
"a procedure," it is doubtful that we would ever even think of the
possibility of defining an operation such as `sum`. Moreover,
insofar as performing the summation is concerned, the details of how
`term` may be constructed from more primitive operations are
irrelevant.
[^68]: The name `cons` stands for "construct." The names `car` and `cdr`
derive from the original implementation of Lisp on the [ibm
704]{.smallcaps}. That machine had an addressing scheme that allowed
one to reference the "address" and "decrement" parts of a memory
location. `car` stands for "Contents of Address part of Register"
and `cdr` (pronounced "could-er") stands for "Contents of Decrement
part of Register."
[^69]: Another way to define the selectors and constructor is
::: smallscheme
(define make-rat cons)
(define numer car)
(define denom cdr)
:::
The first definition associates the name `make-rat` with the value
of the expression `cons`, which is the primitive procedure that
constructs pairs. Thus `make-rat` and `cons` are names for the same
primitive constructor.
Defining selectors and constructors in this way is efficient:
Instead of `make-rat` *calling* `cons`, `make-rat` *is* `cons`, so
there is only one procedure called, not two, when `make-rat` is
called. On the other hand, doing this defeats debugging aids that
trace procedure calls or put breakpoints on procedure calls: You may
want to watch `make-rat` being called, but you certainly don't want
to watch every call to `cons`.
We have chosen not to use this style of definition in this book.
[^70]: `display` is the Scheme primitive for printing data. The Scheme
primitive `newline` starts a new line for printing. Neither of these
procedures returns a useful value, so in the uses of `print-rat`
below, we show only what `print-rat` prints, not what the
interpreter prints as the value returned by `print-rat`.
[^71]: Surprisingly, this idea is very difficult to formulate
rigorously. There are two approaches to giving such a formulation.
One, pioneered by C. A. R. [Hoare (1972)](#Hoare (1972)), is known
as the method of *abstract models*. It formalizes the "procedures
plus conditions" specification as outlined in the rational-number
example above. Note that the condition on the rational-number
representation was stated in terms of facts about integers (equality
and division). In general, abstract models define new kinds of data
objects in terms of previously defined types of data objects.
Assertions about data objects can therefore be checked by reducing
them to assertions about previously defined data objects. Another
approach, introduced by Zilles at mit, by Goguen,
Thatcher, Wagner, and Wright at ibm (see [Thatcher et
al. 1978](#Thatcher et al. 1978)), and by Guttag at Toronto (see
[Guttag 1977](#Guttag 1977)), is called *algebraic specification*.
It regards the "procedures" as elements of an abstract algebraic
system whose behavior is specified by axioms that correspond to our
"conditions," and uses the techniques of abstract algebra to check
assertions about data objects. Both methods are surveyed in the
paper by [Liskov and Zilles (1975)](#Liskov and Zilles (1975)).
[^72]: The use of the word "closure" here comes from abstract algebra,
where a set of elements is said to be closed under an operation if
applying the operation to elements in the set produces an element
that is again an element of the set. The Lisp community also
(unfortunately) uses the word "closure" to describe a totally
unrelated concept: A closure is an implementation technique for
representing procedures with free variables. We do not use the word
"closure" in this second sense in this book.
[^73]: The notion that a means of combination should satisfy closure is
a straightforward idea. Unfortunately, the data combiners provided
in many popular programming languages do not satisfy closure, or
make closure cumbersome to exploit. In Fortran or Basic, one
typically combines data elements by assembling them into
arrays---but one cannot form arrays whose elements are themselves
arrays. Pascal and C admit structures whose elements are structures.
However, this requires that the programmer manipulate pointers
explicitly, and adhere to the restriction that each field of a
structure can contain only elements of a prespecified form. Unlike
Lisp with its pairs, these languages have no built-in
general-purpose glue that makes it easy to manipulate compound data
in a uniform way. This limitation lies behind Alan Perlis's comment
in his foreword to this book: "In Pascal the plethora of declarable
data structures induces a specialization within functions that
inhibits and penalizes casual cooperation. It is better to have 100
functions operate on one data structure than to have 10 functions
operate on 10 data structures."
[^74]: In this book, we use *list* to mean a chain of pairs terminated
by the end-of-list marker. In contrast, the term *list structure*
refers to any data structure made out of pairs, not just to lists.
[^75]: Since nested applications of `car` and `cdr` are cumbersome to
write, Lisp dialects provide abbreviations for them---for instance,
::: smallscheme
(cadr
$\color{SchemeDark}\langle$ *arg* $\color{SchemeDark}\rangle$ ) =
(car (cdr
$\color{SchemeDark}\langle$ *arg* $\color{SchemeDark}\rangle$ ))
:::
The names of all such procedures start with `c` and end with `r`.
Each `a` between them stands for a `car` operation and each `d` for
a `cdr` operation, to be applied in the same order in which they
appear in the name. The names `car` and `cdr` persist because simple
combinations like `cadr` are pronounceable.
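A few concrete uses (the argument lists here are made up for illustration; values shown in comments):
::: smallscheme
(cadr (list 1 2 3))    ; = (car (cdr ...))        => 2
(caddr (list 1 2 3))   ; = (car (cdr (cdr ...)))  => 3
(cddr (list 1 2 3))    ; = (cdr (cdr ...))        => (3)
:::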
[^76]: It's remarkable how much energy in the standardization of Lisp
dialects has been dissipated in arguments that are literally over
nothing: Should `nil` be an ordinary name? Should the value of `nil`
be a symbol? Should it be a list? Should it be a pair? In Scheme,
`nil` is an ordinary name, which we use in this section as a
variable whose value is the end-of-list marker (just as `true` is an
ordinary variable that has a true value). Other dialects of Lisp,
including Common Lisp, treat `nil` as a special symbol. The authors
of this book, who have endured too many language standardization
brawls, would like to avoid the entire issue. Once we have
introduced quotation in [Section 2.3](#Section 2.3), we will denote
the empty list as `’()` and dispense with the variable `nil`
entirely.
[^77]: To define `f` and `g` using `lambda` we would write
::: smallscheme
(define f (lambda (x y . z)
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ ))
(define g (lambda w
$\color{SchemeDark}\langle$ *body* $\color{SchemeDark}\rangle$ ))
:::
[^78]: []{#Footnote 12 label="Footnote 12"} Scheme standardly provides a
`map` procedure that is more general than the one described here.
This more general `map` takes a procedure of $n$ arguments, together
with $n$ lists, and applies the procedure to all the first elements
of the lists, all the second elements of the lists, and so on,
returning a list of the results. For example:
::: smallscheme
(map + (list 1 2 3) (list 40 50 60) (list 700 800 900))
*(741 852 963)*
(map (lambda (x y) (+ x (* 2 y))) (list 1 2 3) (list 4 5 6))
*(9 12 15)*
:::
[^79]: The order of the first two clauses in the `cond` matters, since
the empty list satisfies `null?` and also is not a pair.
[^80]: This is, in fact, precisely the `fringe` procedure from [Exercise
2.28](#Exercise 2.28). Here we've renamed it to emphasize that it is
part of a family of general sequence-manipulation procedures.
[^81]: Richard [Waters (1979)](#Waters (1979)) developed a program that
automatically analyzes traditional Fortran programs, viewing them in
terms of maps, filters, and accumulations. He found that fully 90
percent of the code in the Fortran Scientific Subroutine Package
fits neatly into this paradigm. One of the reasons for the success
of Lisp as a programming language is that lists provide a standard
medium for expressing ordered collections so that they can be
manipulated using higher-order operations. The programming language
APL owes much of its power and appeal to a similar choice. In APL
all data are represented as arrays, and there is a universal and
convenient set of generic operators for all sorts of array
operations.
[^82]: According to [Knuth 1981](#Knuth 1981), this rule was formulated
by W. G. Horner early in the nineteenth century, but the method was
actually used by Newton over a hundred years earlier. Horner's rule
evaluates the polynomial using fewer additions and multiplications
than does the straightforward method of first computing $a_n x^n$,
then adding $a_{n-1}x^{n-1}$, and so on. In fact, it is possible to
prove that any algorithm for evaluating arbitrary polynomials must
use at least as many additions and multiplications as does Horner's
rule, and thus Horner's rule is an optimal algorithm for polynomial
evaluation. This was proved (for the number of additions) by A. M.
Ostrowski in a 1954 paper that essentially founded the modern study
of optimal algorithms. The analogous statement for multiplications
was proved by V. Y. Pan in 1966. The book by [Borodin and Munro
(1975)](#Borodin and Munro (1975)) provides an overview of these and
other results about optimal algorithms.
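The rewriting that underlies the rule (a standard identity, stated here for concreteness) is
$$a_n x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 = (\dots((a_n x + a_{n-1})\,x + a_{n-2})\,x + \dots + a_1)\,x + a_0,$$
which evaluates a polynomial of degree $n$ with just $n$ multiplications and $n$ additions.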
[^83]: This definition uses the extended version of `map` described in
[Footnote 12](#Footnote 12).
[^84]: This approach to nested mappings was shown to us by David Turner,
whose languages KRC and Miranda provide elegant formalisms for
dealing with these constructs. The examples in this section (see
also [Exercise 2.42](#Exercise 2.42)) are adapted from [Turner
1981](#Turner 1981). In [Section 3.5.3](#Section 3.5.3), we'll see
how this approach generalizes to infinite sequences.
[^85]: We're representing a pair here as a list of two elements rather
than as a Lisp pair. Thus, the "pair" $(i, j)$ is represented as
`(list i j)`, not `(cons i j)`.
[^86]: The set $S - x$ is the set of all elements of $S$, excluding $x$.
[^87]: Semicolons in Scheme code are used to introduce *comments*.
Everything from the semicolon to the end of the line is ignored by
the interpreter. In this book we don't use many comments; we try to
make our programs self-documenting by using descriptive names.
[^88]: The picture language is based on the language Peter Henderson
created to construct images like M.C. Escher's "Square Limit"
woodcut (see [Henderson 1982](#Henderson 1982)). The woodcut
incorporates a repeated scaled pattern, similar to the arrangements
drawn using the `square-limit` procedure in this section.
[^89]: William Barton Rogers (1804-1882) was the founder and first
president of mit. A geologist and talented teacher, he
taught at William and Mary College and at the University of
Virginia. In 1859 he moved to Boston, where he had more time for
research, worked on a plan for establishing a "polytechnic
institute," and served as Massachusetts's first State Inspector of
Gas Meters.
When mit was established in 1861, Rogers was elected
its first president. Rogers espoused an ideal of "useful learning"
that was different from the university education of the time, with
its overemphasis on the classics, which, as he wrote, "stand in the
way of the broader, higher and more practical instruction and
discipline of the natural and social sciences." This education was
likewise to be different from narrow trade-school education. In
Rogers's words:
> The world-enforced distinction between the practical and the
> scientific worker is utterly futile, and the whole experience of
> modern times has demonstrated its utter worthlessness.
Rogers served as president of mit until 1870, when he
resigned due to ill health. In 1878 the second president of
mit, John Runkle, resigned under the pressure of a
financial crisis brought on by the Panic of 1873 and strain of
fighting off attempts by Harvard to take over mit.
Rogers returned to hold the office of president until 1881.
Rogers collapsed and died while addressing mit's
graduating class at the commencement exercises of 1882. Runkle
quoted Rogers's last words in a memorial address delivered that same
year:
> "As I stand here today and see what the Institute is, $\dots$ I
> call to mind the beginnings of science. I remember one hundred and
> fifty years ago Stephen Hales published a pamphlet on the subject
> of illuminating gas, in which he stated that his researches had
> demonstrated that 128 grains of bituminous coal -- " "Bituminous
> coal," these were his last words on earth. Here he bent forward,
> as if consulting some notes on the table before him, then slowly
> regaining an erect position, threw up his hands, and was
> translated from the scene of his earthly labors and triumphs to
> "the tomorrow of death," where the mysteries of life are solved,
> and the disembodied spirit finds unending satisfaction in
> contemplating the new and still unfathomable mysteries of the
> infinite future.
In the words of Francis A. Walker (mit's third
president):
> All his life he had borne himself most faithfully and heroically,
> and he died as so good a knight would surely have wished, in
> harness, at his post, and in the very part and act of public duty.
[^90]: Equivalently, we could write
::: smallscheme
(define flipped-pairs
  (square-of-four identity flip-vert identity flip-vert))
:::
[^91]: `rotate180` rotates a painter by 180 degrees (see [Exercise
2.50](#Exercise 2.50)). Instead of `rotate180` we could say
`(compose flip-vert flip-horiz)`, using the `compose` procedure from
[Exercise 1.42](#Exercise 1.42).
[^92]: `frame-coord-map` uses the vector operations described in
[Exercise 2.46](#Exercise 2.46) below, which we assume have been
implemented using some representation for vectors. Because of data
abstraction, it doesn't matter what this vector representation is,
so long as the vector operations behave correctly.
[^93]: `segments->painter` uses the representation for line segments
described in [Exercise 2.48](#Exercise 2.48) below. It also uses the
`for-each` procedure described in [Exercise 2.23](#Exercise 2.23).
[^94]: For example, the `rogers` painter of [Figure 2.11](#Figure 2.11)
was constructed from a gray-level image. For each point in a given
frame, the `rogers` painter determines the point in the image that
is mapped to it under the frame coordinate map, and shades it
accordingly. By allowing different types of painters, we are
capitalizing on the abstract data idea discussed in [Section
2.1.3](#Section 2.1.3), where we argued that a rational-number
representation could be anything at all that satisfies an
appropriate condition. Here we're using the fact that a painter can
be implemented in any way at all, so long as it draws something in
the designated frame. [Section 2.1.3](#Section 2.1.3) also showed
how pairs could be implemented as procedures. Painters are our
second example of a procedural representation for data.
[^95]: `rotate90` is a pure rotation only for square frames, because it
also stretches and shrinks the image to fit into the rotated frame.
[^96]: The diamond-shaped images in [Figure 2.10](#Figure 2.10) and
[Figure 2.11](#Figure 2.11) were created with `squash-inwards`
applied to `wave` and `rogers`.
[^97]: [Section 3.3.4](#Section 3.3.4) describes one such language.
[^98]: Allowing quotation in a language wreaks havoc with the ability to
reason about the language in simple terms, because it destroys the
notion that equals can be substituted for equals. For example, three
is one plus two, but the word "three" is not the phrase "one plus
two." Quotation is powerful because it gives us a way to build
expressions that manipulate other expressions (as we will see when
we write an interpreter in [Chapter 4](#Chapter 4)). But allowing
statements in a language that talk about other statements in that
language makes it very difficult to maintain any coherent principle
of what "equals can be substituted for equals" should mean. For
example, if we know that the evening star is the morning star, then
from the statement "the evening star is Venus" we can deduce "the
morning star is Venus." However, given that "John knows that the
evening star is Venus" we cannot infer that "John knows that the
morning star is Venus."
[^99]: The single quote is different from the double quote we have been
using to enclose character strings to be printed. Whereas the single
quote can be used to denote lists or symbols, the double quote is
used only with character strings. In this book, the only use for
character strings is as items to be printed.
[^100]: Strictly, our use of the quotation mark violates the general
rule that all compound expressions in our language should be
delimited by parentheses and look like lists. We can recover this
consistency by introducing a special form `quote`, which serves the
same purpose as the quotation mark. Thus, we would type `(quote a)`
instead of `’a`, and we would type `(quote (a b c))` instead of
`’(a b c)`. This is precisely how the interpreter works. The
quotation mark is just a single-character abbreviation for wrapping
the next complete expression with `quote` to form
$\hbox{\ttfamily(quote}\;\langle\kern0.06em\hbox{\ttfamily\slshape expression}\kern0.08em\rangle\hbox{\ttfamily)}$.
This is important because it maintains the principle that any
expression seen by the interpreter can be manipulated as a data
object. For instance, we could construct the expression
`(car ’(a b c))`, which is the same as `(car (quote (a b c)))`, by
evaluating `(list ’car (list ’quote ’(a b c)))`.
[^101]: We can consider two symbols to be "the same" if they consist of
the same characters in the same order. Such a definition skirts a
deep issue that we are not yet ready to address: the meaning of
"sameness" in a programming language. We will return to this in
[Chapter 3](#Chapter 3) ([Section 3.1.3](#Section 3.1.3)).
[^102]: In practice, programmers use `equal?` to compare lists that
contain numbers as well as symbols. Numbers are not considered to be
symbols. The question of whether two numerically equal numbers (as
tested by `=`) are also `eq?` is highly implementation-dependent. A
better definition of `equal?` (such as the one that comes as a
primitive in Scheme) would also stipulate that if `a` and `b` are
both numbers, then `a` and `b` are `equal?` if they are numerically
equal.
[^103]: If we want to be more formal, we can specify "consistent with
the interpretations given above" to mean that the operations satisfy
a collection of rules such as these:
$\bullet$ For any set `S` and any object `x`,
`(element-of-set? x (adjoin-set x S))` is true (informally:
"Adjoining an object to a set produces a set that contains the
object").
$\bullet$ For any sets `S` and `T` and any object `x`,
`(element-of-set? x (union-set S T))` is equal to
`(or (element-of-set? x S) (element-of-set? x T))` (informally: "The
elements of `(union S T)` are the elements that are in `S` or in
`T`").
$\bullet$ For any object `x`, `(element-of-set? x ’())` is false
(informally: "No object is an element of the empty set").
[^104]: Halving the size of the problem at each step is the
distinguishing characteristic of logarithmic growth, as we saw with
the fast-exponentiation algorithm of [Section 1.2.4](#Section 1.2.4)
and the half-interval search method of [Section
1.3.3](#Section 1.3.3).
[^105]: We are representing sets in terms of trees, and trees in terms
of lists---in effect, a data abstraction built upon a data
abstraction. We can regard the procedures `entry`, `left-branch`,
`right-branch`, and `make-tree` as a way of isolating the
abstraction of a "binary tree" from the particular way we might wish
to represent such a tree in terms of list structure.
[^106]: Examples of such structures include *B-trees* and *red-black
trees*. There is a large literature on data structures devoted to
this problem. See [Cormen et al. 1990](#Cormen et al. 1990).
[^107]: [Exercise 2.63](#Exercise 2.63) through [Exercise
2.65](#Exercise 2.65) are due to Paul Hilfinger.
[^108]: See [Hamming 1980](#Hamming 1980) for a discussion of the
mathematical properties of Huffman codes.
[^109]: In actual computational systems, rectangular form is preferable
to polar form most of the time because of roundoff errors in
conversion between rectangular and polar form. This is why the
complex-number example is unrealistic. Nevertheless, it provides a
clear illustration of the design of a system using generic
operations and a good introduction to the more substantial systems
to be developed later in this chapter.
[^110]: The arctangent function referred to here, computed by Scheme's
`atan` procedure, is defined so as to take two arguments $y$ and $x$
and to return the angle whose tangent is $y / x$. The signs of the
arguments determine the quadrant of the angle.
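A few illustrative calls (printed values are approximate and may vary slightly between implementations):
::: smallscheme
(atan 1 1)     ;  .7853981633974483   ( pi/4, first quadrant)
(atan 1 -1)    ;  2.356194490192345   ( 3pi/4, second quadrant)
(atan -1 -1)   ; -2.356194490192345   (-3pi/4, third quadrant)
:::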
[^111]: We use the list `(rectangular)` rather than the symbol
`rectangular` to allow for the possibility of operations with
multiple arguments, not all of the same type.
[^112]: The type the constructors are installed under needn't be a list
because a constructor is always used to make an object of one
particular type.
[^113]: `apply-generic` uses the dotted-tail notation described in
[Exercise 2.20](#Exercise 2.20), because different generic
operations may take different numbers of arguments. In
`apply-generic`, `op` has as its value the first argument to
`apply-generic` and `args` has as its value a list of the remaining
arguments.
`apply-generic` also uses the primitive procedure `apply`, which
takes two arguments, a procedure and a list. `apply` applies the
procedure, using the elements in the list as arguments. For example,
::: smallscheme
(apply + (list 1 2 3 4))
:::
returns 10.
[^114]: One limitation of this organization is that it permits only generic
procedures of one argument.
[^115]: We also have to supply an almost identical procedure to handle
the types `(scheme-number complex)`.
[^116]: See [Exercise 2.82](#Exercise 2.82) for generalizations.
[^117]: If we are clever, we can usually get by with fewer than $n^2$
coercion procedures. For instance, if we know how to convert from
type 1 to type 2 and from type 2 to type 3, then we can use this
knowledge to convert from type 1 to type 3. This can greatly
decrease the number of coercion procedures we need to supply
explicitly when we add a new type to the system. If we are willing
to build the required amount of sophistication into our system, we
can have it search the "graph" of relations among types and
automatically generate those coercion procedures that can be
inferred from the ones that are supplied explicitly.
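As a rough sketch of how such an inferred coercion might be generated (assuming the `get/coercion` and `put/coercion` interface used in this section), one could compose two coercions that are already in the table:
::: smallscheme
;; Sketch only: derive a type1->type3 coercion from the
;; type1->type2 and type2->type3 coercions, if both exist.
(define (infer-coercion type1 type2 type3)
  (let ((t1->t2 (get-coercion type1 type2))
        (t2->t3 (get-coercion type2 type3)))
    (if (and t1->t2 t2->t3)
        (put-coercion type1 type3
                      (lambda (x) (t2->t3 (t1->t2 x))))
        false)))
:::
A full solution would search the graph of type relations rather than composing a single fixed pair of coercions.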
[^118]: This statement, which also appears in the first edition of this
book, is just as true now as it was when we wrote it twelve years
ago. Developing a useful, general framework for expressing the
relations among different types of entities (what philosophers call
"ontology") seems intractably difficult. The main difference between
the confusion that existed ten years ago and the confusion that
exists now is that now a variety of inadequate ontological theories
have been embodied in a plethora of correspondingly inadequate
programming languages. For example, much of the complexity of
object-oriented programming languages---and the subtle and confusing
differences among contemporary object-oriented languages---centers
on the treatment of generic operations on interrelated types. Our
own discussion of computational objects in [Chapter 3](#Chapter 3)
avoids these issues entirely. Readers familiar with object-oriented
programming will notice that we have much to say in chapter 3 about
local state, but we do not even mention "classes" or "inheritance."
In fact, we suspect that these problems cannot be adequately
addressed in terms of computer-language design alone, without also
drawing on work in knowledge representation and automated reasoning.
[^119]: A real number can be projected to an integer using the `round`
primitive, which returns the closest integer to its argument.
[^120]: On the other hand, we will allow polynomials whose coefficients
are themselves polynomials in other variables. This will give us
essentially the same representational power as a full multivariate
system, although it does lead to coercion problems, as discussed
below.
[^121]: For univariate polynomials, giving the value of a polynomial at
a given set of points can be a particularly good representation.
This makes polynomial arithmetic extremely simple. To obtain, for
example, the sum of two polynomials represented in this way, we need
only add the values of the polynomials at corresponding points. To
transform back to a more familiar representation, we can use the
Lagrange interpolation formula, which shows how to recover the
coefficients of a polynomial of degree $n$ given the values of the
polynomial at $n + 1$ points.
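For reference, if a polynomial $P$ of degree $n$ has value $y_k$ at the distinct points $x_k$ for $k = 0, 1, \ldots, n$, the Lagrange interpolation formula recovers it as
$$P(x) = \sum_{k=0}^{n} y_k \prod_{j \ne k} \frac{x - x_j}{x_k - x_j},$$
where the product runs over $j = 0, 1, \ldots, n$ with $j \ne k$.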
[^122]: This operation is very much like the ordered `union/set`
operation we developed in [Exercise 2.62](#Exercise 2.62). In fact,
if we think of the terms of the polynomial as a set ordered
according to the power of the indeterminate, then the program that
produces the term list for a sum is almost identical to `union/set`.
[^123]: To make this work completely smoothly, we should also add to our
generic arithmetic system the ability to coerce a "number" to a
polynomial by regarding it as a polynomial of degree zero whose
coefficient is the number. This is necessary if we are going to
perform operations such as
$$[x^2 + (y + 1)x + 5] + [x^2 + 2x + 1],$$
which requires adding the coefficient $y + 1$ to the coefficient 2.
[^124]: In these polynomial examples, we assume that we have implemented
the generic arithmetic system using the type mechanism suggested in
[Exercise 2.78](#Exercise 2.78). Thus, coefficients that are
ordinary numbers will be represented as the numbers themselves
rather than as pairs whose `car` is the symbol `scheme/number`.
[^125]: Although we are assuming that term lists are ordered, we have
implemented `adjoin/term` to simply `cons` the new term onto the
existing term list. We can get away with this so long as we
guarantee that the procedures (such as `add/terms`) that use
`adjoin/term` always call it with a higher-order term than appears
in the list. If we did not want to make such a guarantee, we could
have implemented `adjoin/term` to be similar to the `adjoin/set`
constructor for the ordered-list representation of sets ([Exercise
2.61](#Exercise 2.61)).
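A minimal sketch of the `adjoin/term` being described (assuming the generic `=zero?` predicate and the `coeff` selector from this section) is simply:
::: smallscheme
(define (adjoin-term term term-list)
  (if (=zero? (coeff term))
      term-list
      (cons term term-list)))
:::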
[^126]: The fact that Euclid's Algorithm works for polynomials is
formalized in algebra by saying that polynomials form a kind of
algebraic domain called a *Euclidean ring*. A Euclidean ring is a
domain that admits addition, subtraction, and commutative
multiplication, together with a way of assigning to each element $x$
of the ring a positive integer "measure" $m(x)$ with the properties
that $m(xy) \ge m(x)$ for any nonzero $x$ and $y$ and that, given
any $x$ and $y$, there exists a $q$ such that $y = qx + r$ and
either $r = 0$ or $m(r) < m(x)$. From an abstract point of view,
this is what is needed to prove that Euclid's Algorithm works. For
the domain of integers, the measure $m$ of an integer is the
absolute value of the integer itself. For the domain of polynomials,
the measure of a polynomial is its degree.
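For comparison, the integer version of Euclid's Algorithm from Chapter 1, where the measure $m$ is the absolute value, takes the familiar form; the polynomial version replaces `remainder` with a remainder operation on term lists:
::: smallscheme
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
:::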
[^127]: In an implementation like MIT Scheme, this
produces a polynomial that is indeed a divisor of $Q_1$ and $Q_2$,
but with rational coefficients. In many other Scheme systems, in
which division of integers can produce limited-precision decimal
numbers, we may fail to get a valid divisor.
[^128]: One extremely efficient and elegant method for computing
polynomial GCDs was discovered by Richard [Zippel
(1979)](#Zippel (1979)). The method is a probabilistic algorithm, as
is the fast test for primality that we discussed in [Chapter
1](#Chapter
1). Zippel's book ([Zippel 1993](#Zippel 1993)) describes this
method, together with other ways to compute polynomial
GCDs.
[^129]: Actually, this is not quite true. One exception was the
random-number generator in [Section 1.2.6](#Section 1.2.6). Another
exception involved the operation/type tables we introduced in
[Section 2.4.3](#Section 2.4.3), where the values of two calls to
`get` with the same arguments depended on intervening calls to
`put`. On the other hand, until we introduce assignment, we have no
way to create such procedures ourselves.
[^130]: The value of a `set!` expression is implementation-dependent.
`set!` should be used only for its effect, not for its value.
The name `set!` reflects a naming convention used in Scheme:
Operations that change the values of variables (or that change data
structures, as we will see in [Section 3.3](#Section 3.3)) are given
names that end with an exclamation point. This is similar to the
convention of designating predicates by names that end with a
question mark.
[^131]: We have already used `begin` implicitly in our programs, because
in Scheme the body of a procedure can be a sequence of expressions.
Also, the $\langle$*consequent*$\kern0.06em\rangle$ part of each
clause in a `cond` expression can be a sequence of expressions
rather than a single expression.
[^132]: In programming-language jargon, the variable `balance` is said
to be *encapsulated* within the `new/withdraw` procedure.
Encapsulation reflects the general system-design principle known as
the *hiding principle*: One can make a system more modular and
robust by protecting parts of the system from each other; that is,
by providing information access only to those parts of the system
that have a "need to know."
[^133]: In contrast with `new/withdraw` above, we do not have to use
`let` to make `balance` a local variable, since formal parameters
are already local. This will be clearer after the discussion of the
environment model of evaluation in [Section 3.2](#Section 3.2). (See
also [Exercise 3.10](#Exercise 3.10).)
[^134]: One common way to implement `rand/update` is to use the rule
that $x$ is updated to $ax + b$ modulo $m$, where $a$, $b$, and $m$
are appropriately chosen integers. Chapter 3 of [Knuth
1981](#Knuth 1981) includes an extensive discussion of techniques
for generating sequences of random numbers and establishing their
statistical properties. Notice that the `rand/update` procedure
computes a mathematical function: Given the same input twice, it
produces the same output. Therefore, the number sequence produced by
`rand/update` certainly is not "random," if by "random" we insist
that each number in the sequence is unrelated to the preceding
number. The relation between "real randomness" and so-called
*pseudo-random* sequences, which are produced by well-determined
computations and yet have suitable statistical properties, is a
complex question involving difficult issues in mathematics and
philosophy. Kolmogorov, Solomonoff, and Chaitin have made great
progress in clarifying these issues; a discussion can be found in
[Chaitin 1975](#Chaitin 1975).
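As a concrete sketch of this rule (the constants below are illustrative values of the kind used in common C libraries, not those of any particular Scheme system):
::: smallscheme
;; Linear-congruential rand-update: x -> (ax + b) mod m.
;; The constants a, b, and m here are for illustration only.
(define (rand-update x)
  (let ((a 1103515245) (b 12345) (m 2147483648))
    (remainder (+ (* a x) b) m)))
:::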
[^135]: This theorem is due to E. Cesàro. See section 4.5.2 of [Knuth
1981](#Knuth 1981) for a discussion and a proof.
[^136]: MIT Scheme provides such a procedure. If `random`
is given an exact integer (as in [Section 1.2.6](#Section 1.2.6)) it
returns an exact integer, but if it is given a decimal value (as in
this exercise) it returns a decimal value.
[^137]: We don't substitute for the occurrence of `balance` in the
`set!` expression because the $\langle$*name*$\kern0.08em\rangle$ in
a `set!` is not evaluated. If we did substitute for it, we would get
`(set! 25 (- 25 amount))`, which makes no sense.
[^138]: The phenomenon of a single computational object being accessed
by more than one name is known as *aliasing*. The joint bank account
situation illustrates a very simple example of an alias. In [Section
3.3](#Section 3.3) we will see much more complex examples, such as
"distinct" compound data structures that share parts. Bugs can occur
in our programs if we forget that a change to an object may also, as
a "side effect," change a "different" object because the two
"different" objects are actually a single object appearing under
different aliases. These so-called *side-effect bugs* are so
difficult to locate and to analyze that some people have proposed
that programming languages be designed in such a way as to not allow
side effects or aliasing ([Lampson et al.
1981](#Lampson et al. 1981); [Morris et al.
1980](#Morris et al. 1980)).
[^139]: In view of this, it is ironic that introductory programming is
most often taught in a highly imperative style. This may be a
vestige of a belief, common throughout the 1960s and 1970s, that
programs that call procedures must inherently be less efficient than
programs that perform assignments. ([Steele 1977](#Steele 1977)
debunks this argument.) Alternatively it may reflect a view that
step-by-step assignment is easier for beginners to visualize than
procedure call. Whatever the reason, it often saddles beginning
programmers with "should I set this variable before or after that
one" concerns that can complicate programming and obscure the
important ideas.
[^140]: Assignment introduces a subtlety into step 1 of the evaluation
rule. As shown in [Exercise 3.8](#Exercise 3.8), the presence of
assignment allows us to write expressions that will produce
different values depending on the order in which the subexpressions
in a combination are evaluated. Thus, to be precise, we should
specify an evaluation order in step 1 (e.g., left to right or right
to left). However, this order should always be considered to be an
implementation detail, and one should never write programs that
depend on some particular order. For instance, a sophisticated
compiler might optimize a program by varying the order in which
subexpressions are evaluated.
[^141]: If there is already a binding for the variable in the current
frame, then the binding is changed. This is convenient because it
allows redefinition of symbols; however, it also means that `define`
can be used to change values, and this brings up the issues of
assignment without explicitly using `set!`. Because of this, some
people prefer redefinitions of existing symbols to signal errors or
warnings.
[^142]: The environment model will not clarify our claim in [Section
1.2.1](#Section 1.2.1) that the interpreter can execute a procedure
such as `fact/iter` in a constant amount of space using tail
recursion. We will discuss tail recursion when we deal with the
control structure of the interpreter in [Section 5.4](#Section 5.4).
[^143]: Whether `W1` and `W2` share the same physical code stored in the
computer, or whether they each keep a copy of the code, is a detail
of the implementation. For the interpreter we implement in [Chapter
4](#Chapter 4), the code is in fact shared.
[^144]: `set/car!` and `set/cdr!` return implementation-dependent
values. Like `set!`, they should be used only for their effect.
[^145]: We see from this that mutation operations on lists can create
"garbage" that is not part of any accessible structure. We will see
in [Section 5.3.2](#Section 5.3.2) that Lisp memory-management
systems include a *garbage collector*, which identifies and recycles
the memory space used by unneeded pairs.
[^146]: `get/new/pair` is one of the operations that must be implemented
as part of the memory management required by a Lisp implementation.
We will discuss this in [Section 5.3.1](#Section 5.3.1).
[^147]: The two pairs are distinct because each call to `cons` returns a
new pair. The symbols are shared; in Scheme there is a unique symbol
with any given name. Since Scheme provides no way to mutate a
symbol, this sharing is undetectable. Note also that the sharing is
what enables us to compare symbols using `eq?`, which simply checks
equality of pointers.
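A small illustration of this point about pointer equality:
::: smallscheme
(define x (list 'a 'b))
(define y (list 'a 'b))

(eq? x y)               ; false -- two distinct pairs
(eq? (car x) (car y))   ; true  -- the symbol a is shared
(equal? x y)            ; true  -- same contents
:::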
[^148]: The subtleties of dealing with sharing of mutable data objects
reflect the underlying issues of "sameness" and "change" that were
raised in [Section 3.1.3](#Section 3.1.3). We mentioned there that
admitting change to our language requires that a compound object
must have an "identity" that is something different from the pieces
from which it is composed. In Lisp, we consider this "identity" to
be the quality that is tested by `eq?`, i.e., by equality of
pointers. Since in most Lisp implementations a pointer is
essentially a memory address, we are "solving the problem" of
defining the identity of objects by stipulating that a data object
"itself$\kern0.1em$" is the information stored in some particular
set of memory locations in the computer. This suffices for simple
Lisp programs, but is hardly a general way to resolve the issue of
"sameness" in computational models.
[^149]: On the other hand, from the viewpoint of implementation,
assignment requires us to modify the environment, which is itself a
mutable data structure. Thus, assignment and mutation are
equipotent: Each can be implemented in terms of the other.
[^150]: If the first item is the final item in the queue, the front
pointer will be the empty list after the deletion, which will mark
the queue as empty; we needn't worry about updating the rear
pointer, which will still point to the deleted item, because
`empty/queue?` looks only at the front pointer.
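For reference, the deletion operation being described is essentially the following (using the queue selectors and mutators from the text):
::: smallscheme
(define (delete-queue! queue)
  (cond ((empty-queue? queue)
         (error "DELETE! called with an empty queue" queue))
        (else
         (set-front-ptr! queue (cdr (front-ptr queue)))
         queue)))
:::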
[^151]: Be careful not to make the interpreter try to print a structure
that contains cycles. (See [Exercise 3.13](#Exercise 3.13).)
[^152]: Because `assoc` uses `equal?`, it can recognize keys that are
symbols, numbers, or list structure.
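For instance, keys need not be symbols:
::: smallscheme
(assoc '(1 2) '(((1 2) . a) ((3 4) . b)))
;; returns ((1 2) . a), since equal? compares list structure
:::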
[^153]: Thus, the first backbone pair is the object that represents the
table "itself$\kern0.1em$"; that is, a pointer to the table is a
pointer to this pair. This same backbone pair always starts the
table. If we did not arrange things in this way, `insert!` would
have to return a new value for the start of the table when it added
a new record.
[^154]: A full-adder is a basic circuit element used in adding two
binary numbers. Here A and B are the bits at corresponding positions
in the two numbers to be added, and $\rm C_{in}$ is the carry bit
from the addition one place to the right. The circuit generates SUM,
which is the sum bit in the corresponding position, and
$\rm C_{out}$, which is the carry bit to be propagated to the left.
[^155]: []{#Footnote 27 label="Footnote 27"} These procedures are simply
syntactic sugar that allow us to use ordinary procedural syntax to
access the local procedures of objects. It is striking that we can
interchange the role of "procedures" and "data" in such a simple
way. For example, if we write `(wire ’get/signal)` we think of
`wire` as a procedure that is called with the message `get/signal`
as input. Alternatively, writing `(get/signal wire)` encourages us
to think of `wire` as a data object that is the input to a procedure
`get/signal`. The truth of the matter is that, in a language in
which we can deal with procedures as objects, there is no
fundamental difference between "procedures" and "data," and we can
choose our syntactic sugar to allow us to program in whatever style
we choose.
[^156]: The agenda is a headed list, like the tables in [Section
3.3.3](#Section 3.3.3), but since the list is headed by the time, we
do not need an additional dummy header (such as the `*table*` symbol
used with tables).
[^157]: Observe that the `if` expression in this procedure has no
$\langle$*alternative*$\kern0.08em\rangle$ expression. Such a
"one-armed `if` statement" is used to decide whether to do
something, rather than to select between two expressions. An `if`
expression returns an unspecified value if the predicate is false
and there is no $\langle$*alternative*$\kern0.08em\rangle$.
[^158]: In this way, the current time will always be the time of the
action most recently processed. Storing this time at the head of the
agenda ensures that it will still be available even if the
associated time segment has been deleted.
[^159]: Constraint propagation first appeared in the incredibly
forward-looking Sketchpad system of Ivan [Sutherland
(1963)](#Sutherland (1963)). A beautiful constraint-propagation
system based on the Smalltalk language was developed by Alan
[Borning (1977)](#Borning (1977)) at Xerox Palo Alto Research
Center. Sussman, Stallman, and Steele applied constraint propagation
to electrical circuit analysis ([Sussman and Stallman
1975](#Sussman and Stallman 1975); [Sussman and Steele
1980](#Sussman and Steele 1980)). TK!Solver ([Konopasek and
Jayaraman 1984](#Konopasek and Jayaraman 1984)) is an extensive
modeling environment based on constraints.
[^160]: The `setter` might not be a constraint. In our temperature
example, we used `user` as the `setter`.
[^161]: The expression-oriented format is convenient because it avoids
the need to name the intermediate expressions in a computation. Our
original formulation of the constraint language is cumbersome in the
same way that many languages are cumbersome when dealing with
operations on compound data. For example, if we wanted to compute
the product $(a + b) \cdot (c + d)$, where the variables represent
vectors, we could work in "imperative style," using procedures that
set the values of designated vector arguments but do not themselves
return vectors as values:
::: smallscheme
(v-sum a b temp1)
(v-sum c d temp2)
(v-prod temp1 temp2 answer)
:::
Alternatively, we could deal with expressions, using procedures that
return vectors as values, and thus avoid explicitly mentioning
`temp1` and `temp2`:
::: smallscheme
(define answer (v-prod (v-sum a b) (v-sum c d)))
:::
Since Lisp allows us to return compound objects as values of
procedures, we can transform our imperative-style constraint
language into an expression-oriented style as shown in this
exercise. In languages that are impoverished in handling compound
objects, such as Algol, Basic, and Pascal (unless one explicitly
uses Pascal pointer variables), one is usually stuck with the
imperative style when manipulating compound objects. Given the
advantage of the expression-oriented format, one might ask if there
is any reason to have implemented the system in imperative style, as
we did in this section. One reason is that the
non-expression-oriented constraint language provides a handle on
constraint objects (e.g., the value of the `adder` procedure) as
well as on connector objects. This is useful if we wish to extend
the system with new operations that communicate with constraints
directly rather than only indirectly via operations on connectors.
Although it is easy to implement the expression-oriented style in
terms of the imperative implementation, it is very difficult to do
the converse.
[^162]: Most real processors actually execute a few operations at a
time, following a strategy called *pipelining*. Although this
technique greatly improves the effective utilization of the
hardware, it is used only to speed up the execution of a sequential
instruction stream, while retaining the behavior of the sequential
program.
[^163]: To quote some graffiti seen on a Cambridge building wall: "Time
is a device that was invented to keep everything from happening at
once."
[^164]: An even worse failure for this system could occur if the two
`set!` operations attempt to change the balance simultaneously, in
which case the actual data appearing in memory might end up being a
random combination of the information being written by the two
processes. Most computers have interlocks on the primitive
memory-write operations, which protect against such simultaneous
access. Even this seemingly simple kind of protection, however,
raises implementation challenges in the design of multiprocessing
computers, where elaborate *cache-coherence* protocols are required
to ensure that the various processors will maintain a consistent
view of memory contents, despite the fact that data may be
replicated ("cached") among the different processors to increase the
speed of memory access.
[^165]: The factorial program in [Section 3.1.3](#Section 3.1.3)
illustrates this for a single sequential process.
[^166]: The columns show the contents of Peter's wallet, the joint
account (in Bank1), Paul's wallet, and Paul's private account (in
Bank2), before and after each withdrawal (W) and deposit (D). Peter
withdraws \$10 from Bank1; Paul deposits \$5 in Bank2, then
withdraws \$25 from Bank1.
[^167]: []{#Footnote 39 label="Footnote 39"} A more formal way to
express this idea is to say that concurrent programs are inherently
*nondeterministic*. That is, they are described not by single-valued
functions, but by functions whose results are sets of possible
values. In [Section 4.3](#Section 4.3) we will study a language for
expressing nondeterministic computations.
[^168]: `parallel/execute` is not part of standard Scheme, but it can be
implemented in MIT Scheme. In our implementation, the
new concurrent processes also run concurrently with the original
Scheme process. Also, in our implementation, the value returned by
`parallel/execute` is a special control object that can be used to
halt the newly created processes.
[^169]: We have simplified `exchange` by exploiting the fact that our
`deposit` message accepts negative amounts. (This is a serious bug
in our banking system!)
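For context, the `exchange` procedure under discussion is essentially the following; when the difference is negative, the `deposit` of a negative amount is what moves money in the other direction:
::: smallscheme
(define (exchange account1 account2)
  (let ((difference (- (account1 'balance)
                       (account2 'balance))))
    ((account1 'withdraw) difference)
    ((account2 'deposit) difference)))
:::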
[^170]: If the account balances start out as \$10, \$20, and \$30, then
after any number of concurrent exchanges, the balances should still
be \$10, \$20, and \$30 in some order. Serializing the deposits to
individual accounts is not sufficient to guarantee this. See
[Exercise 3.43](#Exercise 3.43).
[^171]: [Exercise 3.45](#Exercise 3.45) investigates why deposits and
withdrawals are no longer automatically serialized by the account.
[^172]: The term "mutex" is an abbreviation for *mutual exclusion*. The
general problem of arranging a mechanism that permits concurrent
processes to safely share resources is called the mutual exclusion
problem. Our mutex is a simple variant of the *semaphore* mechanism
(see [Exercise 3.47](#Exercise 3.47)), which was introduced in the
"THE" Multiprogramming System developed at the Technological
University of Eindhoven and named for the university's initials in
Dutch ([Dijkstra 1968a](#Dijkstra 1968a)). The acquire and release
operations were originally called P and V, from the Dutch words
*passeren* (to pass) and *vrijgeven* (to release), in reference to
the semaphores used on railroad systems. Dijkstra's classic
exposition ([Dijkstra 1968b](#Dijkstra 1968b)) was one of the first
to clearly present the issues of concurrency control, and showed how
to use semaphores to handle a variety of concurrency problems.
[^173]: In most time-shared operating systems, processes that are
blocked by a mutex do not waste time "busy-waiting" as above.
Instead, the system schedules another process to run while the first
is waiting, and the blocked process is awakened when the mutex
becomes available.
[^174]: In MIT Scheme for a single processor, which uses a
time-slicing model, `test/and/set!` can be implemented as follows:
::: smallscheme
(define (test-and-set! cell)
  (without-interrupts
   (lambda ()
     (if (car cell)
         true
         (begin (set-car! cell true)
                false)))))
:::
`without/interrupts` disables time-slicing interrupts while its
procedure argument is being executed.
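For context, `test/and/set!` is used by the mutex's `acquire` operation; the retry loop below is the "busy-waiting" mentioned in the previous footnote (a sketch along the lines of the text):
::: smallscheme
(define (make-mutex)
  (let ((cell (list false)))
    (define (the-mutex m)
      (cond ((eq? m 'acquire)
             (if (test-and-set! cell)
                 (the-mutex 'acquire)))   ; cell was busy: retry
            ((eq? m 'release) (clear! cell))))
    the-mutex))

(define (clear! cell)
  (set-car! cell false))
:::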
[^175]: There are many variants of such instructions---including
test-and-set, test-and-clear, swap, compare-and-exchange,
load-reserve, and store-conditional---whose design must be carefully
matched to the machine's processor-memory interface. One issue that
arises here is to determine what happens if two processes attempt to
acquire the same resource at exactly the same time by using such an
instruction. This requires some mechanism for making a decision
about which process gets control. Such a mechanism is called an
*arbiter*. Arbiters usually boil down to some sort of hardware
device. Unfortunately, it is possible to prove that one cannot
physically construct a fair arbiter that works 100% of the time
unless one allows the arbiter an arbitrarily long time to make its
decision. The fundamental phenomenon here was originally observed by
the fourteenth-century French philosopher Jean Buridan in his
commentary on Aristotle's *De caelo*. Buridan argued that a
perfectly rational dog placed between two equally attractive sources
of food will starve to death, because it is incapable of deciding
which to go to first.
[^176]: The general technique for avoiding deadlock by numbering the
shared resources and acquiring them in order is due to [Havender
(1968)](#Havender (1968)). Situations where deadlock cannot be
avoided require *deadlock-recovery* methods, which entail having
processes "back out" of the deadlocked state and try again.
Deadlock-recovery mechanisms are widely used in database management
systems, a topic that is treated in detail in [Gray and Reuter
1993](#Gray and Reuter 1993).
[^177]: One such alternative to serialization is called *barrier
synchronization*. The programmer permits concurrent processes to
execute as they please, but establishes certain synchronization
points ("barriers") through which no process can proceed until all
the processes have reached the barrier. Modern processors provide
machine instructions that permit programmers to establish
synchronization points at places where consistency is required. The
PowerPC, for example, includes for this purpose two instructions
called SYNC and EIEIO (Enforced In-order
Execution of Input/Output).
[^178]: This may seem like a strange point of view, but there are
systems that work this way. International charges to credit-card
accounts, for example, are normally cleared on a per-country basis,
and the charges made in different countries are periodically
reconciled. Thus the account balance may be different in different
countries.
[^179]: For distributed systems, this perspective was pursued by
[Lamport (1978)](#Lamport (1978)), who showed how to use
communication to establish "global clocks" that can be used to
establish orderings on events in distributed systems.
[^180]: Physicists sometimes adopt this view by introducing the "world
lines" of particles as a device for reasoning about motion. We've
also already mentioned ([Section 2.2.3](#Section 2.2.3)) that this
is the natural way to think about signal-processing systems. We will
explore applications of streams to signal processing in [Section
3.5.3](#Section 3.5.3).
[^181]: Assume that we have a predicate `prime?` (e.g., as in [Section
1.2.6](#Section 1.2.6)) that tests for primality.
[^182]: In the MIT implementation, `the/empty/stream` is
the same as the empty list `’()`, and `stream/null?` is the same as
`null?`.
[^183]: This should bother you. The fact that we are defining such
similar procedures for streams and lists indicates that we are
missing some underlying abstraction. Unfortunately, in order to
exploit this abstraction, we will need to exert finer control over
the process of evaluation than we can at present. We will discuss
this point further at the end of [Section 3.5.4](#Section 3.5.4). In
[Section 4.2](#Section 4.2), we'll develop a framework that unifies
lists and streams.
[^184]: Although `stream/car` and `stream/cdr` can be defined as
procedures, `cons/stream` must be a special form. If `cons/stream`
were a procedure, then, according to our model of evaluation,
evaluating
`(cons/stream `$\langle$*`a`*$\rangle$` `$\langle$*`b`*$\rangle$`)`
would automatically cause $\langle$*b*$\kern0.08em\rangle$ to be
evaluated, which is precisely what we do not want to happen. For the
same reason, `delay` must be a special form, though `force` can be
an ordinary procedure.
[^185]: The numbers shown here do not really appear in the delayed
expression. What actually appears is the original expression, in an
environment in which the variables are bound to the appropriate
numbers. For example, `(+ low 1)` with `low` bound to 10,000
actually appears where `10001` is shown.
[^186]: There are many possible implementations of streams other than
the one described in this section. Delayed evaluation, which is the
key to making streams practical, was inherent in Algol 60's
*call-by-name* parameter-passing method. The use of this mechanism
to implement streams was first described by [Landin
(1965)](#Landin (1965)). Delayed evaluation for streams was
introduced into Lisp by [Friedman and Wise
(1976)](#Friedman and Wise (1976)). In their implementation, `cons`
always delays evaluating its arguments, so that lists automatically
behave as streams. The memoizing optimization is also known as
*call-by-need*. The Algol community would refer to our original
delayed objects as *call-by-name thunks* and to the optimized
versions as *call-by-need thunks*.
[^187]: Exercises such as [Exercise 3.51](#Exercise 3.51) and [Exercise
3.52](#Exercise 3.52) are valuable for testing our understanding of
how `delay` works. On the other hand, intermixing delayed evaluation
with printing---and, even worse, with assignment---is extremely
confusing, and instructors of courses on computer languages have
traditionally tormented their students with examination questions
such as the ones in this section. Needless to say, writing programs
that depend on such subtleties is odious programming style. Part of
the power of stream processing is that it lets us ignore the order
in which events actually happen in our programs. Unfortunately, this
is precisely what we cannot afford to do in the presence of
assignment, which forces us to be concerned with time and change.
[^188]: Eratosthenes, a third-century B.C. Alexandrian
Greek philosopher, is famous for giving the first accurate estimate
of the circumference of the Earth, which he computed by observing
shadows cast at noon on the day of the summer solstice.
Eratosthenes's sieve method, although ancient, has formed the basis
for special-purpose hardware "sieves" that, until recently, were the
most powerful tools in existence for locating large primes. Since
the 70s, however, these methods have been superseded by outgrowths
of the probabilistic techniques discussed in [Section
1.2.6](#Section 1.2.6).
[^189]: We have named these figures after Peter Henderson, who was the
first person to show us diagrams of this sort as a way of thinking
about stream processing. Each solid line represents a stream of
values being transmitted. The dashed line from the `car` to the
`cons` and the `filter` indicates that this is a single value rather
than a stream.
[^190]: This uses the generalized version of `stream/map` from [Exercise
3.50](#Exercise 3.50).
[^191]: This last point is very subtle and relies on the fact that
$p_{n+1} \le p_n^2$. (Here, $p_k$ denotes the $k^{\mathrm{th}}$
prime.) Estimates such as these are very difficult to establish. The
ancient proof by Euclid that there are an infinite number of primes
shows that $p_{n+1} \le p_1 p_2 \cdots p_n + 1$, and no
substantially better result was proved until 1851, when the Russian
mathematician P. L. Chebyshev established that $p_{n+1} \le 2p_n$
for all $n$. This result, originally conjectured in 1845, is known
as *Bertrand's hypothesis*. A proof can be found in section 22.3 of
[Hardy and Wright 1960](#Hardy and Wright 1960).
[^192]: This exercise shows how call-by-need is closely related to
ordinary memoization as described in [Exercise
3.27](#Exercise 3.27). In that exercise, we used assignment to
explicitly construct a local table. Our call-by-need stream
optimization effectively constructs such a table automatically,
storing values in the previously forced parts of the stream.
[^193]: We can't use `let` to bind the local variable `guesses`, because
the value of `guesses` depends on `guesses` itself. [Exercise
3.63](#Exercise 3.63) addresses why we want a local variable here.
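The contrast is roughly the following (a sketch, assuming `sqrt/improve` as in the text): the internal `define` allows `guesses` to refer to itself, which a `let` binding cannot do.
::: smallscheme
(define (sqrt-stream x)
  (define guesses
    (cons-stream 1.0
                 (stream-map (lambda (guess) (sqrt-improve guess x))
                             guesses)))
  guesses)
:::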
[^194]: As in [Section 2.2.3](#Section 2.2.3), we represent a pair of
integers as a list rather than a Lisp pair.
[^195]: See [Exercise 3.68](#Exercise 3.68) for some insight into why we
chose this decomposition.
[^196]: The precise statement of the required property on the order of
combination is as follows: There should be a function $f$ of two
arguments such that the pair corresponding to element $i$ of the
first stream and element $j$ of the second stream will appear as
element number $f(i, j)$ of the output stream. The trick of using
`interleave` to accomplish this was shown to us by David Turner, who
employed it in the language KRC ([Turner 1981](#Turner 1981)).
[^197]: We will require that the weighting function be such that the
weight of a pair increases as we move out along a row or down along
a column of the array of pairs.
[^198]: To quote from G. H. Hardy's obituary of Ramanujan ([Hardy
1921](#Hardy 1921)): "It was Mr. Littlewood (I believe) who remarked
that 'every positive integer was one of his friends.' I remember
once going to see him when he was lying ill at Putney. I had ridden
in taxi-cab No. 1729, and remarked that the number seemed to me a
rather dull one, and that I hoped it was not an unfavorable omen.
'No,' he replied, 'it is a very interesting number; it is the
smallest number expressible as the sum of two cubes in two different
ways.' " The trick of using weighted pairs to generate the Ramanujan
numbers was shown to us by Charles Leiserson.
[^199]: This procedure is not guaranteed to work in all Scheme
implementations, although for any implementation there is a simple
variation that will work. The problem has to do with subtle
differences in the ways that Scheme implementations handle internal
definitions. (See [Section 4.1.6](#Section 4.1.6).)
[^200]: This is a small reflection, in Lisp, of the difficulties that
conventional strongly typed languages such as Pascal have in coping
with higher-order procedures. In such languages, the programmer must
specify the data types of the arguments and the result of each
procedure: number, logical value, sequence, and so on. Consequently,
we could not express an abstraction such as "map a given procedure
`proc` over all the elements in a sequence" by a single higher-order
procedure such as `stream/map`. Rather, we would need a different
mapping procedure for each different combination of argument and
result data types that might be specified for a `proc`. Maintaining
a practical notion of "data type" in the presence of higher-order
procedures raises many difficult issues. One way of dealing with
this problem is illustrated by the language ML ([Gordon et al.
1979](#Gordon et al. 1979)), whose "polymorphic data types" include
templates for higher-order transformations between data types.
Moreover, data types for most procedures in ML are never explicitly
declared by the programmer. Instead, ML includes a
*type-inferencing* mechanism that uses information in the
environment to deduce the data types for newly defined procedures.
[^201]: Similarly in physics, when we observe a moving particle, we say
that the position (state) of the particle is changing. However, from
the perspective of the particle's world line in space-time there is
no change involved.
[^202]: John Backus, the inventor of Fortran, gave high visibility to
functional programming when he was awarded the acm
Turing award in 1978. His acceptance speech ([Backus
1978](#Backus 1978)) strongly advocated the functional approach. A
good overview of functional programming is given in [Henderson
1980](#Henderson 1980) and in [Darlington et al.
1982](#Darlington et al. 1982).
[^203]: Observe that, for any two streams, there is in general more than
one acceptable order of interleaving. Thus, technically, "merge" is
a relation rather than a function---the answer is not a
deterministic function of the inputs. We already mentioned
([Footnote 39](#Footnote 39)) that nondeterminism is essential when
dealing with concurrency. The merge relation illustrates the same
essential nondeterminism, from the functional perspective. In
[Section 4.3](#Section 4.3), we will look at nondeterminism from yet
another point of view.
[^204]: The object model approximates the world by dividing it into
separate pieces. The functional model does not modularize along
object boundaries. The object model is useful when the unshared
state of the "objects" is much larger than the state that they
share. An example of a place where the object viewpoint fails is
quantum mechanics, where thinking of things as individual particles
leads to paradoxes and confusions. Unifying the object view with the
functional view may have little to do with programming, but rather
with fundamental epistemological issues.
[^205]: The same idea is pervasive throughout all of engineering. For
example, electrical engineers use many different languages for
describing circuits. Two of these are the language of electrical
*networks* and the language of electrical *systems*. The network
language emphasizes the physical modeling of devices in terms of
discrete electrical elements. The primitive objects of the network
language are primitive electrical components such as resistors,
capacitors, inductors, and transistors, which are characterized in
terms of physical variables called voltage and current. When
describing circuits in the network language, the engineer is
concerned with the physical characteristics of a design. In
contrast, the primitive objects of the system language are
signal-processing modules such as filters and amplifiers. Only the
functional behavior of the modules is relevant, and signals are
manipulated without concern for their physical realization as
voltages and currents. The system language is erected on the network
language, in the sense that the elements of signal-processing
systems are constructed from electrical networks. Here, however, the
concerns are with the large-scale organization of electrical devices
to solve a given application problem; the physical feasibility of
the parts is assumed. This layered collection of languages is
another example of the stratified design technique illustrated by
the picture language of [Section 2.2.4](#Section 2.2.4).
[^206]: The most important features that our evaluator leaves out are
mechanisms for handling errors and supporting debugging. For a more
extensive discussion of evaluators, see [Friedman et al.
1992](#Friedman et al. 1992), which gives an exposition of
programming languages that proceeds via a sequence of evaluators
written in Scheme.
[^207]: Even so, there will remain important aspects of the evaluation
process that are not elucidated by our evaluator. The most important
of these are the detailed mechanisms by which procedures call other
procedures and return values to their callers. We will address these
issues in [Chapter 5](#Chapter 5), where we take a closer look at
the evaluation process by implementing the evaluator as a simple
register machine.
[^208]: If we grant ourselves the ability to apply primitives, then what
remains for us to implement in the evaluator? The job of the
evaluator is not to specify the primitives of the language, but
rather to provide the connective tissue---the means of combination
and the means of abstraction---that binds a collection of primitives
to form a language. Specifically:
$\bullet$ The evaluator enables us to deal with nested expressions.
For example, although simply applying primitives would suffice for
evaluating the expression `(+ 1 6)`, it is not adequate for handling
`(+ 1 (* 2 3))`. As far as the primitive procedure `+` is concerned,
its arguments must be numbers, and it would choke if we passed it
the expression `(* 2 3)` as an argument. One important role of the
evaluator is to choreograph procedure composition so that `(* 2 3)`
is reduced to 6 before being passed as an argument to `+`.
$\bullet$ The evaluator allows us to use variables. For example, the
primitive procedure for addition has no way to deal with expressions
such as `(+ x 1)`. We need an evaluator to keep track of variables
and obtain their values before invoking the primitive procedures.
$\bullet$ The evaluator allows us to define compound procedures.
This involves keeping track of procedure definitions, knowing how to
use these definitions in evaluating expressions, and providing a
mechanism that enables procedures to accept arguments.
$\bullet$ The evaluator provides the special forms, which must be
evaluated differently from procedure calls.
[^209]: We could have simplified the `application?` clause in `eval` by
using `map` (and stipulating that `operands` returns a list) rather
than writing an explicit `list/of/values` procedure. We chose not to
use `map` here to emphasize the fact that the evaluator can be
implemented without any use of higher-order procedures (and thus
could be written in a language that doesn't have higher-order
procedures), even though the language that it supports will include
higher-order procedures.
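For reference, the explicit `list/of/values` procedure, written without `map`, is simply:
::: smallscheme
(define (list-of-values exps env)
  (if (no-operands? exps)
      '()
      (cons (eval (first-operand exps) env)
            (list-of-values (rest-operands exps) env))))
:::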
[^210]: In this case, the language being implemented and the
implementation language are the same. Contemplation of the meaning
of `true?` here yields expansion of consciousness without the abuse
of substance.
[^211]: This implementation of `define` ignores a subtle issue in the
handling of internal definitions, although it works correctly in
most cases. We will see what the problem is and how to solve it in
[Section 4.1.6](#Section 4.1.6).
[^212]: As we said when we introduced `define` and `set!`, these values
are implementation-dependent in Scheme---that is, the implementor
can choose what value to return.
[^213]: As mentioned in [Section 2.3.1](#Section 2.3.1), the evaluator
sees a quoted expression as a list beginning with `quote`, even if
the expression is typed with the quotation mark. For example, the
expression `’a` would be seen by the evaluator as `(quote a)`. See
[Exercise 2.55](#Exercise 2.55).
[^214]: The value of an `if` expression when the predicate is false and
there is no alternative is unspecified in Scheme; we have chosen
here to make it false. We will support the use of the variables
`true` and `false` in expressions to be evaluated by binding them in
the global environment. See [Section 4.1.4](#Section 4.1.4).
[^215]: These selectors for a list of expressions---and the
corresponding ones for a list of operands---are not intended as a
data abstraction. They are introduced as mnemonic names for the
basic list operations in order to make it easier to understand the
explicit-control evaluator in [Section 5.4](#Section 5.4).
[^216]: The value of a `cond` expression when all the predicates are
false and there is no `else` clause is unspecified in Scheme; we
have chosen here to make it false.
[^217]: Practical Lisp systems provide a mechanism that allows a user to
add new derived expressions and specify their implementation as
syntactic transformations without modifying the evaluator. Such a
user-defined transformation is called a *macro*. Although it is easy
to add an elementary mechanism for defining macros, the resulting
language has subtle name-conflict problems. There has been much
research on mechanisms for macro definition that do not cause these
difficulties. See, for example, [Kohlbecker 1986](#Kohlbecker 1986),
[Clinger and Rees 1991](#Clinger and Rees 1991), and [Hanson
1991](#Hanson 1991).
[^218]: Frames are not really a data abstraction in the following code:
`set/variable/value!` and `define/variable!` use `set/car!` to
directly modify the values in a frame. The purpose of the frame
procedures is to make the environment-manipulation procedures easy
to read.
[^219]: The drawback of this representation (as well as the variant in
[Exercise 4.11](#Exercise 4.11)) is that the evaluator may have to
search through many frames in order to find the binding for a given
variable. (Such an approach is referred to as *deep binding*.) One
way to avoid this inefficiency is to make use of a strategy called
*lexical addressing*, which will be discussed in [Section
5.5.6](#Section 5.5.6).
[^220]: Any procedure defined in the underlying Lisp can be used as a
primitive for the metacircular evaluator. The name of a primitive
installed in the evaluator need not be the same as the name of its
implementation in the underlying Lisp; the names are the same here
because the metacircular evaluator implements Scheme itself. Thus,
for example, we could put `(list ’first car)` or
`(list ’square (lambda (x) (* x x)))` in the list of
`primitive/procedures`.
[^221]: `apply/in/underlying/scheme` is the `apply` procedure we have
used in earlier chapters. The metacircular evaluator's `apply`
procedure ([Section 4.1.1](#Section 4.1.1)) models the working of
this primitive. Having two different things called `apply` leads to
a technical problem in running the metacircular evaluator, because
defining the metacircular evaluator's `apply` will mask the
definition of the primitive. One way around this is to rename the
metacircular `apply` to avoid conflict with the name of the
primitive procedure. We have assumed instead that we have saved a
reference to the underlying `apply` by doing
::: smallscheme
(define apply-in-underlying-scheme apply)
:::
before defining the metacircular `apply`. This allows us to access
the original version of `apply` under a different name.
[^222]: The primitive procedure `read` waits for input from the user,
and returns the next complete expression that is typed. For example,
if the user types `(+ 23 x)`, `read` returns a three-element list
containing the symbol `+`, the number 23, and the symbol `x`. If the
user types `’x`, `read` returns a two-element list containing the
symbol `quote` and the symbol `x`.
[^223]: The fact that the machines are described in Lisp is inessential.
If we give our evaluator a Lisp program that behaves as an evaluator
for some other language, say C, the Lisp evaluator will emulate the
C evaluator, which in turn can emulate any machine described as a C
program. Similarly, writing a Lisp evaluator in C produces a C
program that can execute any Lisp program. The deep idea here is
that any evaluator can emulate any other. Thus, the notion of "what
can in principle be computed" (ignoring practicalities of time and
memory required) is independent of the language or the computer, and
instead reflects an underlying notion of *computability*. This was
first demonstrated in a clear way by Alan M. Turing (1912-1954),
whose 1936 paper laid the foundations for theoretical computer
science. In the paper, Turing presented a simple computational
model---now known as a *Turing machine*---and argued that any
"effective process" can be formulated as a program for such a
machine. (This argument is known as the *Church-Turing thesis*.)
Turing then implemented a universal machine, i.e., a Turing machine
that behaves as an evaluator for Turing-machine programs. He used
this framework to demonstrate that there are well-posed problems
that cannot be computed by Turing machines (see [Exercise
4.15](#Exercise 4.15)), and so by implication cannot be formulated
as "effective processes." Turing went on to make fundamental
contributions to practical computer science as well. For example, he
invented the idea of structuring programs using general-purpose
subroutines. See [Hodges 1983](#Hodges 1983) for a biography of
Turing.
[^224]: Some people find it counterintuitive that an evaluator, which is
implemented by a relatively simple procedure, can emulate programs
that are more complex than the evaluator itself. The existence of a
universal evaluator machine is a deep and wonderful property of
computation. *Recursion theory*, a branch of mathematical logic, is
concerned with logical limits of computation. Douglas Hofstadter's
beautiful book *Gödel, Escher, Bach* explores some of these ideas
([Hofstadter 1979](#Hofstadter 1979)).
[^225]: Warning: This `eval` primitive is not identical to the `eval`
procedure we implemented in [Section 4.1.1](#Section 4.1.1), because
it uses *actual* Scheme environments rather than the sample
environment structures we built in [Section 4.1.3](#Section 4.1.3).
These actual environments cannot be manipulated by the user as
ordinary lists; they must be accessed via `eval` or other special
operations. Similarly, the `apply` primitive we saw earlier is not
identical to the metacircular `apply`, because it uses actual Scheme
procedures rather than the procedure objects we constructed in
[Section 4.1.3](#Section 4.1.3) and [Section 4.1.4](#Section 4.1.4).
[^226]: The MIT implementation of Scheme includes `eval`,
as well as a symbol `user/initial/environment` that is bound to the
initial environment in which the user's input expressions are
evaluated.
[^227]: Although we stipulated that `halts?` is given a procedure
object, notice that this reasoning still applies even if `halts?`
can gain access to the procedure's text and its environment. This is
Turing's celebrated *Halting Theorem*, which gave the first clear
example of a *non-computable* problem, i.e., a well-posed task that
cannot be carried out as a computational procedure.
[^228]: Wanting programs to not depend on this evaluation mechanism is
the reason for the "management is not responsible" remark in
[Footnote 28](#Footnote 28) of [Chapter 1](#Chapter 1). By insisting
that internal definitions come first and do not use each other while
the definitions are being evaluated, the IEEE standard
for Scheme leaves implementors some choice in the mechanism used to
evaluate these definitions. The choice of one evaluation rule rather
than another here may seem like a small issue, affecting only the
interpretation of "badly formed" programs. However, we will see in
[Section 5.5.6](#Section 5.5.6) that moving to a model of
simultaneous scoping for internal definitions avoids some nasty
difficulties that would otherwise arise in implementing a compiler.
[^229]: The IEEE standard for Scheme allows for different
implementation strategies by specifying that it is up to the
programmer to obey this restriction, not up to the implementation to
enforce it. Some Scheme implementations, including MIT Scheme, use
the transformation shown above. Thus, some programs that
don't obey this restriction will in fact run in such
implementations.
[^230]: The MIT implementors of Scheme support Alyssa on
the following grounds: Eva is in principle correct---the definitions
should be regarded as simultaneous. But it seems difficult to
implement a general, efficient mechanism that does what Eva
requires. In the absence of such a mechanism, it is better to
generate an error in the difficult cases of simultaneous definitions
(Alyssa's notion) than to produce an incorrect answer (as Ben would
have it).
[^231]: This example illustrates a programming trick for formulating
recursive procedures without using `define`. The most general trick
of this sort is the $Y$ *operator*, which can be used to give a
"pure λ-calculus" implementation of recursion. (See [Stoy
1977](#Stoy 1977) for details on the λ-calculus, and [Gabriel
1988](#Gabriel 1988) for an exposition of the $Y$ operator in
Scheme.)
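A minimal sketch of the idea in Scheme (an applicative-order variant of the $Y$ operator; the eta-expansion `(lambda (n) ((x x) n))` is what keeps the definition from looping under applicative-order evaluation):
::: smallscheme
(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (n) ((x x) n))))
     (lambda (x) (f (lambda (n) ((x x) n)))))))

(define factorial
  (Y (lambda (self)
       (lambda (n)
         (if (= n 0)
             1
             (* n (self (- n 1))))))))

(factorial 5)   ; 120
:::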
[^232]: This technique is an integral part of the compilation process,
which we shall discuss in [Chapter 5](#Chapter 5). Jonathan Rees
wrote a Scheme interpreter like this in about 1982 for the T project
([Rees and Adams 1982](#Rees and Adams 1982)). Marc [Feeley
(1986)](#Feeley (1986)) (see also [Feeley and Lapalme
1987](#Feeley and Lapalme 1987)) independently invented this
technique in his master's thesis.
[^233]: There is, however, an important part of the variable search that
*can* be done as part of the syntactic analysis. As we will show in
[Section 5.5.6](#Section 5.5.6), one can determine the position in
the environment structure where the value of the variable will be
found, thus obviating the need to scan the environment for the entry
that matches the variable.
[^234]: See [Exercise 4.23](#Exercise 4.23) for some insight into the
processing of sequences.
[^235]: Snarf: "To grab, especially a large document or file for the
purpose of using it either with or without the owner's permission."
Snarf down: "To snarf, sometimes with the connotation of absorbing,
processing, or understanding." (These definitions were snarfed from
[Steele et al. 1983](#Steele et al. 1983). See also [Raymond
1993](#Raymond 1993).)
[^236]: The difference between the "lazy" terminology and the
"normal-order" terminology is somewhat fuzzy. Generally, "lazy"
refers to the mechanisms of particular evaluators, while
"normal-order" refers to the semantics of languages, independent of
any particular evaluation strategy. But this is not a hard-and-fast
distinction, and the two terminologies are often used
interchangeably.
[^237]: The "strict" versus "non-strict" terminology means essentially
the same thing as "applicative-order" versus "normal-order," except
that it refers to individual procedures and arguments rather than to
the language as a whole. At a conference on programming languages
you might hear someone say, "The normal-order language Hassle has
certain strict primitives. Other procedures take their arguments by
lazy evaluation."
[^238]: The word *thunk* was invented by an informal working group that
was discussing the implementation of call-by-name in Algol 60. They
observed that most of the analysis of ("thinking about") the
expression could be done at compile time; thus, at run time, the
expression would already have been "thunk" about ([Ingerman et al.
1960](#Ingerman et al. 1960)).
[^239]: This is analogous to the use of `force` on the delayed objects
that were introduced in [Chapter 3](#Chapter 3) to represent
streams. The critical difference between what we are doing here and
what we did in [Chapter 3](#Chapter 3) is that we are building
delaying and forcing into the evaluator, and thus making this
uniform and automatic throughout the language.
[^240]: Lazy evaluation combined with memoization is sometimes referred
to as *call-by-need* argument passing, in contrast to *call-by-name*
argument passing. (Call-by-name, introduced in Algol 60, is similar
to non-memoized lazy evaluation.) As language designers, we can
build our evaluator to memoize, not to memoize, or leave this an
option for programmers ([Exercise 4.31](#Exercise 4.31)). As you
might expect from [Chapter 3](#Chapter 3), these choices raise
issues that become both subtle and confusing in the presence of
assignments. (See [Exercise 4.27](#Exercise 4.27) and [Exercise
4.29](#Exercise 4.29).) An excellent article by [Clinger
(1982)](#Clinger (1982)) attempts to clarify the multiple dimensions
of confusion that arise here.
[^241]: Notice that we also erase the `env` from the thunk once the
expression's value has been computed. This makes no difference in
the values returned by the interpreter. It does help save space,
however, because removing the reference from the thunk to the `env`
once it is no longer needed allows this structure to be
*garbage-collected* and its space recycled, as we will discuss in
[Section 5.3](#Section 5.3).
Similarly, we could have allowed unneeded environments in the
memoized delayed objects of [Section 3.5.1](#Section 3.5.1) to be
garbage-collected, by having `memo/proc` do something like
`(set! proc ’())` to discard the procedure `proc` (which includes
the environment in which the `delay` was evaluated) after storing
its value.
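The memoizing `force/it` being described does this bookkeeping when a thunk is first forced:
::: smallscheme
(define (force-it obj)
  (cond ((thunk? obj)
         (let ((result (actual-value (thunk-exp obj)
                                     (thunk-env obj))))
           (set-car! obj 'evaluated-thunk)
           (set-car! (cdr obj) result)   ; replace exp with its value
           (set-cdr! (cdr obj) '())      ; forget unneeded env
           result))
        ((evaluated-thunk? obj) (thunk-value obj))
        (else obj)))
:::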
[^242]: This exercise demonstrates that the interaction between lazy
evaluation and side effects can be very confusing. This is just what
you might expect from the discussion in [Chapter 3](#Chapter 3).
[^243]: This is precisely the issue with the `unless` procedure, as in
[Exercise 4.26](#Exercise 4.26).
[^244]: This is the procedural representation described in [Exercise
2.4](#Exercise 2.4). Essentially any procedural representation
(e.g., a message-passing implementation) would do as well. Notice
that we can install these definitions in the lazy evaluator simply
by typing them at the driver loop. If we had originally included
`cons`, `car`, and `cdr` as primitives in the global environment,
they would simply be redefined. (Also see [Exercise 4.33](#Exercise 4.33)
and [Exercise 4.34](#Exercise 4.34).)
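For reference, a minimal sketch of such a procedural representation, in the spirit of [Exercise 2.4](#Exercise 2.4); these are the sort of definitions one could type at the lazy evaluator's driver loop:
::: smallscheme
(define (cons x y) (lambda (m) (m x y)))
(define (car z) (z (lambda (p q) p)))
(define (cdr z) (z (lambda (p q) q)))
:::
Under the lazy evaluator these give lazy pairs, since the arguments to the compound `cons` are delayed automatically.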
[^245]: This permits us to create delayed versions of more general kinds
of list structures, not just sequences. [Hughes 1990](#Hughes 1990)
discusses some applications of "lazy trees."
[^246]: We assume that we have previously defined a procedure `prime?`
that tests whether numbers are prime. Even with `prime?` defined,
the `prime/sum/pair` procedure may look suspiciously like the
unhelpful "pseudo-Lisp" attempt to define the square-root function,
which we described at the beginning of [Section
1.1.7](#Section 1.1.7). In fact, a square-root procedure along those
lines can actually be formulated as a nondeterministic program. By
incorporating a search mechanism into the evaluator, we are eroding
the distinction between purely declarative descriptions and
imperative specifications of how to compute answers. We'll go even
farther in this direction in [Section 4.4](#Section 4.4).
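One possible `prime?`, along the lines of the trial-division test of [Section 1.2.6](#Section 1.2.6) (a sketch; any correct primality test would do):
::: smallscheme
(define (prime? n)                         ; assumes n >= 2
  (define (square x) (* x x))
  (define (divides? a b) (= (remainder b a) 0))
  (define (find-divisor n test-divisor)
    (cond ((> (square test-divisor) n) n)
          ((divides? test-divisor n) test-divisor)
          (else (find-divisor n (+ test-divisor 1)))))
  (= n (find-divisor n 2)))
:::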
[^247]: The idea of `amb` for nondeterministic programming was first
described in 1961 by John McCarthy (see [McCarthy
1963](#McCarthy 1963)).
[^248]: In actuality, the distinction between nondeterministically
returning a single choice and returning all choices depends somewhat
on our point of view. From the perspective of the code that uses the
value, the nondeterministic choice returns a single value. From the
perspective of the programmer designing the code, the
nondeterministic choice potentially returns all possible values, and
the computation branches so that each value is investigated
separately.
[^249]: One might object that this is a hopelessly inefficient
mechanism. It might require millions of processors to solve some
easily stated problem this way, and most of the time most of those
processors would be idle. This objection should be taken in the
context of history. Memory used to be considered just such an
expensive commodity. In 1964 a megabyte of RAM cost about \$400,000.
Now every personal computer has many megabytes of RAM, and most of
the time most of that RAM is unused. It is hard to underestimate the
cost of mass-produced electronics.
[^250]: Automagically: "Automatically, but in a way which, for some
reason (typically because it is too complicated, or too ugly, or
perhaps even too trivial), the speaker doesn't feel like
explaining." ([Steele et al. 1983](#Steele et al. 1983), [Raymond
1993](#Raymond 1993))[]{#Footnote 4.47 label="Footnote 4.47"}
[^251]: The integration of automatic search strategies into programming
languages has had a long and checkered history. The first
suggestions that nondeterministic algorithms might be elegantly
encoded in a programming language with search and automatic
backtracking came from Robert [Floyd (1967)](#Floyd (1967)). Carl
[Hewitt (1969)](#Hewitt (1969)) invented a programming language
called Planner that explicitly supported automatic chronological
backtracking, providing for a built-in depth-first search strategy.
[Sussman et al. (1971)](#Sussman et al. (1971)) implemented a subset
of this language, called MicroPlanner, which was used to support
work in problem solving and robot planning. Similar ideas, arising
from logic and theorem proving, led to the genesis in Edinburgh and
Marseille of the elegant language Prolog (which we will discuss in
[Section 4.4](#Section 4.4)). After sufficient frustration with
automatic search, [McDermott and Sussman
(1972)](#McDermott and Sussman (1972)) developed a language called
Conniver, which included mechanisms for placing the search strategy
under programmer control. This proved unwieldy, however, and
[Sussman and Stallman 1975](#Sussman and Stallman 1975) found a more
tractable approach while investigating methods of symbolic analysis
for electrical circuits. They developed a non-chronological
backtracking scheme that was based on tracing out the logical
dependencies connecting facts, a technique that has come to be known
as *dependency-directed backtracking*. Although their method was
complex, it produced reasonably efficient programs because it did
little redundant search. [Doyle (1979)](#Doyle (1979)) and
[McAllester (1978; 1980)](#McAllester (1978; 1980)) generalized and
clarified the methods of Stallman and Sussman, developing a new
paradigm for formulating search that is now called *truth
maintenance*. Modern problem-solving systems all use some form of
truth-maintenance system as a substrate. See [Forbus and deKleer
1993](#Forbus and deKleer 1993) for a discussion of elegant ways to
build truth-maintenance systems and applications using truth
maintenance. [Zabih et al. 1987](#Zabih et al. 1987) describes a
nondeterministic extension to Scheme that is based on `amb`; it is
similar to the interpreter described in this section, but more
sophisticated, because it uses dependency-directed backtracking
rather than chronological backtracking. [Winston
1992](#Winston 1992) gives an introduction to both kinds of
backtracking.
[^252]: Our program uses the following procedure to determine if the
elements of a list are distinct:
::: smallscheme
(define (distinct? items)
  (cond ((null? items) true)
        ((null? (cdr items)) true)
        ((member (car items) (cdr items)) false)
        (else (distinct? (cdr items)))))
:::
`member` is like `memq` except that it uses `equal?` instead of
`eq?` to test for equality.
[^253]: This is taken from a booklet called "Problematical Recreations,"
published in the 1960s by Litton Industries, where it is attributed
to the *Kansas State Engineer*.
[^254]: Here we use the convention that the first element of each list
designates the part of speech for the rest of the words in the list.
[^255]: Notice that `parse/word` uses `set!` to modify the unparsed
input list. For this to work, our `amb` evaluator must undo the
effects of `set!` operations when it backtracks.
[^256]: Observe that this definition is recursive---a verb may be
followed by any number of prepositional phrases.
[^257]: This kind of grammar can become arbitrarily complex, but it is
only a toy as far as real language understanding is concerned. Real
natural-language understanding by computer requires an elaborate
mixture of syntactic analysis and interpretation of meaning. On the
other hand, even toy parsers can be useful in supporting flexible
command languages for programs such as information-retrieval
systems. [Winston 1992](#Winston 1992) discusses computational
approaches to real language understanding and also the applications
of simple grammars to command languages.
[^258]: Although Alyssa's idea works just fine (and is surprisingly
simple), the sentences that it generates are a bit boring---they
don't sample the possible sentences of this language in a very
interesting way. In fact, the grammar is highly recursive in many
places, and Alyssa's technique "falls into" one of these recursions
and gets stuck. See [Exercise 4.50](#Exercise 4.50) for a way to
deal with this.
[^259]: We chose to implement the lazy evaluator in [Section
4.2](#Section 4.2) as a modification of the ordinary metacircular
evaluator of [Section 4.1.1](#Section 4.1.1). In contrast, we will
base the `amb` evaluator on the analyzing evaluator of [Section
4.1.7](#Section 4.1.7), because the execution procedures in that
evaluator provide a convenient framework for implementing
backtracking.
[^260]: We assume that the evaluator supports `let` (see [Exercise
4.22](#Exercise 4.22)), which we have used in our nondeterministic
programs.
[^261]: We didn't worry about undoing definitions, since we can assume
that internal definitions are scanned out ([Section
4.1.6](#Section 4.1.6)).
[^262]: Logic programming has grown out of a long history of research in
automatic theorem proving. Early theorem-proving programs could
accomplish very little, because they exhaustively searched the space
of possible proofs. The major breakthrough that made such a search
plausible was the discovery in the early 1960s of the *unification
algorithm* and the *resolution principle* ([Robinson
1965](#Robinson 1965)). Resolution was used, for example, by [Green
and Raphael (1968)](#Green and Raphael (1968)) (see also [Green
1969](#Green 1969)) as the basis for a deductive question-answering
system. During most of this period, researchers concentrated on
algorithms that are guaranteed to find a proof if one exists. Such
algorithms were difficult to control and to direct toward a proof.
[Hewitt (1969)](#Hewitt (1969)) recognized the possibility of
merging the control structure of a programming language with the
operations of a logic-manipulation system, leading to the work in
automatic search mentioned in [Section 4.3.1](#Section 4.3.1)
([Footnote 4.47](#Footnote 4.47)). At the same time that this was
being done, Colmerauer, in Marseille, was developing rule-based
systems for manipulating natural language (see [Colmerauer et al.
1973](#Colmerauer et al. 1973)). He invented a programming language
called Prolog for representing those rules. [Kowalski (1973;
1979)](#Kowalski (1973; 1979)), in Edinburgh, recognized that
execution of a Prolog program could be interpreted as proving
theorems (using a proof technique called linear Horn-clause
resolution). The merging of the last two strands led to the
logic-programming movement. Thus, in assigning credit for the
development of logic programming, the French can point to Prolog's
genesis at the University of Marseille, while the British can
highlight the work at the University of Edinburgh. According to
people at MIT, logic programming was developed by
these groups in an attempt to figure out what Hewitt was talking
about in his brilliant but impenetrable Ph.D. thesis. For a history
of logic programming, see [Robinson 1983](#Robinson 1983).
[^263]: To see the correspondence between the rules and the procedure,
let `x` in the procedure (where `x` is nonempty) correspond to
`(cons u v)` in the rule. Then `z` in the rule corresponds to the
`append` of `(cdr x)` and `y`.
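For concreteness, here are the procedure and a pair of rules of the kind this footnote is comparing; the rule name `append-to-form` follows the book's usual formulation and should be read as a sketch:
::: smallscheme
(define (append x y)
  (if (null? x)
      y
      (cons (car x) (append (cdr x) y))))

(rule (append-to-form () ?y ?y))
(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
:::
In the second rule, `(?u . ?v)` plays the role of the nonempty `x`, and `?z` corresponds to the `append` of `?v` (that is, `(cdr x)`) and `?y`.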
[^264]: This certainly does not relieve the user of the entire problem
of how to compute the answer. There are many different
mathematically equivalent sets of rules for formulating the `append`
relation, only some of which can be turned into effective devices
for computing in any direction. In addition, sometimes "what is"
information gives no clue "how to" compute an answer. For example,
consider the problem of computing the $y$ such that $y^2 = x$.
[^265]: Interest in logic programming peaked during the early 80s when
the Japanese government began an ambitious project aimed at building
superfast computers optimized to run logic programming languages.
The speed of such computers was to be measured in LIPS (Logical
Inferences Per Second) rather than the usual FLOPS (FLoating-point
Operations Per Second). Although the project succeeded in developing
hardware and software as originally planned, the international
computer industry moved in a different direction. See [Feigenbaum
and Shrobe 1993](#Feigenbaum and Shrobe 1993) for an overview
evaluation of the Japanese project. The logic programming community
has also moved on to consider relational programming based on
techniques other than simple pattern matching, such as the ability
to deal with numerical constraints such as the ones illustrated in
the constraint-propagation system of [Section
3.3.5](#Section 3.3.5).
[^266]: This uses the dotted-tail notation introduced in [Exercise
2.20](#Exercise 2.20).
[^267]: Actually, this description of `not` is valid only for simple
cases. The real behavior of `not` is more complex. We will examine
`not`'s peculiarities in sections [Section 4.4.2](#Section 4.4.2)
and [Section 4.4.3](#Section 4.4.3).
[^268]: `lisp/value` should be used only to perform an operation not
provided in the query language. In particular, it should not be used
to test equality (since that is what the matching in the query
language is designed to do) or inequality (since that can be done
with the `same` rule shown below).
[^269]: Notice that we do not need `same` in order to make two things be
the same: We just use the same pattern variable for each---in
effect, we have one thing instead of two things in the first place.
For example, see `?town` in the `lives/near` rule and
`?middle/manager` in the `wheel` rule below. `same` is useful when
we want to force two things to be different, such as `?person/1` and
`?person/2` in the `lives/near` rule. Although using the same
pattern variable in two parts of a query forces the same value to
appear in both places, using different pattern variables does not
force different values to appear. (The values assigned to different
pattern variables may be the same or different.)
[^270]: We will also allow rules without bodies, as in `same`, and we
will interpret such a rule to mean that the rule conclusion is
satisfied by any values of the variables.
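For concreteness, the `same` rule and a `lives/near`-style rule of the kind these footnotes describe might look as follows (a sketch in the book's usual notation; hyphens replace the slashes used in the prose):
::: smallscheme
(rule (same ?x ?x))                     ; no body: holds whenever both parts unify

(rule (lives-near ?person-1 ?person-2)
      (and (address ?person-1 (?town . ?rest-1))
           (address ?person-2 (?town . ?rest-2))
           (not (same ?person-1 ?person-2))))
:::
The shared `?town` forces the two addresses to begin with the same town, while `(not (same ...))` rules out pairing a person with himself.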
[^271]: Because matching is generally very expensive, we would like to
avoid applying the full matcher to every element of the data base.
This is usually arranged by breaking up the process into a fast,
coarse match and the final match. The coarse match filters the data
base to produce a small set of candidates for the final match. With
care, we can arrange our data base so that some of the work of
coarse matching can be done when the data base is constructed rather
than when we want to select the candidates. This is called
*indexing* the data base. There is a vast technology built around
data-base-indexing schemes. Our implementation, described in
[Section 4.4.4](#Section 4.4.4), contains a simple-minded form of
such an optimization.
[^272]: But this kind of exponential explosion is not common in `and`
queries because the added conditions tend to reduce rather than
expand the number of frames produced.
[^273]: There is a large literature on data-base-management systems that
is concerned with how to handle complex queries efficiently.
[^274]: There is a subtle difference between this filter implementation
of `not` and the usual meaning of `not` in mathematical logic. See
[Section 4.4.3](#Section 4.4.3).
[^275]: In one-sided pattern matching, all the equations that contain
pattern variables are explicit and already solved for the unknown
(the pattern variable).
[^276]: Another way to think of unification is that it generates the
most general pattern that is a specialization of the two input
patterns. That is, the unification of `(?x a)` and `((b ?y) ?z)` is
`((b ?y) a)`, and the unification of `(?x a ?y)` and `(?y ?z a)`,
discussed above, is `(a a a)`. For our implementation, it is more
convenient to think of the result of unification as a frame rather
than a pattern.
[^277]: Since unification is a generalization of matching, we could
simplify the system by using the unifier to produce both streams.
Treating the easy case with the simple matcher, however, illustrates
how matching (as opposed to full-blown unification) can be useful in
its own right.
[^278]: The reason we use streams (rather than lists) of frames is that
the recursive application of rules can generate infinite numbers of
values that satisfy a query. The delayed evaluation embodied in
streams is crucial here: The system will print responses one by one
as they are generated, regardless of whether there are a finite or
infinite number of responses.
[^279]: That a particular method of inference is legitimate is not a
trivial assertion. One must prove that if one starts with true
premises, only true conclusions can be derived. The method of
inference represented by rule applications is *modus ponens*, the
familiar method of inference that says that if $A$ is true and *A
implies B* is true, then we may conclude that $B$ is true.
[^280]: We must qualify this statement by agreeing that, in speaking of
the "inference" accomplished by a logic program, we assume that the
computation terminates. Unfortunately, even this qualified statement
is false for our implementation of the query language (and also
false for programs in Prolog and most other current logic
programming languages) because of our use of `not` and `lisp/value`.
As we will describe below, the `not` implemented in the query
language is not always consistent with the `not` of mathematical
logic, and `lisp/value` introduces additional complications. We
could implement a language consistent with mathematical logic by
simply removing `not` and `lisp/value` from the language and
agreeing to write programs using only simple queries, `and`, and
`or`. However, this would greatly restrict the expressive power of
the language. One of the major concerns of research in logic
programming is to find ways to achieve more consistency with
mathematical logic without unduly sacrificing expressive power.
[^281]: This is not a problem of the logic but one of the procedural
interpretation of the logic provided by our interpreter. We could
write an interpreter that would not fall into a loop here. For
example, we could enumerate all the proofs derivable from our
assertions and our rules in a breadth-first rather than a
depth-first order. However, such a system makes it more difficult to
take advantage of the order of deductions in our programs. One
attempt to build sophisticated control into such a program is
described in [deKleer et al. 1977](#deKleer et al. 1977). Another
technique, which does not lead to such serious control problems, is
to put in special knowledge, such as detectors for particular kinds
of loops ([Exercise 4.67](#Exercise 4.67)). However, there can be no
general scheme for reliably preventing a system from going down
infinite paths in performing deductions. Imagine a diabolical rule
of the form "To show $P(x)$ is true, show that $P(f(x))$ is true,"
for some suitably chosen function $f$.
[^282]: Consider the query `(not (baseball/fan (Bitdiddle Ben)))`. The
system finds that `(baseball/fan (Bitdiddle Ben))` is not in the
data base, so the empty frame does not satisfy the pattern and is
not filtered out of the initial stream of frames. The result of the
query is thus the empty frame, which is used to instantiate the
input query to produce `(not (baseball/fan (Bitdiddle Ben)))`.
[^283]: A discussion and justification of this treatment of `not` can be
found in the article by [Clark (1978)](#Clark (1978)).
[^284]: In general, unifying `?y` with an expression involving `?y`
would require our being able to find a fixed point of the equation
`?y` = $\langle$*expression involving `?y`*$\rangle$. It is
sometimes possible to syntactically form an expression that appears
to be the solution. For example, `?y` = `(f ?y)` seems to have the
fixed point `(f (f (f `$\dots$` )))`, which we can produce by
beginning with the expression `(f ?y)` and repeatedly substituting
`(f ?y)` for `?y`. Unfortunately, not every such equation has a
meaningful fixed point. The issues that arise here are similar to
the issues of manipulating infinite series in mathematics. For
example, we know that 2 is the solution to the equation
$y = 1 + y / 2$. Beginning with the expression $1 + y / 2$ and
repeatedly substituting $1 + y / 2$ for $y$ gives
$$2 = y = 1 + {y \over 2} = 1 + {1\over2}\left(1 + {y \over 2}\right) =
1 + {1\over2} + {y \over 4} = \dots ,$$
which leads to
$$2 = 1 + {1\over2} + {1\over4} + {1\over8} + \dots.$$
However, if we try the same manipulation beginning with the
observation that -1 is the solution to the equation $y = 1 + 2y$, we
obtain
$$-1 = y = 1 + 2y = 1 + 2(1 + 2y) = 1 + 2 + 4y = \dots,$$
which leads to
$$-1 = 1 + 2 + 4 + 8 + \dots.$$
Although the formal manipulations used in deriving these two
equations are identical, the first result is a valid assertion about
infinite series but the second is not. Similarly, for our
unification results, reasoning with an arbitrary syntactically
constructed expression may lead to errors.
[^285]: Most Lisp systems give the user the ability to modify the
ordinary `read` procedure to perform such transformations by
defining *reader macro characters*. Quoted expressions are already
handled in this way: The reader automatically translates
`'expression` into `(quote expression)` before the evaluator sees
it. We could arrange for `?expression` to be transformed into
`(? expression)` in the same way; however, for the sake of clarity
we have included the transformation procedure here explicitly.
`expand/question/mark` and `contract/question/mark` use several
procedures with `string` in their names. These are Scheme
primitives.
[^286]: This assumption glosses over a great deal of complexity. Usually
a large portion of the implementation of a Lisp system is dedicated
to making reading and printing work.
[^287]: One might argue that we don't need to save the old `n`; after we
decrement it and solve the subproblem, we could simply increment it
to recover the old value. Although this strategy works for
factorial, it cannot work in general, since the old value of a
register cannot always be computed from the new one.
[^288]: In [Section 5.3](#Section 5.3) we will see how to implement a
stack in terms of more primitive operations.
[^289]: Using the `receive` procedure here is a way to get
`extract/labels` to effectively return two values---`labels` and
`insts`---without explicitly making a compound data structure to
hold them. An alternative implementation, which returns an explicit
pair of values, is
::: smallscheme
(define (extract-labels text)
  (if (null? text)
      (cons '() '())
      (let ((result (extract-labels (cdr text))))
        (let ((insts (car result))
              (labels (cdr result)))
          (let ((next-inst (car text)))
            (if (symbol? next-inst)
                (cons insts
                      (cons (make-label-entry next-inst insts)
                            labels))
                (cons (cons (make-instruction next-inst) insts)
                      labels)))))))
:::
which would be called by `assemble` as follows:
::: smallscheme
(define (assemble controller-text machine)
  (let ((result (extract-labels controller-text)))
    (let ((insts (car result))
          (labels (cdr result)))
      (update-insts! insts labels machine)
      insts)))
:::
You can consider our use of `receive` as demonstrating an elegant
way to return multiple values, or simply an excuse to show off a
programming trick. An argument like `receive` that is the next
procedure to be invoked is called a "continuation." Recall that we
also used continuations to implement the backtracking control
structure in the `amb` evaluator in [Section 4.3.3](#Section 4.3.3).
[^290]: We could represent memory as lists of items. However, the access
time would then not be independent of the index, since accessing the
$n^{\mathrm{th}}$ element of a list requires $n - 1$ `cdr`
operations.
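A small illustration of why the access time grows with the index: reaching element number $n$ (counting from zero) takes $n$ `cdr` operations, as in the familiar `list-ref`:
::: smallscheme
(define (list-ref items n)
  (if (= n 0)
      (car items)
      (list-ref (cdr items) (- n 1))))

;; (list-ref '(a b c d) 3)  =>  d, after three cdr operations
:::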
[^291]: For completeness, we should specify a `make/vector` operation
that constructs vectors. However, in the present application we will
use vectors only to model fixed divisions of the computer memory.
[^292]: This is precisely the same "tagged data" idea we introduced in
[Chapter 2](#Chapter 2) for dealing with generic operations. Here,
however, the data types are included at the primitive machine level
rather than constructed through the use of lists.
[^293]: Type information may be encoded in a variety of ways, depending
on the details of the machine on which the Lisp system is to be
implemented. The execution efficiency of Lisp programs will be
strongly dependent on how cleverly this choice is made, but it is
difficult to formulate general design rules for good choices. The
most straightforward way to implement typed pointers is to allocate
a fixed set of bits in each pointer to be a *type field* that
encodes the data type. Important questions to be addressed in
designing such a representation include the following: How many type
bits are required? How large must the vector indices be? How
efficiently can the primitive machine instructions be used to
manipulate the type fields of pointers? Machines that include
special hardware for the efficient handling of type fields are said
to have *tagged architectures*.
[^294]: This decision on the representation of numbers determines
whether `eq?`, which tests equality of pointers, can be used to test
for equality of numbers. If the pointer contains the number itself,
then equal numbers will have the same pointer. But if the pointer
contains the index of a location where the number is stored, equal
numbers will be guaranteed to have equal pointers only if we are
careful never to store the same number in more than one location.
[^295]: This is just like writing a number as a sequence of digits,
except that each "digit" is a number between 0 and the largest
number that can be stored in a single pointer.
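A toy sketch of the idea: represent a large number as a list of "digits," least-significant first, in some radix small enough that each digit fits in a single pointer (the radix 65536 below is purely illustrative):
::: smallscheme
(define radix 65536)                  ; illustrative chunk size

(define (chunks->number chunks)       ; least-significant digit first
  (if (null? chunks)
      0
      (+ (car chunks)
         (* radix (chunks->number (cdr chunks))))))

;; (chunks->number '(1 2))  =>  131073, that is, 1 + 2 * 65536
:::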
[^296]: There are other ways of finding free storage. For example, we
could link together all the unused pairs into a *free list*. Our
free locations are consecutive (and hence can be accessed by
incrementing a pointer) because we are using a compacting garbage
collector, as we will see in [Section 5.3.2](#Section 5.3.2).
[^297]: This is essentially the implementation of `cons` in terms of
`set/car!` and `set/cdr!`, as described in [Section
3.3.1](#Section 3.3.1). The operation `get/new/pair` used in that
implementation is realized here by the `free` pointer.
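That implementation is along these lines (as in [Section 3.3.1](#Section 3.3.1)), with `get-new-pair` supplying a fresh, unused pair (realized here by advancing the `free` pointer):
::: smallscheme
(define (cons x y)
  (let ((new (get-new-pair)))   ; obtain an unused pair
    (set-car! new x)
    (set-cdr! new y)
    new))
:::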
[^298]: This may not be true eventually, because memories may get large
enough so that it would be impossible to run out of free memory in
the lifetime of the computer. For example, there are about
$3\cdot10^{13}$ microseconds in a year, so if we were to `cons` once
per microsecond we would need about $10^{15}$ cells of memory to
build a machine that could operate for 30 years without running out
of memory. That much memory seems absurdly large by today's
standards, but it is not physically impossible. On the other hand,
processors are getting faster and a future computer may have large
numbers of processors operating in parallel on a single memory, so
it may be possible to use up memory much faster than we have
postulated.
[^299]: We assume here that the stack is represented as a list as
described in [Section 5.3.1](#Section 5.3.1), so that items on the
stack are accessible via the pointer in the stack register.
[^300]: This idea was invented and first implemented by Minsky, as part
of the implementation of Lisp for the PDP-1 at the MIT Research
Laboratory of Electronics. It was further
developed by [Fenichel and Yochelson
(1969)](#Fenichel and Yochelson (1969)) for use in the Lisp
implementation for the Multics time-sharing system. Later, [Baker
(1978)](#Baker (1978)) developed a "real-time" version of the
method, which does not require the computation to stop during
garbage collection. Baker's idea was extended by Hewitt, Lieberman,
and Moon (see [Lieberman and Hewitt
1983](#Lieberman and Hewitt 1983)) to take advantage of the fact
that some structure is more volatile and other structure is more
permanent.
An alternative commonly used garbage-collection technique is the
*mark-sweep* method. This consists of tracing all the structure
accessible from the machine registers and marking each pair we
reach. We then scan all of memory, and any location that is unmarked
is "swept up" as garbage and made available for reuse. A full
discussion of the mark-sweep method can be found in [Allen
1978](#Allen 1978).
The Minsky-Fenichel-Yochelson algorithm is the dominant algorithm in
use for large-memory systems because it examines only the useful
part of memory. This is in contrast to mark-sweep, in which the
sweep phase must check all of memory. A second advantage of
stop-and-copy is that it is a *compacting* garbage collector. That
is, at the end of the garbage-collection phase the useful data will
have been moved to consecutive memory locations, with all garbage
pairs compressed out. This can be an extremely important performance
consideration in machines with virtual memory, in which accesses to
widely separated memory addresses may require extra paging
operations.
[^301]: This list of registers does not include the registers used by
the storage-allocation system---`root`, `the/cars`, `the/cdrs`, and
the other registers that will be introduced in this section.
[^302]: The term *broken heart* was coined by David Cressey, who wrote a
garbage collector for MDL, a dialect of Lisp developed at MIT during
the early 1970s.
[^303]: The garbage collector uses the low-level predicate
`pointer/to/pair?` instead of the list-structure `pair?` operation
because in a real system there might be various things that are
treated as pairs for garbage-collection purposes. For example, in a
Scheme system that conforms to the IEEE standard a
procedure object may be implemented as a special kind of "pair" that
doesn't satisfy the `pair?` predicate. For simulation purposes,
`pointer/to/pair?` can be implemented as `pair?`.
[^304]: See [Batali et al. 1982](#Batali et al. 1982) for more
information on the chip and the method by which it was designed.
[^305]: In our controller, the dispatch is written as a sequence of
`test` and `branch` instructions. Alternatively, it could have been
written in a data-directed style (and in a real system it probably
would have been) to avoid the need to perform sequential tests and
to facilitate the definition of new expression types. A machine
designed to run Lisp would probably include a `dispatch/on/type`
instruction that would efficiently execute such data-directed
dispatches.
[^306]: This is an important but subtle point in translating algorithms
from a procedural language, such as Lisp, to a register-machine
language. As an alternative to saving only what is needed, we could
save all the registers (except `val`) before each recursive call.
This is called a *framed-stack* discipline. This would work but
might save more registers than necessary; this could be an important
consideration in a system where stack operations are expensive.
Saving registers whose contents will not be needed later may also
hold onto useless data that could otherwise be garbage-collected,
freeing space to be reused.
[^307]: We add to the evaluator data-structure procedures in [Section
4.1.3](#Section 4.1.3) the following two procedures for manipulating
argument lists:
::: smallscheme
(define (empty-arglist) '())

(define (adjoin-arg arg arglist)
  (append arglist (list arg)))
:::
We also use an additional syntax procedure to test for the last
operand in a combination:
::: smallscheme
(define (last-operand? ops) (null? (cdr ops)))
:::
[^308]: The optimization of treating the last operand specially is known
as *evlis tail recursion* (see [Wand 1980](#Wand 1980)). We could be
somewhat more efficient in the argument evaluation loop if we made
evaluation of the first operand a special case too. This would
permit us to postpone initializing `argl` until after evaluating the
first operand, so as to avoid saving `argl` in this case. The
compiler in [Section 5.5](#Section 5.5) performs this optimization.
(Compare the `construct/arglist` procedure of [Section
5.5.3](#Section 5.5.3).)
[^309]: The order of operand evaluation in the metacircular evaluator is
determined by the order of evaluation of the arguments to `cons` in
the procedure `list/of/values` of [Section 4.1.1](#Section 4.1.1)
(see [Exercise 4.1](#Exercise 4.1)).
[^310]: We saw in [Section 5.1](#Section 5.1) how to implement such a
process with a register machine that had no stack; the state of the
process was stored in a fixed set of registers.
[^311]: This implementation of tail recursion in `ev/sequence` is one
variety of a well-known optimization technique used by many
compilers. In compiling a procedure that ends with a procedure call,
one can replace the call by a jump to the called procedure's entry
point. Building this strategy into the interpreter, as we have done
in this section, provides the optimization uniformly throughout the
language.
[^312]: We can define `no/more/exps?` as follows:
::: smallscheme
(define (no-more-exps? seq) (null? seq))
:::
[^313]: This isn't really cheating. In an actual implementation built
from scratch, we would use our explicit-control evaluator to
interpret a Scheme program that performs source-level
transformations like `cond/>if` in a syntax phase that runs before
execution.
[^314]: We assume here that `read` and the various printing operations
are available as primitive machine operations, which is useful for
our simulation, but completely unrealistic in practice. These are
actually extremely complex operations. In practice, they would be
implemented using low-level input-output operations such as
transferring single characters to and from a device.
To support the `get/global/environment` operation we define
::: smallscheme
(define the-global-environment (setup-environment))

(define (get-global-environment) the-global-environment)
:::
[^315]: There are other errors that we would like the interpreter to
handle, but these are not so simple. See [Exercise
5.30](#Exercise 5.30).
[^316]: We could perform the stack initialization only after errors, but
doing it in the driver loop will be convenient for monitoring the
evaluator's performance, as described below.
[^317]: Regrettably, this is the normal state of affairs in conventional
compiler-based language systems such as C. In UNIX(tm) the system
"dumps core," and in DOS/Windows(tm) it
becomes catatonic. The Macintosh(tm) displays a picture of an
exploding bomb and offers you the opportunity to reboot the
computer---if you're lucky.
[^318]: This is a theoretical statement. We are not claiming that the
evaluator's data paths are a particularly convenient or efficient
set of data paths for a general-purpose computer. For example, they
are not very good for implementing high-performance floating-point
calculations or calculations that intensively manipulate bit
vectors.
[^319]: Actually, the machine that runs compiled code can be simpler
than the interpreter machine, because we won't use the `exp` and
`unev` registers. The interpreter used these to hold pieces of
unevaluated expressions. With the compiler, however, these
expressions get built into the compiled code that the register
machine will run. For the same reason, we don't need the machine
operations that deal with expression syntax. But compiled code will
use a few additional machine operations (to represent compiled
procedure objects) that didn't appear in the explicit-control
evaluator machine.
[^320]: Notice, however, that our compiler is a Scheme program, and the
syntax procedures that it uses to manipulate expressions are the
actual Scheme procedures used with the metacircular evaluator. For
the explicit-control evaluator, in contrast, we assumed that
equivalent syntax operations were available as operations for the
register machine. (Of course, when we simulated the register machine
in Scheme, we used the actual Scheme procedures in our register
machine simulation.)
[^321]: This procedure uses a feature of Lisp called *backquote* (or
*quasiquote*) that is handy for constructing lists. Preceding a list
with a backquote symbol is much like quoting it, except that
anything in the list that is flagged with a comma is evaluated.
For example, if the value of `linkage` is the symbol `branch25`,
then the expression
::: smallscheme
`((goto (label ,linkage)))
:::
evaluates to the list
::: smallscheme
((goto (label branch25)))
:::
Similarly, if the value of `x` is the list `(a b c)`, then
::: smallscheme
`(1 2 ,(car x))
:::
evaluates to the list
::: smallscheme
(1 2 a)
:::
[^322]: We can't just use the labels `true/branch`, `false/branch`, and
`after/if` as shown above, because there might be more than one `if`
in the program. The compiler uses the procedure `make/label` to
generate labels. `make/label` takes a symbol as argument and returns
a new symbol that begins with the given symbol. For example,
successive calls to `(make/label 'a)` would return `a1`, `a2`, and
so on. `make/label` can be implemented similarly to the generation
of unique variable names in the query language, as follows:
::: smallscheme
(define label-counter 0)

(define (new-label-number)
  (set! label-counter (+ 1 label-counter))
  label-counter)

(define (make-label name)
  (string->symbol
   (string-append (symbol->string name)
                  (number->string (new-label-number)))))
:::
[^323]: []{#Footnote 38 label="Footnote 38"} We need machine operations
to implement a data structure for representing compiled procedures,
analogous to the structure for compound procedures described in
[Section 4.1.3](#Section 4.1.3):
::: smallscheme
(define (make-compiled-procedure entry env)
  (list 'compiled-procedure entry env))

(define (compiled-procedure? proc)
  (tagged-list? proc 'compiled-procedure))

(define (compiled-procedure-entry c-proc) (cadr c-proc))

(define (compiled-procedure-env c-proc) (caddr c-proc))
:::
[^324]: Actually, we signal an error when the target is not `val` and
the linkage is `return`, since the only place we request `return`
linkages is in compiling procedures, and our convention is that
procedures return their values in `val`.
[^325]: Making a compiler generate tail-recursive code might seem like a
straightforward idea. But most compilers for common languages,
including C and Pascal, do not do this, and therefore these
languages cannot represent iterative processes in terms of procedure
call alone. The difficulty with tail recursion in these languages is
that their implementations use the stack to store procedure
arguments and local variables as well as return addresses. The
Scheme implementations described in this book store arguments and
variables in memory to be garbage-collected. The reason for using
the stack for variables and arguments is that it avoids the need for
garbage collection in languages that would not otherwise require it,
and is generally believed to be more efficient. Sophisticated Lisp
compilers can, in fact, use the stack for arguments without
destroying tail recursion. (See [Hanson 1990](#Hanson 1990) for a
description.) There is also some debate about whether stack
allocation is actually more efficient than garbage collection in the
first place, but the details seem to hinge on fine points of
computer architecture. (See [Appel 1987](#Appel 1987) and [Miller
and Rozas 1994](#Miller and Rozas 1994) for opposing views on this
issue.)
[^326]: The variable `all/regs` is bound to the list of names of all the
registers:
::: smallscheme
(define all-regs '(env proc val argl continue))
:::
[^327]: Note that `preserving` calls `append` with three arguments.
Though the definition of `append` shown in this book accepts only
two arguments, Scheme standardly provides an `append` procedure that
takes an arbitrary number of arguments.
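If one wanted to build an n-argument version from the two-argument `append` of this book, one hypothetical sketch (the name `append-n` is not from the text) is:
::: smallscheme
(define (append-n . lists)
  (cond ((null? lists) '())
        ((null? (cdr lists)) (car lists))
        (else (append (car lists)
                      (apply append-n (cdr lists))))))

;; (append-n '(a) '(b c) '(d))  =>  (a b c d)
:::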
[^328]: We have used the same symbol `+` here to denote both the
source-language procedure and the machine operation. In general
there will not be a one-to-one correspondence between primitives of
the source language and primitives of the machine.
[^329]: Making the primitives into reserved words is in general a bad
idea, since a user cannot then rebind these names to different
procedures. Moreover, if we add reserved words to a compiler that is
in use, existing programs that define procedures with these names
will stop working. See [Exercise 5.44](#Exercise 5.44) for ideas on
how to avoid this problem.
[^330]: This is not true if we allow internal definitions, unless we
scan them out. See [Exercise 5.43](#Exercise 5.43).
[^331]: This is the modification to variable lookup required if we
implement the scanning method to eliminate internal definitions
([Exercise 5.43](#Exercise 5.43)). We will need to eliminate these
definitions in order for lexical addressing to work.
[^332]: Lexical addresses cannot be used to access variables in the
global environment, because these names can be defined and redefined
interactively at any time. With internal definitions scanned out, as
in [Exercise 5.43](#Exercise 5.43), the only definitions the
compiler sees are those at top level, which act on the global
environment. Compilation of a definition does not cause the defined
name to be entered in the compile-time environment.
[^333]: Of course, compiled procedures as well as interpreted procedures
are compound (nonprimitive). For compatibility with the terminology
used in the explicit-control evaluator, in this section we will use
"compound" to mean interpreted (as opposed to compiled).
[^334]: Now that the evaluator machine starts with a `branch`, we must
always initialize the `flag` register before starting the evaluator
machine. To start the machine at its ordinary read-eval-print loop,
we could use
::: smallscheme
(define (start-eceval)
  (set! the-global-environment (setup-environment))
  (set-register-contents! eceval 'flag false)
  (start eceval))
:::
[^335]: Since a compiled procedure is an object that the system may try
to print, we also modify the system print operation `user/print`
(from [Section 4.1.4](#Section 4.1.4)) so that it will not attempt
to print the components of a compiled procedure:
::: smallscheme
(define (user-print object)
  (cond ((compound-procedure? object)
         (display (list 'compound-procedure
                        (procedure-parameters object)
                        (procedure-body object)
                        '<procedure-env>)))
        ((compiled-procedure? object)
         (display '<compiled-procedure>))
        (else (display object))))
:::
[^336]: We can do even better by extending the compiler to allow
compiled code to call interpreted procedures. See [Exercise
5.47](#Exercise 5.47).
[^337]: Independent of the strategy of execution, we incur significant
overhead if we insist that errors encountered in execution of a user
program be detected and signaled, rather than being allowed to kill
the system or produce wrong answers. For example, an out-of-bounds
array reference can be detected by checking the validity of the
reference before performing it. The overhead of checking, however,
can be many times the cost of the array reference itself, and a
programmer should weigh speed against safety in determining whether
such a check is desirable. A good compiler should be able to produce
code with such checks, should avoid redundant checks, and should
allow programmers to control the extent and type of error checking
in the compiled code.
Compilers for popular languages, such as C and C++, put hardly any
error-checking operations into running code, so as to make things
run as fast as possible. As a result, it falls to programmers to
explicitly provide error checking. Unfortunately, people often
neglect to do this, even in critical applications where speed is not
a constraint. Their programs lead fast and dangerous lives. For
example, the notorious "Worm" that paralyzed the Internet in 1988
exploited the UNIX(tm) operating system's failure to
check whether the input buffer has overflowed in the finger daemon.
(See [Spafford 1989](#Spafford 1989).)
[^338]: Of course, with either the interpretation or the compilation
strategy we must also implement for the new machine storage
allocation, input and output, and all the various operations that we
took as "primitive" in our discussion of the evaluator and compiler.
One strategy for minimizing work here is to write as many of these
operations as possible in Lisp and then compile them for the new
machine. Ultimately, everything reduces to a small kernel (such as
garbage collection and the mechanism for applying actual machine
primitives) that is hand-coded for the new machine.
[^339]: This strategy leads to amusing tests of correctness of the
compiler, such as checking whether the compilation of a program on
the new machine, using the compiled compiler, is identical with the
compilation of the program on the original Lisp system. Tracking
down the source of differences is fun but often frustrating, because
the results are extremely sensitive to minuscule details.
|