Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
15,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 1
Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file.
Get Data - We will learn how to read in the text file. The data consist of baby names and the number of babies born with each name in the year 1880.
Prepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalies. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records.
Analyze Data - We will simply find the most popular name in a specific year.
Present Data - Through tabular data and a graph, clearly show the end user what is the most popular name in a specific year.
The pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
Step1: Create Data
The data set will consist of 5 baby names and the number of births recorded for that year (1880).
Step2: To merge these two lists together we will use the zip function.
Step3: We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file.
df will be a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Let's take a look below at the contents inside df.
Step4: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location as the notebook unless specified otherwise.
Step5: The only parameters we will use are index and header. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
Step6: Get Data
To pull in the csv file, we will use the pandas function read_csv. Let us take a look at this function and what inputs it takes.
Step7: Even though this function has many parameters, we will simply pass it the location of the text file.
Location = C
Step8: Notice the r before the string. Since backslashes are special characters in Python strings, prefixing the string with an r makes it a raw string, so the backslashes are treated literally rather than as escape characters.
Step9: This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
To correct this we will pass the header parameter to the read_csv function and set it to None (which means null in Python).
Step10: If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.
Step11: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates.
[Names, Births] can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database.
Delete the csv file now that we are done using it.
Step12: Prepare Data
The data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values).
The Names column at this point is of no concern since it most likely is just composed of alpha numeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The Births column should just contain integers representing the number of babies born in a specific year with a specific name. We can check whether all the data in this column is of the data type integer. It would not make sense to have this column have a data type of float. I would not worry about any possible outliers at this point of the analysis.
Realize that aside from the check we did on the "Names" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.
Step13: As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.
Analyze Data
To find the most popular name or the baby name with the highest number of births, we can do one of the following.
Sort the dataframe and select the top row
Use the max() method to find the maximum value
Step14: Present Data
Here we can plot the Births column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that Mel is the most popular baby name in the data set.
plot() is a convenient method that lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now finding the actual baby name that goes with the 973 value looks a bit tricky, so let's go over it.
Explain the pieces | Python Code:
# Import all libraries needed for the tutorial
# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv
# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number
# Enable inline plotting
%matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
Explanation: Lesson 1
Create Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file.
Get Data - We will learn how to read in the text file. The data consist of baby names and the number of babies born with each name in the year 1880.
Prepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalies. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records.
Analyze Data - We will simply find the most popular name in a specific year.
Present Data - Through tabular data and a graph, clearly show the end user what is the most popular name in a specific year.
The pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
End of explanation
# The initial set of baby names and birth counts
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
Explanation: Create Data
The data set will consist of 5 baby names and the number of births recorded for that year (1880).
End of explanation
zip?
BabyDataSet = list(zip(names, births))  # wrap the zip in list() so it can be displayed and reused (zip returns an iterator in Python 3)
BabyDataSet
Explanation: To merge these two lists together we will use the zip function.
End of explanation
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
Explanation: We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file.
df will be a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Let's take a look below at the contents inside df.
End of explanation
df.to_csv?
Explanation: Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location as the notebook unless specified otherwise.
End of explanation
df.to_csv('births1880.csv',index=False,header=False)
Explanation: The only parameters we will use are index and header. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
End of explanation
read_csv?
Explanation: Get Data
To pull in the csv file, we will use the pandas function read_csv. Let us take a look at this function and what inputs it takes.
End of explanation
Location = r'C:\Users\david\notebooks\pandas\births1880.csv'
df = pd.read_csv(Location)
Explanation: Even though this function has many parameters, we will simply pass it the location of the text file.
Location = C:\Users\ENTER_USER_NAME.xy\startups\births1880.csv
Note: Depending on where you save your notebooks, you may need to modify the location above.
End of explanation
df
Explanation: Notice the r before the string. Since backslashes are special characters in Python strings, prefixing the string with an r makes it a raw string, so the backslashes are treated literally rather than as escape characters (the short comparison below illustrates this).
End of explanation
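For example, the same characters behave quite differently with and without the r prefix (a throwaway path used only for comparison):
print('C:\new_folder')   # \n is interpreted as a newline character
print(r'C:\new_folder')  # the raw string keeps the backslash and the n as-is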
df = pd.read_csv(Location, header=None)
df
Explanation: This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names.
To correct this we will pass the header parameter to the read_csv function and set it to None (which means null in Python).
End of explanation
df = pd.read_csv(Location, names=['Names','Births'])
df
Explanation: If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.
End of explanation
import os
os.remove(Location)
Explanation: You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates.
[Names, Births] can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database (the quick check right after this explanation shows both the index and the column labels).
Delete the csv file now that we are done using it.
End of explanation
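For example, you can inspect the index and the column labels directly (assuming df still holds the table read in above):
print(df.index)
print(df.columns)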
# Check data type of the columns
df.dtypes
# Check data type of Births column
df.Births.dtype
Explanation: Prepare Data
The data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values).
The Names column at this point is of no concern since it most likely is just composed of alpha numeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The Births column should just contain integers representing the number of babies born in a specific year with a specific name. We can check whether all the data in this column is of the data type integer. It would not make sense to have this column have a data type of float. I would not worry about any possible outliers at this point of the analysis.
Realize that aside from the check we did on the "Names" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.
End of explanation
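If you prefer an explicit check over eyeballing the output, a one-line sanity check could look like this (a small optional sketch):
assert df['Births'].dtype == 'int64', 'Births should contain whole numbers only'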
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)  # df.sort() was removed in newer pandas versions; sort_values() is the current method
Sorted.head(1)
# Method 2:
df['Births'].max()
Explanation: As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.
Analyze Data
To find the most popular name or the baby name with the highest number of births, we can do one of the following.
Sort the dataframe and select the top row
Use the max() method to find the maximum value
End of explanation
# Create graph
df['Births'].plot()
# Maximum value in the data set
MaxValue = df['Births'].max()
# Name associated with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values
# Text to display on graph
Text = str(MaxValue) + " - " + MaxName
# Add text to graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
print "The most popular name"
df[df['Births'] == df['Births'].max()]
#Sorted.head(1) can also be used
Explanation: Present Data
Here we can plot the Births column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that Mel is the most popular baby name in the data set.
plot() is a convenient method that lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now finding the actual baby name that goes with the 973 value looks a bit tricky, so let's go over it.
Explain the pieces:
df['Names'] - This is the entire list of baby names, the entire Names column
df['Births'] - This is the entire list of Births in the year 1880, the entire Births column
df['Births'].max() - This is the maximum value found in the Births column
[df['Births'] == df['Births'].max()] IS EQUAL TO [Find all of the records in the Births column where it is equal to 973]
df['Names'][df['Births'] == df['Births'].max()] IS EQUAL TO Select all of the records in the Names column WHERE [The Births column is equal to 973]
An alternative way could have been to use the Sorted dataframe:
Sorted['Names'].head(1).value
The str() function simply converts an object into a string.
End of explanation |
15,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython notebook illustrates how to read the CSV files from disk as tables and set their metadata.
First, we need to import py_entitymatching package and other libraries as follows
Step1: Get the Path of the CSV File
First, we need to get the path of the CSV file on disk. For the convenience of the user, we have included some sample files in the package. The path of a sample CSV file can be obtained like this
Step2: Ways to Read a CSV File and Set Metadata
There are three different ways to read a CSV file and set metadata
Step3: Then set the metadata for the table. We see ID is the key attribute (since it contains unique values and no value is missing) for the table. We can set this metadata as follows
Step4: Now the CSV file is read into the memory and the metadata (i.e. key) is set for the table.
Read a CSV File and Set Metadata Together
In the above, we saw that we first read in the CSV file and then set the metadata. These two steps can be combined into a single step like this
Step5: Read a CSV File and Set Metadata from a File in Disk
The user can specify the metadata in a file.
This file MUST be in the same directory as the CSV file and the file name
should be the same, except the extension is set to '.metadata'. | Python Code:
import py_entitymatching as em
import pandas as pd
import os, sys
Explanation: This IPython notebook illustrates how to read the CSV files from disk as tables and set their metadata.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
# Display the contents of the file in path_A
!cat $path_A | head -3
Explanation: Get the Path of the CSV File
First, we need to get the path of the CSV file on disk. For the convenience of the user, we have included some sample files in the package. The path of a sample CSV file can be obtained like this:
End of explanation
A = em.read_csv_metadata(path_A)
A.head()
# Display the 'type' of A
type(A)
Explanation: Ways to Read a CSV File and Set Metadata
There are three different ways to read a CSV file and set metadata:
Read a CSV file first, and then set the metadata
Read a CSV file and set the metadata together
Read a CSV file and set the metadata from a file in disk
Read the CSV file First and Then Set the Metadata
First, read the CSV files as follows:
End of explanation
em.set_key(A, 'ID')
# Get the metadata that were set for table A
em.get_key(A)
Explanation: Then set the metadata for the table. We see ID is the key attribute (since it contains unique values and no value is missing) for the table. We can set this metadata as follows:
End of explanation
A = em.read_csv_metadata(path_A, key='ID')
# Display the 'type' of A
type(A)
# Get the metadata that were set for the table A
em.get_key(A)
Explanation: Now the CSV file is read into the memory and the metadata (i.e. key) is set for the table.
Read a CSV File and Set Metadata Together
In the above, we saw that we first read in the CSV file and then set the metadata. These two steps can be combined into a single step like this:
End of explanation
# We set the metadata for table A (stored in person_table_A.csv).
# Get the file name (with full path) where the metadata file must be stored
metadata_file = datasets_dir + os.sep + 'person_table_A.metadata'
# Specify the metadata for table A . Here we specify that 'ID' is the key attribute for the table.
# Note that this step requires write permission to the datasets directory.
!echo '#key=ID' > $metadata_file
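# A plain-Python equivalent of the echo command above (a small sketch; it makes the same
# assumption that you have write permission to the datasets directory):
with open(metadata_file, 'w') as f:
    f.write('#key=ID\n')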
# If you do not have write permissions to the datasets directory, first copy the file to the local directory and then create
# a metadata file like this:
# !cp $path_A .
# metadata_local_file = 'person_table_A.metadata'
# !echo '#key=ID' > $metadata_local_file
# Read the CSV file for table A
A = em.read_csv_metadata(path_A)
# Get the key for table A
em.get_key(A)
# Remove the metadata file
!rm $metadata_file
Explanation: Read a CSV File and Set Metadata from a File in Disk
The user can specify the metadata in a file.
This file MUST be in the same directory as the CSV file and the file name
should be the same, except the extension is set to '.metadata'.
End of explanation |
15,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Chapter 11
Step3: This could be the purpose of a function
Step5: If we execute the code above, we don't get any output. That's because we only told Python
Step8: 1.3.2 Calling a function from within another function
We can also define functions that call other functions, which is very helpful if we want to split our task into smaller, more manageable subtasks
Step9: You can do the same tricks that we learnt to apply on the built-in functions, like asking for help or for a function type
Step11: The help we get on a function will become more interesting once we learn about function inputs and outputs ;-)
1.4 Working with function input
1.4.1 Parameters and arguments
We use parameters and arguments to make a function execute a task depending on the input we provide. For instance, we can change the function above to input the name of a person and print a birthday song using this name. This results in a more generic function.
To understand how we use parameters and arguments, keep in mind the distinction between function definition and function call.
Parameter
Step12: We can also store the name in a variable
Step13: If we forget to specify the name, we get an error
Step15: Functions can have multiple parameters. We can for example multiply two numbers in a function (using the two parameters x and y) and then call the function by giving it two arguments
Step17: 1.4.2 Positional vs keyword parameters and arguments
The function definition tells Python which parameters are positional and which are keyword. As you might remember, positional means that you have to give an argument for that parameter; keyword means that you can give an argument value, but this is not necessary because there is a default value.
So, to summarize these two notes, we distinguish between
Step18: If we do not specify a value for a positional parameter, the function call will fail (with a very helpful error message)
Step20: 1.5 Output
Step21: We can also print the result directly (without assigning it to a variable), which gives us the same effect as using the print statements we used before
Step23: If we assign the result to a variable, but do not use the return statement, the function cannot return it. Instead, it returns None (as you can try out below).
This is important to realize
Step25: Returning multiple values
Similarly to the input, a function can also return multiple values as output. We call such a collection of values a tuple (does this term sound familiar ;-)?).
Step26: Make sure you actually save your 2 values into 2 variables, or else you end up with errors or unexpected behavior
Step28: Saving the resulting values in different variables can be useful when you want to use them in different places in your code
Step30: 1.6 Documenting your functions with docstrings
Docstring is a string that occurs as the first statement in a function definition.
For consistency, always use
Step33: You can see that this docstring describes the function goal, its parameters, its outputs, and the errors it raises.
It is a good practice to write a docstring for your functions, so we will always do this! For now we will stick with single-sentence docstrings
You can read more about this topic here, here, and here.
1.7 Debugging a function
Sometimes, it can be hard to write a function that works perfectly. A common practice in programming is to check whether the function performs as you expect it to do. The assert statement is one way of debugging your function. The syntax is as follows
Step34: If the function output is what you expect, Python will show nothing.
Step36: However, when the actual output is different from what we expected, we get an error. Let's say we made a mistake in writing the function.
Step37: 1.8 Storing a function in a Python module
Since Python functions are nice blocks of code with a clear focus, wouldn't it be nice if we could store them in a file? By doing this, we make our code visually very appealing since we are only left with function calls instead of function definitions.
Please open the file utils_chapter11.py (it is in the same folder as the notebook you are now reading). In it, you will find three of the functions that we've shown so far in this notebook. So, how can we use those functions? We can import the function using the following syntax
Step39: 2. Variable scope
Please note
Step41: Even when we return x, it does not exist outside of the function
Step43: Also consider this
Step45: In fact, this code has produced two completely unrelated x's!
So, you can not read a local variable outside of the local context. Nevertheless, it is possible to read a global variable from within a function, in a strictly read-only fashion.
Step47: You can use two built-in functions in Python when you are unsure whether a variable is local or global. The function locals() returns a dictionary of all local variables, and the function globals() a dictionary of all global variables. Note that there are many non-interesting system variables that these functions return, so in practice it is best to check for membership with the in operator. For example
Step50: Finally, note that the local context stays local to the function, and is not shared even with other functions called within a function, for example
Step51: We call the function setb() from the global context, and we call the function setb_again() from the context of the function setb(). The variable b in the function setb_again() is set to 3, but this does not affect the value of this variable in the function setb() which is still 2. And as we saw before, the changes in setb() do not influence the value of the global variable (b=1).
Exercises
Exercise 1
Step53: Exercise 2
Step55: Exercise 3
Step57: Exercise 4
Step59: Exercise 5
Step60: Exercise 6 | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_11_Functions_and_scope.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
Explanation: Chapter 11: Functions and scope
We use an example from this website to show you some of the basics of writing a function.
We use some materials from this other Python course.
We have seen that Python has several built-in functions (e.g. print() or max()). But you can also create a function. A function is a reusable block of code that performs a specific task. Once you have defined a function, you can use it at any place in your Python script. You can even import a function from an external module (as we will see in the next chapter). Therefore, they are beneficial for tasks that you will perform more often. Plus, functions are a convenient way to order your code and make it more readable!
At the end of this chapter, you will be able to:
write a function
work with function inputs
understand the difference between (keyword and positional) arguments and parameters
return zero, one, or multiple values
write function docstrings
understand the scope of variables
store your function in a Python module and call it
debug your functions
If you want to learn more about these topics, you might find the following link useful:
Tutorial: Defining Functions of your Own
The docstrings main formats
PEP 287 -- reStructured Docstring Format
Introduction to assert
Now let's get started!
If you have questions about this chapter, please contact us (cltl.python.course@gmail.com).
1. Writing a function
A function is an isolated chunk of code that has a name, gets zero or more parameters, and returns a value. In general, a function will do something for you based on the input parameters you pass it, and it will typically return a result. You are not limited to using functions available in the standard library or the ones provided by external parties. You can also write your own functions!
Whenever you are writing a function, you need to think of the following things:
* What is the purpose of the function?
* How should I name the function?
* What input does the function need?
* What output should the function generate?
1.1. Why use a function?
There are several good reasons why functions are a vital component of any non-ridiculous programmer:
encapsulation: wrapping a piece of useful code into a function so that it can be used without knowledge of the specifics
generalization: making a piece of code useful in varied circumstances through parameters
manageability: Dividing a complex program up into easy-to-manage chunks
maintainability: using meaningful names to make the program better readable and understandable
reusability: a good function may be useful in multiple programs
recursion!
1.2. How to define a function
Let's say we want to sing a birthday song to Emily. Then we print the following lines:
End of explanation
def happy_birthday_to_emily(): # Function definition
Print a birthday song to Emily.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
Explanation: This could be the purpose of a function: to print the lines of a birthday song for Emily.
Now, we define a function to do this. Here is how you define a function:
write def;
the name you would like to call your function;
a set of parentheses containing the parameter(s) of your function;
a colon;
a docstring describing what your function does;
the function definition;
ending with a return statement
Statements must be indented so that Python knows what belongs in the function and what not. Functions are only executed when you call them. It is good practice to define your functions at the top of your program or in another Python module.
We give the function a clear name, happy_birthday_to_emily, and we define the function as shown below. Note that we specify what it does in the docstring at the beginning of the function:
End of explanation
# function definition:
def happy_birthday_to_emily(): # Function definition
Print a birthday song to Emily.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
# function call:
print('Function call 1')
happy_birthday_to_emily()
print()
# We can call the function as many times as we want (but we define it only once)
print('Function call 2')
happy_birthday_to_emily()
print()
print('Function call 3')
happy_birthday_to_emily()
print()
# This will not call the function
print('This is not a function call')
happy_birthday_to_emily
Explanation: If we execute the code above, we don't get any output. That's because we only told Python: "Here's a function to do this, please remember it." If we actually want Python to execute everything inside this function, we have to call it:
1.3 How to call a function
It is important to distinguish between a function definition and a function call. We illustrate this in 1.3.1. You can also call functions from within other functions. This will become useful when you split up your code into small chunks that can be combined to solve a larger problem. This is illustrated in 1.3.2.
1.3.1) A simple function call
A function is defined once. After the definition, Python has remembered what this function does in its memory.
A function is executed/called as many times as we like. When calling a function, you should always use parentheses.
End of explanation
def new_line():
Print a new line.
print()
def two_new_lines():
Print two new lines.
new_line()
new_line()
print("Printing a single line...")
new_line()
print("Printing two lines...")
two_new_lines()
print("Printed two lines")
Explanation: 1.3.2 Calling a function from within another function
We can also define functions that call other functions, which is very helpful if we want to split our task into smaller, more manageable subtasks:
End of explanation
help(happy_birthday_to_emily)
type(happy_birthday_to_emily)
Explanation: You can do the same tricks that we learnt to apply on the built-in functions, like asking for help or for a function type:
End of explanation
# function definition with using the parameter `name'
def happy_birthday(name):
Print a birthday song with the "name" of the person inserted.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print(f"Happy Birthday, dear {name}.")
print("Happy Birthday to you!")
# function call using specifying the value of the argument
happy_birthday("James")
Explanation: The help we get on a function will become more interesting once we learn about function inputs and outputs ;-)
1.4 Working with function input
1.4.1 Parameters and arguments
We use parameters and arguments to make a function execute a task depending on the input we provide. For instance, we can change the function above to input the name of a person and print a birthday song using this name. This results in a more generic function.
To understand how we use parameters and arguments, keep in mind the distinction between function definition and function call.
Parameter: The variable name in the function definition below is a parameter. Variables used in function definitions are called parameters.
Argument: The variable my_name in the function call below is a value for the parameter name at the time when the function is called. We refer to such variables as arguments. We use arguments so we can direct the function to do different kinds of work when we call it at different times.
End of explanation
my_name="James"
happy_birthday(my_name)
Explanation: We can also store the name in a variable:
End of explanation
happy_birthday()
Explanation: If we forget to specify the name, we get an error:
End of explanation
def multiply(x, y):
Multiply two numeric values.
result = x * y
print(result)
multiply(2020,5278238)
multiply(2,3)
Explanation: Functions can have multiple parameters. We can for example multiply two numbers in a function (using the two parameters x and y) and then call the function by giving it two arguments:
End of explanation
def multiply(x, y, third_number=1): # x and y are positional parameters, third_number is a keyword parameter
Multiply two or three numbers and print the result.
result=x*y*third_number
print(result)
multiply(2,3) # We only specify values for the positional parameters
multiply(2,3,third_number=4) # We specify values for both the positional parameters, and the keyword parameter
Explanation: 1.4.2 Positional vs keyword parameters and arguments
The function definition tells Python which parameters are positional and which are keyword. As you might remember, positional means that you have to give an argument for that parameter; keyword means that you can give an argument value, but this is not necessary because there is a default value.
So, to summarize these two notes, we distinguish between:
1) positional parameters: (we indicate these when defining a function, and they are compulsory when calling the function)
2) keyword parameters: (we indicate these when defining a function, but they have a default value - and are optional when calling the function)
For example, if we want to have a function that can either multiply two or three numbers, we can make the third parameter a keyword parameter with a default of 1 (remember that any number multiplied with 1 results in that number):
End of explanation
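Keyword and positional arguments can also be passed by name when calling the function, in any order (a small sketch reusing the multiply function defined above):
multiply(x=2, y=3)                   # same as multiply(2, 3)
multiply(2, 3, third_number=10)      # keyword argument given explicitly
multiply(third_number=10, x=2, y=3)  # order does not matter once names are used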
multiply(3)
Explanation: If we do not specify a value for a positional parameter, the function call will fail (with a very helpful error message):
End of explanation
def multiply(x, y):
Multiply two numbers and return the result.
multiplied = x * y
return multiplied
#here we assign the returned value to variable z
result = multiply(2, 5)
print(result)
Explanation: 1.5 Output: the return statement
Functions can have a return statement. The return statement returns a value back to the caller and always ends the execution of the function. This also allows us to use the result of a function outside of that function by assigning it to a variable:
End of explanation
print(multiply(30,20))
Explanation: We can also print the result directly (without assigning it to a variable), which gives us the same effect as using the print statements we used before:
End of explanation
def multiply_no_return(x, y):
Multiply two numbers and does not return the result.
result = x * y
is_this_a_result = multiply_no_return(2,3)
print(is_this_a_result)
Explanation: If we assign the result to a variable, but do not use the return statement, the function cannot return it. Instead, it returns None (as you can try out below).
This is important to realize: even functions without a return statement do return a value, albeit a rather boring one. This value is called None (it’s a built-in name). You have seen this already with list methods - for example list.append(val) adds a value to a list, but does not return anything explicitly.
End of explanation
def calculate(x,y):
Calculate product and sum of two numbers.
product = x * y
summed = x + y
#we return a tuple of values
return product, summed
# the function returned a tuple and we unpack it to var1 and var2
var1, var2 = calculate(10,5)
print("product:",var1,"sum:",var2)
Explanation: Returning multiple values
Similarly to the input, a function can also return multiple values as output. We call such a collection of values a tuple (does this term sound familiar ;-)?).
End of explanation
#this will assign `var` to a tuple:
var = calculate(10,5)
print(var)
#this will generate an error
var1, var2, var3 = calculate(10,5)
Explanation: Make sure you actually save your 2 values into 2 variables, or else you end up with errors or unexpected behavior:
End of explanation
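A third option is to keep the whole tuple in one variable and index into it when you need the individual values (a quick sketch with the same function):
result = calculate(10, 5)
print(result[0])  # the product
print(result[1])  # the sum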
def sum_and_diff_len_strings(string1, string2):
Return the sum of and difference between the lengths of two strings.
sum_strings = len(string1) + len(string2)
diff_strings = len(string1) - len(string2)
return sum_strings, diff_strings
sum_strings, diff_strings = sum_and_diff_len_strings("horse", "dog")
print("Sum:", sum_strings)
print("Difference:", diff_strings)
Explanation: Saving the resulting values in different variables can be useful when you want to use them in different places in your code:
End of explanation
def my_function(param1, param2):
This is a reST style.
:param param1: this is a first param
:param param2: this is a second param
:returns: this is a description of what is returned
return
Explanation: 1.6 Documenting your functions with docstrings
Docstring is a string that occurs as the first statement in a function definition.
For consistency, always use triple double quotes around docstrings. Triple quotes are used even though the string fits on one line. This makes it easy to expand it later.
There's no blank line either before or after the docstring.
The docstring is a phrase ending in a period. It prescribes the function or method's effect as a command ("Do this", "Return that"), not as a description; e.g., don't write "Returns the pathname ...".
In practice, there are several formats for writing docstrings, and all of them contain more information than the single sentence description we mention here. Probably the most well-known format is reStructured Text. Here is an example of a function description in reStructured Text (reST):
End of explanation
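As promised earlier, help() becomes much more informative once a function is documented; a quick check on the my_function example above:
help(my_function)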
def is_even(p):
Check whether a number is even.
if p % 2 == 1:
return False
else:
return True
Explanation: You can see that this docstring describes the function goal, its parameters, its outputs, and the errors it raises.
It is a good practice to write a docstring for your functions, so we will always do this! For now we will stick with single-sentence docstrings
You can read more about this topic here, here, and here.
1.7 Debugging a function
Sometimes, it can be hard to write a function that works perfectly. A common practice in programming is to check whether the function performs as you expect it to do. The assert statement is one way of debugging your function. The syntax is as follows:
assert code == your expected output,message to show when code does not work as you'd expected
Let's try this on our simple function.
End of explanation
input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
Explanation: If the function output is what you expect, Python will show nothing.
End of explanation
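You can add as many of these checks as you like; for instance, any odd input should give False (two more quick checks on the same function):
assert is_even(3) == False, f'expected False, got {is_even(3)}'
assert is_even(10) == True, f'expected True, got {is_even(10)}'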
def is_even(p):
Check whether a number is even.
if p % 2 == 1:
return False
else:
return False
input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
Explanation: However, when the actual output is different from what we expected, we get an error. Let's say we made a mistake in writing the function.
End of explanation
from utils_chapter11 import happy_birthday
happy_birthday('George')
from utils_chapter11 import multiply
multiply(1,2)
from utils_chapter11 import is_even
is_it_even = is_even(5)
print(is_it_even)
Explanation: 1.8 Storing a function in a Python module
Since Python functions are nice blocks of code with a clear focus, wouldn't it be nice if we could store them in a file? By doing this, we make our code visually very appealing since we are only left with function calls instead of function definitions.
Please open the file utils_chapter11.py (it is in the same folder as the notebook you are now reading). In it, you will find three of the functions that we've shown so far in this notebook. So, how can we use those functions? We can import the function using the following syntax:
from NAME OF FILE WITHOUT .PY import function name
End of explanation
def setx():
Set the value of a variable to 1.
x = 1
setx()
print(x)
Explanation: 2. Variable scope
Please note: scope is a hard concept to grasp, but we think it is important to introduce it here. We will do our best to repeat it during the course.
Any variables you declare in a function, as well as the arguments that are passed to a function will only exist within the scope of that function, i.e., inside the function itself. The following code will produce an error, because the variable x does not exist outside of the function:
End of explanation
def setx():
Set the value of a variable to 1.
x = 1
return x
setx()
print(x)
Explanation: Even when we return x, it does not exist outside of the function:
End of explanation
x = 0
def setx():
Set the value of a variable to 1.
x = 1
setx()
print(x)
Explanation: Also consider this:
End of explanation
x = 1
def getx():
Print the value of a variable x.
print(x)
getx()
Explanation: In fact, this code has produced two completely unrelated x's!
So, you can not read a local variable outside of the local context. Nevertheless, it is possible to read a global variable from within a function, in a strictly read-only fashion.
End of explanation
a=3
b=2
def setb():
Set the value of a variable b to 11.
b=11
c=20
print("Is 'a' defined locally in the function:", 'a' in locals())
print("Is 'b' defined locally in the function:", 'b' in locals())
print("Is 'b' defined globally:", 'b' in globals())
setb()
print("Is 'a' defined globally:", 'a' in globals())
print("Is 'b' defined globally:", 'b' in globals())
print("Is 'c' defined globally:", 'c' in globals())
Explanation: You can use two built-in functions in Python when you are unsure whether a variable is local or global. The function locals() returns a dictionary of all local variables, and the function globals() a dictionary of all global variables. Note that there are many non-interesting system variables that these functions return, so in practice it is best to check for membership with the in operator. For example:
End of explanation
def setb_again():
Set the value of a variable to 3.
b=3
print("in 'setb_again' b =", b)
def setb():
Set the value of a variable b to 2.
b=2
setb_again()
print("in 'setb' b =", b)
b=1
setb()
print("global b =", b)
Explanation: Finally, note that the local context stays local to the function, and is not shared even with other functions called within a function, for example:
End of explanation
# you code here
Explanation: We call the function setb() from the global context, and we call the function setb_again() from the context of the function setb(). The variable b in the function setb_again() is set to 3, but this does not affect the value of this variable in the function setb() which is still 2. And as we saw before, the changes in setb() do not influence the value of the global variable (b=1).
Exercises
Exercise 1:
Write a function that converts meters to centimeters and prints the resulting value.
End of explanation
# function to modify:
def multiply(x, y, third_number=1):
Multiply two or three numbers and print the result.
result=x*y*third_number
print(result)
Explanation: Exercise 2:
Add another keyword parameter message to the multiply function, which will allow a user to print a message. The default value of this keyword parameter should be an empty string. Test this with 2 messages of your choice. Also test it without specifying a value for the keyword argument when calling a function.
End of explanation
def new_line():
Print a new line.
print()
# you code here
Explanation: Exercise 3:
Write a function called multiple_new_lines which takes as argument an integer and prints that many newlines by calling the function new_line.
End of explanation
def happy_birthday_to_you():
# your code here
# original function - replace the print statements by the happy_birthday_to_you() function:
def happy_birthday(name):
Print a birthday song with the "name" of the person inserted.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear " + name + ".")
print("Happy Birthday to you!")
Explanation: Exercise 4:
Let's refactor the happy birthday function to have no repetition. Note that previously we print "Happy birthday to you!" three times. Make another function happy_birthday_to_you() that only prints this line and call it inside the function happy_birthday(name).
End of explanation
def multiply(x, y, third_number=1):
Multiply two or three numbers and print the result.
result=x*y*third_number
return result
print(multiply(1+1,6-2))
print(multiply(multiply(4,2),multiply(2,5)))
print(len(str(multiply(10,100))))
Explanation: Exercise 5:
Try to figure out what is going on in the following examples. How does Python deal with the order of calling functions?
End of explanation
def switch_two_values(x,y):
# your code here
a='orange'
b='apple'
a,b = switch_two_values(a,b) # `a` should contain "apple" after this call, and `b` should contain "orange"
print(a,b)
Explanation: Exercise 6:
Complete this code to switch the values of two variables:
End of explanation |
15,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").
Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector".
First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
Step3: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature, e.g. 'price', and will return two things
Step4: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output
Step5: Predicting output given regression weights
Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0. This is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this
Step6: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations are just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights
Step7: If you want to test your code run the following cell
Step8: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows
Step9: To test your feature derivative run the following
Step10: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria.
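One possible shape for such a function, built on top of the predict_output function and the feature derivative function described above, is sketched below (the exact function and argument names, and the loop structure, are illustrative assumptions rather than the assignment's own skeleton):
from math import sqrt
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    weights = np.array(initial_weights)  # make sure the weights are a numpy array
    while not converged:
        # compute the predictions and errors for the current weights
        predictions = predict_output(feature_matrix, weights)
        errors = predictions - output
        gradient_sum_squares = 0
        for i in range(len(weights)):
            # derivative of the cost with respect to weight i
            derivative = feature_derivative(errors, feature_matrix[:, i])
            gradient_sum_squares += derivative ** 2
            # move in the negative gradient direction
            weights[i] = weights[i] - step_size * derivative
        # stop once the magnitude of the gradient drops below the tolerance
        gradient_magnitude = sqrt(gradient_sum_squares)
        if gradient_magnitude < tolerance:
            converged = True
    return weights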
Step11: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.
Running the Gradient Descent as Simple Regression
First let's split the data into training and test data.
Step12: Although the gradient descent is designed for multiple regression, since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model
Step13: Next run your gradient descent with the above parameters.
Step14: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Quiz Question
Step15: Now compute your predictions using test_simple_feature_matrix and your weights from above.
Step16: Quiz Question
Step17: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
Step18: Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters
Step19: Use the above parameters to estimate the model weights. Record these values for your quiz.
Step20: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Quiz Question
Step21: What is the actual price for the 1st house in the test data set?
Step22: Quiz Question
Step23: Quiz Question | Python Code:
import graphlab
graphlab.product_key.set_product_key("C0C2-04B4-D94B-70F6-8771-86F9-C6E1-E122")
Explanation: Regression Week 2: Multiple Regression (gradient descent)
In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.
In this notebook we will cover estimating multiple regression weights via gradient descent. You will:
* Add a constant column of 1's to a graphlab SFrame to account for the intercept
* Convert an SFrame into a Numpy array
* Write a predict_output() function using Numpy
* Write a numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.
* Use the gradient descent function to estimate regression weights for multiple features
Fire up graphlab create
Make sure you have the latest version of graphlab (>= 1.7)
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional "array").
Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector".
First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.
End of explanation
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and a target feature, e.g. 'price', and will return two things:
* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')
* A numpy array containing the values of the output
With this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)
Please note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list
print(example_features[0,:]) # this accesses the first row of the data; the ':' indicates 'all columns'
print(example_output[0]) # and the corresponding output
Explanation: For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:
End of explanation
my_weights = np.array([1., 1.]) # the example weights
my_features = example_features[0,] # we'll use the first data point
predicted_value = np.dot(my_features, my_weights)
print predicted_value
Explanation: Predicting output given regression weights
Suppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0. This is the dot product between these two arrays. If they're numpy arrays we can use np.dot() to compute this:
End of explanation
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
Explanation: np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
test_predictions = predict_output(example_features, my_weights)
print test_predictions[0] # should be 1181.0
print test_predictions[1] # should be 2571.0
Explanation: If you want to test your code run the following cell:
End of explanation
def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2*np.dot(errors, feature)
return(derivative)
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:
(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2
Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:
2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]
The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as:
2*error*[feature_i]
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors.
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([0., 0.]) # this makes all the predictions 0
test_predictions = predict_output(example_features, my_weights)
# just like SFrames 2 numpy arrays can be elementwise subtracted with '-':
errors = test_predictions - example_output # prediction errors in this case is just the -example_output
feature = example_features[:,0] # let's compute the derivative with respect to 'constant', the ":" indicates "all rows"
derivative = feature_derivative(errors, feature)
print derivative
print -np.sum(example_output)*2 # should be the same as derivative
Explanation: To test your feature derivative run the following:
End of explanation
from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
gradient_magnitude = 0
while not converged:
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient sum of squares
# while we haven't reached the tolerance yet, update each feature's weight
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
            derivative = feature_derivative(errors, feature_matrix[:, i])
            # add the squared value of the derivative to the gradient magnitude (for assessing convergence)
            gradient_sum_squares += derivative * derivative
            # subtract the step size times the derivative from the current weight
            weights[i] -= step_size * derivative
        # compute the square-root of the gradient sum of squares to get the gradient magnitude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect "tolerance" to be small, small is only relative to the size of the features.
For similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.
Running the Gradient Descent as Simple Regression
First let's split the data into training and test data.
End of explanation
# let's test out the gradient descent
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
Explanation: Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimate the parameters in the simple regression on squarefeet. The following cell sets up the feature_matrix, output, initial weights and step size for the first model:
End of explanation
simple_weights = regression_gradient_descent(simple_feature_matrix, output,initial_weights, step_size,tolerance)
print simple_weights
Explanation: Next run your gradient descent with the above parameters.
End of explanation
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
Explanation: How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)?
Quiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?
Use your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first):
End of explanation
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
Explanation: Now compute your predictions using test_simple_feature_matrix and your weights from above.
End of explanation
print test_predictions[0]
Explanation: Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?
End of explanation
rss = 0
for i in range(0, len(test_predictions)):
error = test_predictions[i] - test_data['price'][i]
rss += error * error
print rss
Explanation: Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
End of explanation
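As a side note, because the predictions and true outputs are numpy arrays, the same RSS can be computed without an explicit loop; a minimal vectorized sketch (using the test_output array built above) is:
# vectorized residual sum of squares; should match the looped rss above
rss_vectorized = np.sum((test_predictions - test_output) ** 2)
print rss_vectorized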
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
Explanation: Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
End of explanation
simple_weights = regression_gradient_descent(feature_matrix, output,initial_weights, step_size, tolerance)
print simple_weights
Explanation: Use the above parameters to estimate the model weights. Record these values for your quiz.
End of explanation
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
test_predictions = predict_output(test_simple_feature_matrix, simple_weights)
Explanation: Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?
End of explanation
print test_predictions[0]
Explanation: What is the actual price for the 1st house in the test data set?
End of explanation
test_data[0]
Explanation: Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?
Now use your predictions and the output to compute the RSS for model 2 on TEST data.
End of explanation
rss = 0
for i in range(0, len(test_predictions)):
error = test_predictions[i] - test_data['price'][i]
rss += error * error
print rss
Explanation: Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?
End of explanation |
15,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve
For this problem we are going to work with the following model
Step2: First, generate a dataset using this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
from IPython.html.widgets import interact
Explanation: Fitting Models Exercise 1
Imports
End of explanation
a_true = 0.5
b_true = 2.0
c_true = -4.0
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
# YOUR CODE HERE
xdata=np.linspace(-5,5,30)
N=30
dy=2.0
def ymodel(x, a, b, c):
    return a*x**2 + b*x + c
ydata = a_true*xdata**2 + b_true*xdata + c_true + np.random.normal(0.0, dy, size=N)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
assert True # leave this cell for grading the raw data generation and plot
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
# YOUR CODE HERE
def chi2(theta, x, y, dy):
    # theta = [a, b, c] for the quadratic model y = a*x**2 + b*x + c
    return np.sum(((y - (theta[0] * x**2 + theta[1] * x + theta[2])) / dy) ** 2)
def manual_fit(a, b, c):
modely = a*xdata**2 + b*xdata +c
plt.plot(xdata, modely)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y')
plt.text(1, 15, 'a={0:.2f}'.format(a))
plt.text(1, 12.5, 'b={0:.2f}'.format(b))
plt.text(1, 10, 'c={0:.2f}'.format(c))
plt.text(1, 8.0, '$\chi^2$={0:.2f}'.format(chi2([a,b,c],xdata,ydata, dy)))
interact(manual_fit, a=(-3.0,3.0,0.01), b=(0.0,4.0,0.01),c=(-5,5,0.1));
def deviations(theta, x, y, dy):
    # weighted residuals for the quadratic model, theta = [a, b, c]
    return (y - (theta[0] * x**2 + theta[1] * x + theta[2])) / dy
theta_guess = [1.0, 1.0, 0.0]  # rough starting point for [a, b, c]
result = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True)
theta_best = result[0]
theta_cov = result[1]
theta_mov = result[2]
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation |
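As an optional cross-check (not required by the exercise), scipy.optimize.curve_fit can return both the best-fit parameters and their covariance in one call; a minimal sketch assuming the xdata, ydata and dy defined above:
def quadratic(x, a, b, c):
    # same quadratic model as above
    return a * x**2 + b * x + c

theta_fit, theta_cov2 = opt.curve_fit(quadratic, xdata, ydata, sigma=dy * np.ones(len(xdata)))
for name, value, err in zip(['a', 'b', 'c'], theta_fit, np.sqrt(np.diag(theta_cov2))):
    print('{0} = {1:.3f} +/- {2:.3f}'.format(name, value, err))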
15,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rapid Overview
build intuition about pandas
details later
documentation
Step1: Basic series; default integer index
documentation
Step2: datetime index
documentation
Step3: sample NumPy data
sample data frame, with column headers; uses our dates_index
documentation | Python Code:
import pandas as pd
import numpy as np
Explanation: Rapid Overview
build intuition about pandas
details later
documentation: http://pandas.pydata.org/pandas-docs/stable/10min.html
End of explanation
my_series = pd.Series([1,3,5,np.nan,6,8])
my_series
Explanation: Basic series; default integer index
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
End of explanation
my_dates_index = pd.date_range('20160101', periods=6)
my_dates_index
Explanation: datetime index
documentation: http://pandas.pydata.org/pandas-docs/stable/timeseries.html
End of explanation
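As a small sketch, the datetime index can be used to label a series of random values:
my_dated_series = pd.Series(np.random.randn(6), index=my_dates_index)
my_dated_series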
df_from_dictionary = pd.DataFrame({
'float' : 1.,
'time' : pd.Timestamp('20160825'),
'series' : pd.Series(1,index=list(range(4)),dtype='float32'),
'array' : np.array([3] * 4,dtype='int32'),
'categories' : pd.Categorical(["test","train","taxes","tools"]),
'dull' : 'boring data'
})
df_from_dictionary
Explanation: sample NumPy data
sample data frame, with column headers; uses our dates_index (a sketch of that NumPy-based frame is added right after this cell)
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
data frame from a Python dictionary (the cell above)
End of explanation |
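As referenced above, a sketch of a data frame built from sample NumPy data with column headers and our dates_index (the column names here are just illustrative):
df_from_numpy = pd.DataFrame(np.random.randn(6, 4), index=my_dates_index, columns=list('ABCD'))
df_from_numpy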
15,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will be introduced to the main data preprocessing techniques and will apply them to train a logistic regression model. The answers have to be uploaded to the corresponding form as 6 text files.
Step1: Dataset description
The task
Step2: Data preprocessing
A basic step in preprocessing any dataset for logistic regression is encoding the categorical features, as well as removing or interpreting missing values (if either is present).
Step3: We can see that the dataset contains both numeric and categorical features. Let's get the lists of their names
Step4: It also contains missing values. An obvious solution would be to drop every row that has at least one missing value. Let's do that
Step5: We can see that this would throw away almost all of the data, so this approach will not work here.
Missing values can also be interpreted; there are several ways to do this, and they differ for categorical and real-valued features.
For real-valued features
Step6: Transforming categorical features.
In the previous cell we split our dataset into two more parts
Step7: To build a quality metric for the trained model, the original dataset has to be split into training and test sets.
Step8: Task 1. Comparing ways of filling in missing real values.
Build two training sets out of the real-valued and categorical features
Step9: Scaling the real-valued features.
Let's try to improve the classification quality.
Step10: Comparing the feature spaces.
Step11: As the plots show, we have not changed the properties of the feature space
Step12: Task 3. Class balancing.
Train the logistic regression and its hyperparameters with class balancing via weights (the regression's class_weight='balanced' parameter) on the scaled sets obtained in the previous task. Make sure you have found the maximum of accuracy over the hyperparameters.
Compute the ROC AUC metric on the test set.
Balance the sample by upsampling objects from the smaller class into it. To get the indices of the objects that need to be added to the training set, use the following combination of function calls
Step13: Task 4. Sample stratification.
Split X_real_zeros and X_cat_oh into train and test using stratification.
Scale the new real-valued sets, train the classifier and its hyperparameters with cross-validation, correcting for the imbalanced classes with weights. Make sure you have found the optimum of accuracy over the hyperparameters.
Evaluate the classifier with the AUC ROC metric on the test set.
Pass the resulting answer to the write_answer_4 function
Step14: Task 5. Transforming the real-valued features.
Implement a transformation of the model's real-valued features with polynomial features of degree 2
Build a logistic regression on the new data while selecting the optimal hyperparameters. Note that the transformed features already contain a column whose values are all equal to 1, so there is no need to additionally fit the value $b$; its role is played by one of the weights $w$. Therefore, to avoid linear dependence in the dataset, the parameter fit_intercept=False has to be passed to the logistic regression class. For training, use the stratified sets with class balancing via weights; the transformed features have to be scaled again.
Compute the AUC ROC on the test set and compare this result with the one obtained with the ordinary features.
Pass the resulting answer to the write_answer_5 function.
Step15: Task 6. Feature selection with Lasso regression.
Train a Lasso regression on the stratified scaled sets, using class balancing via weights.
Compute the regression's ROC AUC and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Передайте их список функции write_answer_6. | Python Code:
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
Explanation: Data preprocessing and logistic regression for a binary classification problem
Programming assignment
In this assignment you will be introduced to the main data preprocessing techniques and will apply them to train a logistic regression model. The answers have to be uploaded to the corresponding form as 6 text files.
End of explanation
data = pd.read_csv('data.csv')
data.shape
X = data.drop('Grant.Status', 1)
y = data['Grant.Status']
Explanation: Dataset description
The task: using 38 features related to a grant application (the applicants' research area, information on their academic background, the grant size, the area in which it is awarded), predict whether the application will be accepted. The dataset contains information on 6000 grant applications submitted at the University of Melbourne between 2004 and 2008.
The full version of the data, with a larger number of features, can be found at https://www.kaggle.com/c/unimelb.
End of explanation
data.head()
Explanation: Data preprocessing
A basic step in preprocessing any dataset for logistic regression is encoding the categorical features, as well as removing or interpreting missing values (if either is present).
End of explanation
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
Explanation: We can see that the dataset contains both numeric and categorical features. Let's get the lists of their names:
End of explanation
data.dropna().shape
Explanation: It also contains missing values. An obvious solution would be to drop every row that has at least one missing value. Let's do that:
End of explanation
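A quick way to see how widespread the missing values are is to count them per column; a small sketch:
# number of missing values in each column (sketch)
data.isnull().sum()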
def calculate_means(numeric_data):
means = np.zeros(numeric_data.shape[1])
for j in range(numeric_data.shape[1]):
to_sum = numeric_data.iloc[:,j]
indices = np.nonzero(~numeric_data.iloc[:,j].isnull())[0]
correction = np.amax(to_sum[indices])
to_sum /= correction
for i in indices:
means[j] += to_sum[i]
means[j] /= indices.size
means[j] *= correction
return pd.Series(means, numeric_data.columns)
means = calculate_means(X[numeric_cols])
means['RFCD.Percentage.1']
def NanChZero(datafr):
datawork = datafr
for item in datawork.columns:
datawork[item].fillna(0, inplace=True)
return datawork
def NanChMeans(datafr):
datawork = datafr
means = calculate_means(X[numeric_cols])
for item in datawork.columns:
datawork[item].fillna(means[item], inplace=True)
return datawork
X_real_zeros=X[numeric_cols].fillna(0)
X_real_mean=NanChMeans(X[numeric_cols])
def NanChNA(datafr):
datawork = datafr
for item in datawork.columns:
datawork[item].fillna('NA', inplace=True)
for item in datawork.columns:
datawork[item]=datawork[item].astype(str)
return datawork
X_cat = X[categorical_cols].applymap(str).fillna('NA')
Explanation: We can see that this would throw away almost all of the data, so this approach will not work here.
Missing values can also be interpreted; there are several ways to do this, and they differ for categorical and real-valued features.
For real-valued features:
- replace with 0 (the feature will then contribute nothing to the prediction for that object)
- replace with the mean (each missing feature will contribute as much as the feature's mean value over the dataset)
For categorical features:
- interpret the missing value as one more category (this is the most natural approach, because with categories we have the unique opportunity not to lose the information that a value was missing; note that with real-valued features this information is inevitably lost)
Task 0. Handling missing values.
Fill the missing real values in X with zeros and with column means, naming the resulting dataframes X_real_zeros and X_real_mean respectively.
Convert all categorical features in X to strings; missing values also have to be converted to strings that are not categories (for example, 'NA'); name the resulting dataframe X_cat.
End of explanation
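As a side note, a simpler sketch for the mean filling relies on pandas built-ins instead of the manual calculate_means loop above (the result should be essentially the same):
# fill real-valued NaNs with per-column means computed by pandas (sketch)
X_real_mean_alt = X[numeric_cols].fillna(X[numeric_cols].mean())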
from sklearn.feature_extraction import DictVectorizer as DV
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
Explanation: Transforming categorical features.
In the previous cell we split our dataset into two more parts: one contains only the real-valued features, the other only the categorical ones. We will need this for the separate subsequent processing of these data, as well as for comparing the quality of different methods.
To use a regression model, the categorical features have to be converted into real-valued ones. The main way to convert categorical features into real-valued ones is one-hot encoding. Its idea is that we encode a categorical feature with a binary code: each category is mapped to a set of zeros and ones.
End of explanation
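For comparison, pandas has a one-liner that produces a similar one-hot encoding (a sketch; the column naming differs from DictVectorizer):
X_cat_oh_pandas = pd.get_dummies(X_cat)
print X_cat_oh_pandas.shape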
from sklearn.cross_validation import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
Explanation: To build a quality metric for the trained model, the original dataset has to be split into training and test sets.
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores = [[item[0]['C'],
item[1],
(np.sum((item[2]-item[1])**2)/(item[2].size-1))**0.5] for item in optimizer.grid_scores_]
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
auc = (auc_1 + auc_2)/2
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(str(auc))
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 3
dataZeros_train = np.hstack((X_train_real_zeros,X_train_cat_oh))
dataZeros_test = np.hstack((X_test_real_zeros,X_test_cat_oh))
dataMean_train =np.hstack((X_train_real_mean,X_train_cat_oh))
dataMean_test =np.hstack((X_test_real_mean,X_test_cat_oh))
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataZeros_train,y_train)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(dataZeros_test)[:10]
print 'Вероятности(т.к. функция логистическая):\n'
print optimizer.predict_proba(dataZeros_test)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_zeros=roc_auc_score(y_test, optimizer.predict_proba(dataZeros_test)[:,1])
print auc_zeros
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataMean_train,y_train)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(dataMean_test)[:10]
print 'Вероятности(т.к. функция логистическая):\n'
print optimizer.predict_proba(dataMean_test)[:10]
print optimizer.predict_proba(dataMean_test)[:,1]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_mean=roc_auc_score(y_test, optimizer.predict_proba(dataMean_test)[:,1])
print auc_mean
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(X_train_cat_oh,y_train)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(X_test_cat_oh)[:10]
print 'Вероятности(т.к. функция логистическая):\n'
print optimizer.predict_proba(X_test_cat_oh)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
roc_auc_score(y_test, optimizer.predict_proba(X_test_cat_oh)[:,1])
auc_zeros
auc_mean
write_answer_1(auc_zeros, auc_mean)
Explanation: Task 1. Comparing ways of filling in missing real values.
Build two training sets out of the real-valued and categorical features: one with the real-valued features where missing values are filled with zeros, the other where they are filled with the means.
Train a logistic regression on each of them, selecting the parameters from the given param_grid grid by cross-validation with cv=3 folds.
Plot two graphs of the accuracy estimates +- their standard deviation as a function of the hyperparameter and make sure you have really found the maximum. Also note the large variance of the obtained estimates (it can be reduced by increasing the number of folds cv).
Compute the two AUC ROC quality metrics on the test set and compare them with each other. Which way of filling the missing real values works better? From now on, use as the real-valued features the set that gives the better quality on the test.
Pass the two AUC ROC values (first for the set filled with the means, then for the set filled with zeros) to the write_answer_1 function and run it. The resulting file is the answer to task 1.
Generally speaking, it is not entirely logical to optimize the accuracy functional (the default in the logistic regression class) on cross-validation while measuring AUC ROC on the test, but this, like the limit on the sample size, is done to speed up the cross-validation process.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit the scaler on the training data only, then apply the same transformation to the test data
X_train_real_scaled = scaler.fit_transform(X_train_real_mean)
X_test_real_scaled = scaler.transform(X_test_real_mean)
X_test_real_scaled
Explanation: Scaling the real-valued features.
Let's try to improve the classification quality.
End of explanation
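One way to make it harder to accidentally fit the scaler on test data is to chain scaling and the classifier into a single Pipeline; a sketch (not used in the rest of the assignment):
from sklearn.pipeline import Pipeline
# scaling and classification chained together, so cross-validation refits both on every fold (sketch)
scaled_lr = Pipeline([('scaler', StandardScaler()),
                      ('clf', LogisticRegression(penalty='l2', random_state=0))])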
try:
    from pandas.plotting import scatter_matrix
except ImportError:
    from pandas.tools.plotting import scatter_matrix  # older pandas versions
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
Explanation: Comparing the feature spaces.
End of explanation
dataMean_train =np.hstack((X_train_real_scaled,X_train_cat_oh))
dataMean_test =np.hstack((X_test_real_scaled,X_test_cat_oh))
estimator=LogisticRegression(penalty='l2', random_state = 0, n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataMean_train,y_train)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(dataMean_test)[:10]
print 'Вероятности(т.к. функция логистическая):\n'
print optimizer.predict_proba(dataMean_test)[:10]
print optimizer.predict_proba(dataMean_test)[:,1]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_mean=roc_auc_score(y_test, optimizer.predict_proba(dataMean_test)[:,1])
print auc_mean
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
write_answer_2(auc_mean)
Explanation: As the plots show, we have not changed the properties of the feature space: the histograms of the feature value distributions, just like their scatter plots, look the same as before the normalization, but now all values lie in roughly the same range, which improves the interpretability of the results and also fits better with the idea of regularization.
Task 2. Comparing the classification quality before and after scaling the real-valued features.
Train the regression and its hyperparameters once more on the new features, combining them with the encoded categorical ones.
Check whether the optimum of accuracy over the hyperparameters was found during cross-validation.
Compute the ROC AUC value on the test set and compare it with the best result obtained earlier.
Write the resulting answer to a file with the write_answer_2 function.
End of explanation
dataMean_train =np.hstack((X_train_real_scaled,X_train_cat_oh))
dataMean_test =np.hstack((X_test_real_scaled,X_test_cat_oh))
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataMean_train,y_train)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(dataMean_test)[:10]
print 'Вероятности(т.к. функция логистическая):\n'
print optimizer.predict_proba(dataMean_test)[:10]
print optimizer.predict_proba(dataMean_test)[:,1]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_mean_2=roc_auc_score(y_test, optimizer.predict_proba(dataMean_test)[:,1])
print auc_mean_2
np.random.seed(0)
number_to_add = np.sum(y_train==0)-np.sum(y_train==1)
indices_to_add = np.random.randint(np.sum(y_train==1), size = number_to_add)
X_train_to_add = dataMean_train[y_train.as_matrix() == 1,:][indices_to_add,:]
X_train_enhanced=np.append(dataMean_train,X_train_to_add, axis=0)
y_train_enhanced=np.append(y_train,(np.ones((number_to_add,1))))
print X_train_enhanced.shape
print y_train_enhanced.shape
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(X_train_enhanced,y_train_enhanced)
print 'Метки функции предсказаний:\n'
print y_test[:10]
print optimizer.predict(dataMean_test)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_mean_enhanced=roc_auc_score(y_test, optimizer.predict_proba(dataMean_test)[:,1])
print auc_mean_enhanced
def write_answer_3(auc_1, auc_2):
auc = (auc_1 + auc_2) / 2
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(str(auc))
write_answer_3(auc_mean_2, auc_mean_enhanced)
Explanation: Task 3. Class balancing.
Train the logistic regression and its hyperparameters with class balancing via weights (the regression's class_weight='balanced' parameter) on the scaled sets obtained in the previous task. Make sure you have found the maximum of accuracy over the hyperparameters.
Compute the ROC AUC metric on the test set.
Balance the sample by upsampling objects from the smaller class into it. To get the indices of the objects that need to be added to the training set, use the following combination of function calls:
np.random.seed(0)
indices_to_add = np.random.randint(...)
X_train_to_add = X_train[y_train.as_matrix() == 1,:][indices_to_add,:]
After that, add these objects to the beginning or the end of the training set. Extend the vector of answers accordingly.
Compute the ROC AUC metric on the test set and compare it with the previous result.
Write the answers to the output file with the write_answer_3 function, passing it first the ROC AUC for balancing with weights and then for balancing the sample manually.
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import train_test_split
(X_train_real_zeros_str,
X_test_real_zeros_str,
y_train_str, y_test_str) = train_test_split(X_real_zeros, y,
stratify=y,
test_size=0.3,
random_state=0)
(X_train_cat_oh_str,
X_test_cat_oh_str) = train_test_split(X_cat_oh,stratify=y,
test_size=0.3,
random_state=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_real_scaled_str=scaler.fit_transform(X_train_real_zeros_str)
X_test_real_scaled_str=scaler.transform(X_test_real_zeros_str)
dataZeros_train_str = np.hstack((X_train_real_scaled_str,X_train_cat_oh_str))
dataZeros_test_str = np.hstack((X_test_real_scaled_str,X_test_cat_oh_str))
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataZeros_train_str,y_train_str)
print 'Метки функции предсказаний:\n'
print y_test_str[:10]
print optimizer.predict(dataZeros_test_str)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
print 'параметр accuracy - ',optimizer.best_score_
plot_scores(optimizer)
auc_mean_str=roc_auc_score(y_test_str, optimizer.predict_proba(dataZeros_test_str)[:,1])
print auc_mean_str
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
write_answer_4(auc_mean_str)
Explanation: Task 4. Sample stratification.
Split X_real_zeros and X_cat_oh into train and test using stratification.
Scale the new real-valued sets, train the classifier and its hyperparameters with cross-validation, correcting for the imbalanced classes with weights. Make sure you have found the optimum of accuracy over the hyperparameters.
Evaluate the classifier with the AUC ROC metric on the test set.
Pass the resulting answer to the write_answer_4 function
End of explanation
(X_train_poly,
X_test_poly,
y_train_poly, y_test_poly) = train_test_split(X_real_zeros, y,
stratify=y,
test_size=0.3,
random_state=0)
(X_train_cat_oh_poly,
X_test_cat_oh_poly) = train_test_split(X_cat_oh, stratify=y,
test_size=0.3,
random_state=0)
from sklearn.preprocessing import PolynomialFeatures
transform_str = PolynomialFeatures(2)
Data_train_poly = transform_str.fit_transform(X_train_poly)
Data_test_poly = transform_str.transform(X_test_poly)
scaler2 = StandardScaler()
X_train_poly_2 = scaler2.fit_transform(Data_train_poly)
X_test_poly_2=scaler2.transform(Data_test_poly)
dataZeros_train_poly = np.hstack((X_train_poly_2,X_train_cat_oh_poly))
dataZeros_test_poly = np.hstack((X_test_poly_2,X_test_cat_oh_poly))
estimator=LogisticRegression(penalty='l2', random_state = 0, class_weight='balanced', n_jobs=-1, fit_intercept=False)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataZeros_train_poly,y_train_poly)
print 'Метки функции предсказаний:\n'
print y_test_poly[:10]
print optimizer.predict(dataZeros_test_poly)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_zeros_str_poly=roc_auc_score(y_test_poly, optimizer.predict_proba(dataZeros_test_poly)[:,1])
print auc_zeros_str_poly
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
write_answer_5(auc_zeros_str_poly)
Explanation: Task 5. Transforming the real-valued features.
Implement a transformation of the model's real-valued features with polynomial features of degree 2
Build a logistic regression on the new data while selecting the optimal hyperparameters. Note that the transformed features already contain a column whose values are all equal to 1, so there is no need to additionally fit the value $b$; its role is played by one of the weights $w$. Therefore, to avoid linear dependence in the dataset, the parameter fit_intercept=False has to be passed to the logistic regression class. For training, use the stratified sets with class balancing via weights; the transformed features have to be scaled again.
Compute the AUC ROC on the test and compare this result with the one obtained with the ordinary features.
Pass the resulting answer to the write_answer_5 function.
End of explanation
(X_train_real_zeros_lasso,
X_test_real_zeros_lasso,
y_train_lasso, y_test_lasso) = train_test_split(X_real_zeros, y,
stratify=y,
test_size=0.3,
random_state=0)
(X_train_cat_oh_lasso,
X_test_cat_oh_lasso) = train_test_split(X_cat_oh, stratify=y,
test_size=0.3,
random_state=0)
scaler = StandardScaler()
X_train_real_scaled_lasso=scaler.fit_transform(X_train_real_zeros_lasso)
X_test_real_scaled_lasso=scaler.transform(X_test_real_zeros_lasso)
dataZeros_train_lasso = np.hstack((X_train_real_scaled_lasso,X_train_cat_oh_str))
dataZeros_test_lasso = np.hstack((X_test_real_scaled_lasso,X_test_cat_oh_str))
estimator=LogisticRegression(penalty='l1', random_state = 0, class_weight='balanced', n_jobs=4)
optimizer = GridSearchCV(estimator, param_grid, scoring = 'accuracy', cv=cv)
optimizer.fit(dataZeros_train_lasso,y_train_str)
print 'Метки функции предсказаний:\n'
print y_test_str[:10]
print optimizer.predict(dataZeros_test_lasso)[:10]
print 'Лучшая функция:\n'
print optimizer.best_estimator_
print 'Коэффициенты лучшей функции:\n'
print optimizer.best_estimator_.coef_[0][:13]
print 'Лучший параметр функции:\n'
print optimizer.best_params_
plot_scores(optimizer)
auc_zero_lasso=roc_auc_score(y_test_str, optimizer.predict_proba(dataZeros_test_lasso)[:,1])
print auc_zero_lasso
optimizer.best_estimator_.coef_[0,:,]
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
# pass the indices of the real-valued features whose Lasso weights are exactly zero
zero_weight_features = np.where(optimizer.best_estimator_.coef_[0][:13] == 0)[0]
write_answer_6(zero_weight_features)
Explanation: Task 6. Feature selection with Lasso regression.
Train a Lasso regression on the stratified scaled sets, using class balancing via weights.
Compute the regression's ROC AUC and compare it with the previous results.
Find the indices of the real-valued features that have zero weights in the final model.
Pass their list to the write_answer_6 function.
End of explanation |
15,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: PWC-Net-small model finetuning (with cyclical learning rate schedule)
In this notebook we
Step2: TODO
Step3: Finetune on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
Step4: Configure the finetuning
Step5: Finetune the model | Python Code:
pwcnet_finetune.ipynb
PWC-Net model finetuning.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Tensorboard:
[win] tensorboard --logdir=E:\\repos\\tf-optflow\\tfoptflow\\pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned
[ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned
from __future__ import absolute_import, division, print_function
import sys
from copy import deepcopy
from dataset_base import _DEFAULT_DS_TUNE_OPTIONS
from dataset_flyingchairs import FlyingChairsDataset
from dataset_flyingthings3d import FlyingThings3DHalfResDataset
from dataset_mixer import MixedDataset
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_FINETUNE_OPTIONS
Explanation: PWC-Net-small model finetuning (with cyclical learning rate schedule)
In this notebook we:
- Use a small model (no dense or residual connections), 6 level pyramid, upsample level 2 by 4 as the final flow prediction
- Train the PWC-Net-small model on a mix of the FlyingChairs and FlyingThings3DHalfRes dataset using a Cyclic<sub>short</sub> schedule of our own
- Let the Cyclic<sub>short</sub> schedule oscillate between 2e-05 and 1e-06 for 200,000 steps
- Switch to the "robust" loss described in the paper, instead of the "multiscale" loss used during training
Below, look for TODO references and customize this notebook based on your own needs.
Reference
[2018a]<a name="2018a"></a> Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [arXiv] [web] [PyTorch (Official)] [Caffe (Official)]
End of explanation
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
_DATASET_ROOT = 'E:/datasets/'
else:
_DATASET_ROOT = '/media/EDrive/datasets/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'
# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training
# Set controller device and devices
# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']
# Here, we use a dual-GPU setup, as shown below
# gpu_devices = ['/device:GPU:0', '/device:GPU:1']
# controller = '/device:CPU:0'
gpu_devices = ['/device:GPU:0']
controller = '/device:GPU:0'
# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)
# Batch size
batch_size = 8
Explanation: TODO: Set this first!
End of explanation
# TODO: You MUST set the batch size based on the capabilities of your GPU(s)
# Load train dataset
ds_opts = deepcopy(_DEFAULT_DS_TUNE_OPTIONS)
ds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them
ds_opts['aug_type'] = 'heavy' # Apply all supported augmentations
ds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)
ds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size
ds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)
ds_opts['type'] = 'into_future'
ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
ds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)
# Display dataset configuration
ds.print_config()
Explanation: Finetune on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
End of explanation
# Start from the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_FINETUNE_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_path'] = './models/pwcnet-sm-6-2-cyclic-chairsthingsmix/pwcnet.ckpt-49000'
nn_opts['ckpt_dir'] = './pwcnet-sm-6-2-cyclic-chairsthingsmix_finetuned/'
nn_opts['batch_size'] = ds_opts['batch_size']
nn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]
nn_opts['y_shape'] = [ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 2]
nn_opts['use_tf_data'] = True # Use tf.data reader
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller
# Use the PWC-Net-small model in quarter-resolution mode
nn_opts['use_dense_cx'] = False
nn_opts['use_res_cx'] = False
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2
# Robust loss as described doesn't work, so try the following:
nn_opts['loss_fn'] = 'loss_multiscale' # 'loss_multiscale' # 'loss_robust' # 'loss_robust'
nn_opts['q'] = 1. # 0.4 # 1. # 0.4 # 1.
nn_opts['epsilon'] = 0. # 0.01 # 0. # 0.01 # 0.
# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.
# Below, we adjust the schedule to the size of the batch and the number of GPUs.
nn_opts['lr_policy'] = 'multisteps'
nn_opts['init_lr'] = 1e-05
nn_opts['lr_boundaries'] = [80000, 120000, 160000, 200000]
nn_opts['lr_values'] = [1e-05, 5e-06, 2.5e-06, 1.25e-06, 6.25e-07]
nn_opts['max_steps'] = 200000
# Below, we adjust the schedule to the size of the batch and the number of GPUs configured above.
nn_opts['max_steps'] = int(nn_opts['max_steps'] * 8 / ds_opts['batch_size'])
nn_opts['cyclic_lr_stepsize'] = int(nn_opts['cyclic_lr_stepsize'] * 8 / ds_opts['batch_size'])
# Instantiate the model and display the model configuration
nn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)
nn.print_config()
Explanation: Configure the finetuning
End of explanation
# Train the model
nn.train()
Explanation: Finetune the model
End of explanation |
15,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Critical Radii
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Detached Systems
Detached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.
Step3: We can see that the default system is well within this critical value by printing all radii and critical radii.
Step4: If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute(). | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Critical Radii: Detached Systems
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b['requiv_max@component@primary']
b['requiv_max@constraint@primary']
Explanation: Detached Systems
Detached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.
End of explanation
print(b.filter(qualifier='requiv*', context='component'))
Explanation: We can see that the default system is well within this critical value by printing all radii and critical radii.
End of explanation
b['requiv@primary'] = 2.2
print(b.run_checks())
Explanation: If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute().
End of explanation |
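If you want to keep working with a physical (detached) system after this check, a small sketch is to lower requiv back below requiv_max and re-run the checks:
b['requiv@primary'] = 1.0  # back below requiv_max for the default binary
print(b.run_checks())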
15,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Truth Data
Our Uruguay data comes in CSV format. It contains three attributes
Step1: Label distribution
In this section, data is binned by landcover and counted. Landcover classes with little to no labels will be unreliable candidates for classification as there may not be enough variance in the training labels to guarantee that the model learns to generalize.
Step2: Re-Labeling
Related classes are combined to boost the number of samples in the new classes.
Step3: Visualize Label Distribution
Step4: Export re-labled data | Python Code:
# imports assumed from an earlier setup cell that is not shown here
import pandas as pd
from matplotlib import pyplot
import seaborn as sns
df = pd.read_csv('../data.csv')
df.head()
Explanation: Load Truth Data
Our Uruguay data comes in CSV format. It contains three attributes:
latitude
longitude
landcover class
End of explanation
df.groupby("LandUse").size()
fig, ax = pyplot.subplots(figsize=(15,3))
sns.countplot(x="LandUse",data=df, palette="Greens_d");
Explanation: Label distribution
In this section, data is binned by landcover and counted. Landcover classes with little to no labels will be unreliable candidates for classification as there may not be enough variance in the training labels to guarantee that the model learns to generalize.
End of explanation
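A related quick check (a sketch) is to look at class proportions rather than raw counts, which makes the imbalance easier to judge:
# fraction of labeled samples in each landcover class (sketch)
df["LandUse"].value_counts(normalize=True)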
df_new = df.copy()
df_new['LandUse'].update(df_new['LandUse'].map(lambda x: "Forest" if x in ["Forestry","Fruittrees","Nativeforest"] else x ))
df_new['LandUse'].update(df_new['LandUse'].map(lambda x: "Misc" if x not in ["Forest","Prairie","Summercrops","Naturalgrassland"] else x ))
df_new.groupby("LandUse").size()
fig, ax = pyplot.subplots(figsize=(15,5))
sns.countplot(x="LandUse",data=df_new, palette="Greens_d");
Explanation: Re-Labeling
Related classes are combined to boost the number of samples in the new classes.
End of explanation
dc_display_map.display_grouped_pandas_rows_as_pins(df_new, group_name= "LandUse")
Explanation: Visualize Label Distribution
End of explanation
output_destination_name = "./relabeled_data.csv"
## Recap of structure
df_new.head()
df_new.to_csv(output_destination_name)
!ls
Explanation: Export re-labeled data
End of explanation |
15,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Transfer Learning for the Audio Domain with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import TensorFlow, Model Maker and other libraries
Among the dependencies that are needed, you'll use TensorFlow and Model Maker. Aside those, the others are for audio manipulation, playing and visualizations.
Step3: The Birds dataset
The Birds dataset is an educational collection of songs from 5 types of birds
Step4: Explore the data
The audios are already split in train and test folders. Inside each split folder, there's one folder for each bird, using their bird_code as name.
The audios are all mono and with 16kHz sample rate.
For more information about each file, you can read the metadata.csv file. It contains all the files' authors, licenses and some more information. You won't need to read it yourself in this tutorial.
Step5: Playing some audio
To get a better understanding of the data, let's listen to a random audio file from the test split.
Note
Step6: Training the Model
When using Model Maker for audio, you have to start with a model spec. This is the base model that your new model will extract information from to learn about the new classes. It also affects how the dataset will be transformed to respect the model spec's parameters like
Step7: Loading the data
Model Maker has the API to load the data from a folder and have it in the expected format for the model spec.
The train and test split are based on the folders. The validation dataset will be created as 20% of the train split.
Note
Step8: Training the model
The audio_classifier module has the create method that creates a model and already starts training it.
You can customize many parameters; for more information you can read more details in the documentation.
On this first try you'll use all the default configurations and train for 100 epochs.
Note
Step9: The accuracy looks good, but it's important to run the evaluation step on the test data and verify your model achieved good results on unseen data.
Step11: Understanding your model
When training a classifier, it's useful to see the confusion matrix. The confusion matrix gives you detailed knowledge of how your classifier is performing on test data.
Model Maker already creates the confusion matrix for you.
Step12: Testing the model [Optional]
You can try the model on a sample audio from the test dataset just to see the results.
First you get the serving model.
Step13: Coming back to the random audio you loaded earlier
Step14: The model created has a fixed input window.
For a given audio file, you'll have to split it in windows of data of the expected size. The last window might need to be filled with zeros.
Step15: You'll loop over all the splitted audio and apply the model for each one of them.
The model you've just trained has 2 outputs
Step16: Exporting the model
The last step is exporting your model to be used on embedded devices or on the browser.
The export method exports both formats for you.
Step17: You can also export the SavedModel version for serving or using on a Python environment. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
! pip install tflite-model-maker tensorflow==2.5
Explanation: Transfer Learning for the Audio Domain with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_audio_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_audio_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_audio_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_audio_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/yamnet/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a custom audio classification model.
The Model Maker library uses transfer learning to simplify the process of training a TensorFlow Lite model using a custom dataset. Retraining a TensorFlow Lite model with your own custom dataset reduces the amount of training data and time required.
It is part of the Codelab to Customize an Audio model and deploy on Android.
You'll use a custom birds dataset and export a TFLite model that can be used on a phone, a TensorFlow.JS model that can be used for inference in the browser and also a SavedModel version that you can use for serving.
Installing dependencies
Model Maker for the Audio domain needs TensorFlow 2.5 to work.
End of explanation
import tensorflow as tf
import tflite_model_maker as mm
from tflite_model_maker import audio_classifier
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
import glob
import random
from IPython.display import Audio, Image
from scipy.io import wavfile
print(f"TensorFlow Version: {tf.__version__}")
print(f"Model Maker Version: {mm.__version__}")
Explanation: Import TensorFlow, Model Maker and other libraries
Among the dependencies that are needed, you'll use TensorFlow and Model Maker. Aside those, the others are for audio manipulation, playing and visualizations.
End of explanation
birds_dataset_folder = tf.keras.utils.get_file('birds_dataset.zip',
'https://storage.googleapis.com/laurencemoroney-blog.appspot.com/birds_dataset.zip',
cache_dir='./',
cache_subdir='dataset',
extract=True)
Explanation: The Birds dataset
The Birds dataset is an educational collection of 5 types of bird songs:
White-breasted Wood-Wren
House Sparrow
Red Crossbill
Chestnut-crowned Antpitta
Azara's Spinetail
The original audio came from Xeno-canto which is a website dedicated to sharing bird sounds from all over the world.
Let's start by downloading the data.
End of explanation
# @title [Run this] Util functions and data structures.
data_dir = './dataset/small_birds_dataset'
bird_code_to_name = {
'wbwwre1': 'White-breasted Wood-Wren',
'houspa': 'House Sparrow',
'redcro': 'Red Crossbill',
'chcant2': 'Chestnut-crowned Antpitta',
'azaspi1': "Azara's Spinetail",
}
birds_images = {
'wbwwre1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg/640px-Henicorhina_leucosticta_%28Cucarachero_pechiblanco%29_-_Juvenil_%2814037225664%29.jpg', # Alejandro Bayer Tamayo from Armenia, Colombia
'houspa': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/52/House_Sparrow%2C_England_-_May_09.jpg/571px-House_Sparrow%2C_England_-_May_09.jpg', # Diliff
'redcro': 'https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Red_Crossbills_%28Male%29.jpg/640px-Red_Crossbills_%28Male%29.jpg', # Elaine R. Wilson, www.naturespicsonline.com
'chcant2': 'https://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Chestnut-crowned_antpitta_%2846933264335%29.jpg/640px-Chestnut-crowned_antpitta_%2846933264335%29.jpg', # Mike's Birds from Riverside, CA, US
'azaspi1': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Synallaxis_azarae_76608368.jpg/640px-Synallaxis_azarae_76608368.jpg', # https://www.inaturalist.org/photos/76608368
}
test_files = os.path.abspath(os.path.join(data_dir, 'test/*/*.wav'))
def get_random_audio_file():
test_list = glob.glob(test_files)
random_audio_path = random.choice(test_list)
return random_audio_path
def show_bird_data(audio_path):
sample_rate, audio_data = wavfile.read(audio_path, 'rb')
bird_code = audio_path.split('/')[-2]
print(f'Bird name: {bird_code_to_name[bird_code]}')
print(f'Bird code: {bird_code}')
display(Image(birds_images[bird_code]))
plttitle = f'{bird_code_to_name[bird_code]} ({bird_code})'
plt.title(plttitle)
plt.plot(audio_data)
display(Audio(audio_data, rate=sample_rate))
print('functions and data structures created')
Explanation: Explore the data
The audios are already split in train and test folders. Inside each split folder, there's one folder for each bird, using their bird_code as name.
The audios are all mono and with 16kHz sample rate.
For more information about each file, you can read the metadata.csv file. It contains all the files' authors, licenses and some more information. You won't need to read it yourself in this tutorial.
End of explanation
random_audio = get_random_audio_file()
show_bird_data(random_audio)
Explanation: Playing some audio
To have a better understanding of the data, let's listen to a random audio file from the test split.
Note: later in this notebook you'll run inference on this audio for testing
End of explanation
spec = audio_classifier.YamNetSpec(
keep_yamnet_and_custom_heads=True,
frame_step=3 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH,
frame_length=6 * audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH)
Explanation: Training the Model
When using Model Maker for audio, you have to start with a model spec. This is the base model from which your new model will extract information to learn about the new classes. It also affects how the dataset will be transformed to respect the model spec's parameters like sample rate and number of channels.
YAMNet is an audio event classifier trained on the AudioSet dataset to predict audio events from the AudioSet ontology.
Its input is expected to be at 16kHz and with 1 channel.
You don't need to do any resampling yourself. Model Maker takes care of that for you.
frame_length decides how long each training sample is; in this case 6 * EXPECTED_WAVEFORM_LENGTH, matching the spec above.
frame_step decides how far apart consecutive training samples start; in this case, the ith sample will start 3 * EXPECTED_WAVEFORM_LENGTH after the (i-1)th sample.
The reason to set these values is to work around some limitation in real world dataset.
For example, in the bird dataset, birds don't sing all the time. They sing, rest and sing again, with noises in between. Having a long frame would help capture the singing, but setting it too long will reduce the number of samples for training.
End of explanation
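# Added sketch (not from the original notebook): a rough idea of how the frame settings above
# trade sample length against the number of training windows. The 60 s, 16 kHz clip is hypothetical
# and the count is approximate.
unit = audio_classifier.YamNetSpec.EXPECTED_WAVEFORM_LENGTH
clip_samples = 60 * 16000
sketch_frame_length = 6 * unit
sketch_frame_step = 3 * unit
sketch_num_windows = 1 + max(0, clip_samples - sketch_frame_length) // sketch_frame_step
print(f'window: {sketch_frame_length} samples, step: {sketch_frame_step} samples, '
      f'~{sketch_num_windows} training windows per clip')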
train_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'train'), cache=True)
train_data, validation_data = train_data.split(0.8)
test_data = audio_classifier.DataLoader.from_folder(
spec, os.path.join(data_dir, 'test'), cache=True)
Explanation: Loading the data
Model Maker has the API to load the data from a folder and have it in the expected format for the model spec.
The train and test split are based on the folders. The validation dataset will be created as 20% of the train split.
Note: The cache=True is important to make training later faster but it will also require more RAM to hold the data. For the birds dataset that is not a problem since it's only 300MB, but if you use your own data you have to pay attention to it.
End of explanation
batch_size = 128
epochs = 100
print('Training the model')
model = audio_classifier.create(
train_data,
spec,
validation_data,
batch_size=batch_size,
epochs=epochs)
Explanation: Training the model
The audio_classifier module has a create method that creates a model and already starts training it.
You can customize many parameters; for more information you can read more details in the documentation.
On this first try you'll use all the default configurations and train for 100 epochs.
Note: The first epoch takes longer than all the other ones because it's when the cache is created. After that each epoch takes close to 1 second.
End of explanation
print('Evaluating the model')
model.evaluate(test_data)
Explanation: The accuracy looks good, but it's important to run the evaluation step on the test data and verify your model achieved good results on unseen data.
End of explanation
def show_confusion_matrix(confusion, test_labels):
  """Compute confusion matrix and normalize."""
confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)
axis_labels = test_labels
ax = sns.heatmap(
confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
cmap='Blues', annot=True, fmt='.2f', square=True)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
confusion_matrix = model.confusion_matrix(test_data)
show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)
Explanation: Understanding your model
When training a classifier, it's useful to see the confusion matrix. The confusion matrix gives you detailed knowledge of how your classifier is performing on test data.
Model Maker already creates the confusion matrix for you.
End of explanation
serving_model = model.create_serving_model()
print(f'Model\'s input shape and type: {serving_model.inputs}')
print(f'Model\'s output shape and type: {serving_model.outputs}')
Explanation: Testing the model [Optional]
You can try the model on a sample audio from the test dataset just to see the results.
First you get the serving model.
End of explanation
# if you want to try another file just uncomment the line below
random_audio = get_random_audio_file()
show_bird_data(random_audio)
Explanation: Coming back to the random audio you loaded earlier
End of explanation
sample_rate, audio_data = wavfile.read(random_audio, 'rb')
audio_data = np.array(audio_data) / tf.int16.max
input_size = serving_model.input_shape[1]
splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0)
print(f'Test audio path: {random_audio}')
print(f'Original size of the audio data: {len(audio_data)}')
print(f'Number of windows for inference: {len(splitted_audio_data)}')
Explanation: The model created has a fixed input window.
For a given audio file, you'll have to split it into windows of data of the expected size. The last window might need to be filled with zeros.
End of explanation
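# Added illustration on a toy signal: tf.signal.frame zero-pads the last window, as described above.
toy_signal = tf.range(10, dtype=tf.float32)
toy_windows = tf.signal.frame(toy_signal, frame_length=4, frame_step=4, pad_end=True, pad_value=0)
print(toy_windows.numpy())  # the last row is padded with zeros to the full window size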
print(random_audio)
results = []
print('Result of the window ith: your model class -> score, (spec class -> score)')
for i, data in enumerate(splitted_audio_data):
yamnet_output, inference = serving_model(data)
results.append(inference[0].numpy())
result_index = tf.argmax(inference[0])
spec_result_index = tf.argmax(yamnet_output[0])
t = spec._yamnet_labels()[spec_result_index]
result_str = f'Result of the window {i}: ' \
f'\t{test_data.index_to_label[result_index]} -> {inference[0][result_index].numpy():.3f}, ' \
f'\t({spec._yamnet_labels()[spec_result_index]} -> {yamnet_output[0][spec_result_index]:.3f})'
print(result_str)
results_np = np.array(results)
mean_results = results_np.mean(axis=0)
result_index = mean_results.argmax()
print(f'Mean result: {test_data.index_to_label[result_index]} -> {mean_results[result_index]}')
Explanation: You'll loop over all the split audio windows and apply the model to each one of them.
The model you've just trained has 2 outputs: The original YAMNet's output and the one you've just trained. This is important because the real world environment is more complicated than just bird sounds. You can use the YAMNet's output to filter out non relevant audio, for example, on the birds use case, if YAMNet is not classifying Birds or Animals, this might show that the output from your model might have an irrelevant classification.
Below both outputs are printed to make it easier to understand their relation. Most of the mistakes that your model makes are when YAMNet's prediction is not related to your domain (eg: birds).
End of explanation
models_path = './birds_models'
print(f'Exporting the TFLite model to {models_path}')
model.export(models_path, tflite_filename='my_birds_model.tflite')
Explanation: Exporting the model
The last step is exporting your model to be used on embedded devices or on the browser.
The export method exports both formats for you.
End of explanation
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Explanation: You can also export the SavedModel version for serving or using on a Python environment.
End of explanation |
15,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial I’ll explain how to build a simple working
Recurrent Neural Network in TensorFlow!
We will build a simple Echo-RNN that remembers the input sequence and then echoes it after a few time-steps. This will help us understand how
memory works
We are mapping two sequences!
What is an RNN?
It is short for “Recurrent Neural Network”, and is basically a neural
network that can be used when your data is treated as a sequence, where
the particular order of the data-points matter. More importantly, this
sequence can be of arbitrary length.
The most straight-forward example is perhaps a time-series of numbers,
where the task is to predict the next value given previous values. The
input to the RNN at every time-step is the current value as well as a
state vector which represents what the network has “seen” at time-steps
before. This state-vector is the encoded memory of the RNN, initially
set to zero.
Great paper on this
https
Step1: The figure below shows the input data-matrix, and the current batch batchX_placeholder
is in the dashed rectangle. As we will see later, this “batch window” is slided truncated_backprop_length
steps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3,
and total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code.
The series order index is shown as numbers in a few of the data-points.
Step2: As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represent a time-series with multiple entries at each step.
Step3: The fact that the training is done on three places simultaneously in our time-series, requires us to save three instances of states when propagating forward. That has already been accounted for, as you see that the init_state placeholder has batch_size rows.
Step4: Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch.
Step5: You may wonder the variable name truncated_backprop_length is supposed to mean. When a RNN is trained, it is actually treated as a deep neural network with reoccurring weights in every layer. These layers will not be unrolled to the beginning of time, that would be too computationally expensive, and are therefore truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch
Step6: The last line is adding the training functionality, TensorFlow will perform back-propagation for us automatically — the computation graph is executed once for each mini-batch and the network-weights are updated incrementally.
Notice the API call to sparse_softmax_cross_entropy_with_logits, it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “Sparse-softmax”, you can read more about it in the API. The usage is to havelogits is of shape [batch_size, num_classes] and labels of shape [batch_size].
Step7: There is a visualization function so we can se what’s going on in the network as we train. It will plot the loss over the time, show training input, training output and the current predictions by the network on different sample series in a training batch.
Step8: You can see that we are moving truncated_backprop_length steps forward on each iteration (line 15–19), but it is possible have different strides. This subject is further elaborated in this article. The downside with doing this is that truncated_backprop_length need to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might a lot of “misses”, as you can see on the figure below.
Step9: Time series of squares, the elevated black square symbolizes an echo-output, which is activated three steps from the echo input (black square). The sliding batch window is also striding three steps at each run, which in our sample case means that no batch will encapsulate the dependency, so it can not train.
The network will be able to exactly learn the echo behavior so there is no need for testing data.
The program will update the plot as training progresses, Blue bars denote a training input signal (binary one), red bars show echos in the training output and green bars are the echos the net is generating. The different bar plots show different sample series in the current batch. Fully trained at 100 epochs look like this | Python Code:
from IPython.display import Image
from IPython.core.display import HTML
from __future__ import print_function, division
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
Image(url= "https://cdn-images-1.medium.com/max/1600/1*UkI9za9zTR-HL8uM15Wmzw.png")
#hyperparams
num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
echo_step = 3
batch_size = 5
num_batches = total_series_length//batch_size//truncated_backprop_length
#Step 1 - Collect data
#Now generate the training data,
#the input is basically a random binary vector. The output will be the
#“echo” of the input, shifted echo_step steps to the right.
#Notice the reshaping of the data into a matrix with batch_size rows.
#Neural networks are trained by approximating the gradient of loss function
#with respect to the neuron-weights, by looking at only a small subset of the data,
#also known as a mini-batch.The reshaping takes the whole dataset and puts it into
#a matrix, that later will be sliced up into these mini-batches.
def generateData():
#0,1, 50K samples, 50% chance each chosen
x = np.array(np.random.choice(2, total_series_length, p=[0.5, 0.5]))
#shift 3 steps to the left
y = np.roll(x, echo_step)
#padd beginning 3 values with 0
y[0:echo_step] = 0
#Gives a new shape to an array without changing its data.
#The reshaping takes the whole dataset and puts it into a matrix,
#that later will be sliced up into these mini-batches.
x = x.reshape((batch_size, -1)) # The first index changing slowest, subseries as rows
y = y.reshape((batch_size, -1))
return (x, y)
data = generateData()
print(data)
#Schematic of the reshaped data-matrix, arrow curves shows adjacent time-steps that ended up on different rows.
#Light-gray rectangle represent a “zero” and dark-gray a “one”.
Image(url= "https://cdn-images-1.medium.com/max/1600/1*aFtwuFsboLV8z5PkEzNLXA.png")
#TensorFlow works by first building up a computational graph, that
#specifies what operations will be done. The input and output of this graph
#is typically multidimensional arrays, also known as tensors.
#The graph, or parts of it can then be executed iteratively in a
#session, this can either be done on the CPU, GPU or even a resource
#on a remote server.
#operations and tensors
#The two basic TensorFlow data-structures that will be used in this
#example are placeholders and variables. On each run the batch data
#is fed to the placeholders, which are “starting nodes” of the
#computational graph. Also the RNN-state is supplied in a placeholder,
#which is saved from the output of the previous run.
#Step 2 - Build the Model
#datatype, shape (5, 15) 2D array or matrix, batch size shape for later
batchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])
batchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
#and one for the RNN state, 5,4
init_state = tf.placeholder(tf.float32, [batch_size, state_size])
#The weights and biases of the network are declared as TensorFlow variables,
#which makes them persistent across runs and enables them to be updated
#incrementally for each batch.
#3 layer recurrent net, one hidden state
#randomly initialize weights
W = tf.Variable(np.random.rand(state_size+1, state_size), dtype=tf.float32)
#anchor, improves convergance, matrix of 0s
b = tf.Variable(np.zeros((1,state_size)), dtype=tf.float32)
W2 = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)
b2 = tf.Variable(np.zeros((1,num_classes)), dtype=tf.float32)
Explanation: In this tutorial I’ll explain how to build a simple working
Recurrent Neural Network in TensorFlow!
We will build a simple Echo-RNN that remembers the input sequence and then echoes it after a few time-steps. This will help us understand how
memory works
We are mapping two sequences!
What is an RNN?
It is short for “Recurrent Neural Network”, and is basically a neural
network that can be used when your data is treated as a sequence, where
the particular order of the data-points matter. More importantly, this
sequence can be of arbitrary length.
The most straight-forward example is perhaps a time-series of numbers,
where the task is to predict the next value given previous values. The
input to the RNN at every time-step is the current value as well as a
state vector which represents what the network has “seen” at time-steps
before. This state-vector is the encoded memory of the RNN, initially
set to zero.
Great paper on this
https://arxiv.org/pdf/1506.00019.pdf
End of explanation
Image(url= "https://cdn-images-1.medium.com/max/1600/1*n45uYnAfTDrBvG87J-poCA.jpeg")
#Now it’s time to build the part of the graph that resembles the actual RNN computation,
#first we want to split the batch data into adjacent time-steps.
# Unpack columns
#Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.
#so a bunch of arrays, 1 batch per time step
inputs_series = tf.unpack(batchX_placeholder, axis=1)
labels_series = tf.unpack(batchY_placeholder, axis=1)
Explanation: The figure below shows the input data-matrix, and the current batch batchX_placeholder
is in the dashed rectangle. As we will see later, this “batch window” is slid truncated_backprop_length
steps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3,
and total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code.
The series order index is shown as numbers in a few of the data-points.
End of explanation
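# Added illustration: the start/end column indices of the "batch window" slide by
# truncated_backprop_length at every step, exactly as in the training loop further down.
for example_batch_idx in range(3):
    example_start = example_batch_idx * truncated_backprop_length
    example_end = example_start + truncated_backprop_length
    print('batch %d covers columns %d to %d' % (example_batch_idx, example_start, example_end - 1))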
Image(url= "https://cdn-images-1.medium.com/max/1600/1*f2iL4zOkBUBGOpVE7kyajg.png")
#Schematic of the current batch split into columns, the order index is shown on each data-point
#and arrows show adjacent time-steps.
Explanation: As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represent a time-series with multiple entries at each step.
End of explanation
#Forward pass
#state placeholder
current_state = init_state
#series of states through time
states_series = []
#for each set of inputs
#forward pass through the network to get new state value
#store all states in memory
for current_input in inputs_series:
#format input
current_input = tf.reshape(current_input, [batch_size, 1])
#mix both state and input data
input_and_state_concatenated = tf.concat(1, [current_input, current_state]) # Increasing number of columns
#perform matrix multiplication between weights and input, add bias
    #squash with a nonlinearity, for probability value
next_state = tf.tanh(tf.matmul(input_and_state_concatenated, W) + b) # Broadcasted addition
#store the state in memory
states_series.append(next_state)
#set current state to next one
current_state = next_state
Explanation: The fact that the training is done on three places simultaneously in our time-series, requires us to save three instances of states when propagating forward. That has already been accounted for, as you see that the init_state placeholder has batch_size rows.
End of explanation
Image(url= "https://cdn-images-1.medium.com/max/1600/1*fdwNNJ5UOE3Sx0R_Cyfmyg.png")
Explanation: Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch.
End of explanation
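# Added numpy check of the claim above: concatenating [input, state] and multiplying by the
# stacked weights equals input*Wa + state*Wb computed separately. Names prefixed with _ are
# stand-ins for illustration only.
_x = np.random.rand(batch_size, 1)           # stand-in for current_input
_s = np.random.rand(batch_size, state_size)  # stand-in for current_state
_Wa = np.random.rand(1, state_size)
_Wb = np.random.rand(state_size, state_size)
_combined = np.concatenate([_x, _s], axis=1).dot(np.vstack([_Wa, _Wb]))
print(np.allclose(_combined, _x.dot(_Wa) + _s.dot(_Wb)))  # True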
#calculate loss
#second part of forward pass
#logits short for logistic transform
logits_series = [tf.matmul(state, W2) + b2 for state in states_series] #Broadcasted addition
#apply softmax nonlinearity for output probability
predictions_series = [tf.nn.softmax(logits) for logits in logits_series]
#measure loss, calculate softmax again on logits, then compute cross entropy
#measures the difference between two probability distributions
#this will return A Tensor of the same shape as labels and of the same type as logits
#with the softmax cross entropy loss.
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series,labels_series)]
#computes average, one value
total_loss = tf.reduce_mean(losses)
#use adagrad to minimize with .3 learning rate
#minimize it with adagrad, not SGD
#One downside of SGD is that it is sensitive to
#the learning rate hyper-parameter. When the data are sparse and features have
#different frequencies, a single learning rate for every weight update can have
#exponential regret.
#Some features can be extremely useful and informative to an optimization problem but
#they may not show up in most of the training instances or data. If, when they do show up,
#they are weighted equally in terms of learning rate as a feature that has shown up hundreds
#of times we are practically saying that the influence of such features means nothing in the
#overall optimization. it's impact per step in the stochastic gradient descent will be so small
#that it can practically be discounted). To counter this, AdaGrad makes it such that features
#that are more sparse in the data have a higher learning rate which translates into a larger
#update for that feature
#sparse features can be very useful.
#Each feature has a different learning rate which is adaptable.
#gives voice to the little guy who matters a lot
#weights that receive high gradients will have their effective learning rate reduced,
#while weights that receive small or infrequent updates will have their effective learning rate increased.
#great paper http://seed.ucsd.edu/mediawiki/images/6/6a/Adagrad.pdf
train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)
Explanation: You may wonder what the variable name truncated_backprop_length is supposed to mean. When an RNN is trained, it is actually treated as a deep neural network with recurring weights in every layer. These layers will not be unrolled to the beginning of time, that would be too computationally expensive, and are therefore truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch.
End of explanation
#visualizer
def plot(loss_list, predictions_series, batchX, batchY):
plt.subplot(2, 3, 1)
plt.cla()
plt.plot(loss_list)
for batch_series_idx in range(5):
one_hot_output_series = np.array(predictions_series)[:, batch_series_idx, :]
single_output_series = np.array([(1 if out[0] < 0.5 else 0) for out in one_hot_output_series])
plt.subplot(2, 3, batch_series_idx + 2)
plt.cla()
plt.axis([0, truncated_backprop_length, 0, 2])
left_offset = range(truncated_backprop_length)
plt.bar(left_offset, batchX[batch_series_idx, :], width=1, color="blue")
plt.bar(left_offset, batchY[batch_series_idx, :] * 0.5, width=1, color="red")
plt.bar(left_offset, single_output_series * 0.3, width=1, color="green")
plt.draw()
plt.pause(0.0001)
Explanation: The last line is adding the training functionality, TensorFlow will perform back-propagation for us automatically — the computation graph is executed once for each mini-batch and the network-weights are updated incrementally.
Notice the API call to sparse_softmax_cross_entropy_with_logits, it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “Sparse-softmax”; you can read more about it in the API. The usage is to have logits of shape [batch_size, num_classes] and labels of shape [batch_size].
End of explanation
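# Added numpy sketch of what the sparse softmax cross-entropy computes, to make the expected
# shapes concrete: logits are [batch_size, num_classes], integer labels are [batch_size].
_logits = np.random.rand(batch_size, num_classes)
_labels = np.random.randint(0, num_classes, size=batch_size)
_softmax = np.exp(_logits) / np.exp(_logits).sum(axis=1, keepdims=True)
_xent = -np.log(_softmax[np.arange(batch_size), _labels])
print(_xent.shape)  # one loss value per sample: (batch_size,)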
#Step 3 Training the network
with tf.Session() as sess:
#we stupidly have to do this everytime, it should just know
#that we initialized these vars. v2 guys, v2..
sess.run(tf.initialize_all_variables())
#interactive mode
plt.ion()
#initialize the figure
plt.figure()
#show the graph
plt.show()
#to show the loss decrease
loss_list = []
for epoch_idx in range(num_epochs):
#generate data at eveery epoch, batches run in epochs
x,y = generateData()
#initialize an empty hidden state
_current_state = np.zeros((batch_size, state_size))
print("New data, epoch", epoch_idx)
#each batch
for batch_idx in range(num_batches):
#starting and ending point per batch
#since weights reoccuer at every layer through time
#These layers will not be unrolled to the beginning of time,
#that would be too computationally expensive, and are therefore truncated
#at a limited number of time-steps
start_idx = batch_idx * truncated_backprop_length
end_idx = start_idx + truncated_backprop_length
batchX = x[:,start_idx:end_idx]
batchY = y[:,start_idx:end_idx]
#run the computation graph, give it the values
#we calculated earlier
_total_loss, _train_step, _current_state, _predictions_series = sess.run(
[total_loss, train_step, current_state, predictions_series],
feed_dict={
batchX_placeholder:batchX,
batchY_placeholder:batchY,
init_state:_current_state
})
loss_list.append(_total_loss)
if batch_idx%100 == 0:
print("Step",batch_idx, "Loss", _total_loss)
plot(loss_list, _predictions_series, batchX, batchY)
plt.ioff()
plt.show()
Explanation: There is a visualization function so we can see what's going on in the network as we train. It will plot the loss over time, show training input, training output and the current predictions by the network on different sample series in a training batch.
End of explanation
Image(url= "https://cdn-images-1.medium.com/max/1600/1*uKuUKp_m55zAPCzaIemucA.png")
Explanation: You can see that we are moving truncated_backprop_length steps forward on each iteration (line 15–19), but it is possible to have different strides. This subject is further elaborated in this article. The downside with doing this is that truncated_backprop_length needs to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might be a lot of “misses”, as you can see on the figure below.
End of explanation
Image(url= "https://cdn-images-1.medium.com/max/1600/1*ytquMdmGMJo0-3kxMCi1Gg.png")
Explanation: Time series of squares, the elevated black square symbolizes an echo-output, which is activated three steps from the echo input (black square). The sliding batch window is also striding three steps at each run, which in our sample case means that no batch will encapsulate the dependency, so it can not train.
The network will be able to exactly learn the echo behavior so there is no need for testing data.
The program will update the plot as training progresses. Blue bars denote a training input signal (binary one), red bars show echoes in the training output and green bars are the echoes the net is generating. The different bar plots show different sample series in the current batch. Fully trained at 100 epochs it looks like this
End of explanation |
15,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of Boolean operators
Think about
Step2: Thinking about how results work
Look at truth tables to understand how values can be combined for these binary operators
Step5: Black Jack
Here is a simple implementation of some black jack related functions.
Step8: Studio | Python Code:
True and False
True or False and False
False and False
print((True or False) and False)
print(True or (False and False))
print(not False)
print(not True)
Explanation: Examples of Boolean operators
Think about:
What operators exist
What these operators can be used on
The precedence of these operators
End of explanation
def factorial(number):
    """Return the factorial of a passed number.
    e.g. if 5 is passed return: 5 * 4 * 3 * 2 * 1
    """
if number == 0:
return 1
return number * factorial(number -1)
import math
print(factorial(5))
print(math.factorial(5))
Explanation: Thinking about how results work
Look at truth tables to understand how values can be combined for these binary operators: https://en.wikipedia.org/wiki/Truth_table#Logical_conjunction_.28AND.29
Also understand the not unary operator:
https://en.wikipedia.org/wiki/Truth_table#Logical_negation
End of explanation
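# Added for illustration: a compact truth table for the binary boolean operators discussed above.
import itertools
for a, b in itertools.product([True, False], repeat=2):
    print(a, b, '| and:', a and b, '| or:', a or b)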
import random
def dealer_will_hit(cards_value):
    """Returns true if the dealer must take a hit."""
if cards_value <= 16:
return True
return False
def should_i_hit(dealer_showing_value, my_cards_value):
    """Returns true if you should take a hit."""
# Hit because you've got the space
if my_cards_value <= 11:
return True
if my_cards_value >= 17 and my_cards_value <= 19:
# Assume the dealer will bust
if dealer_showing_value < 5:
return False
# Assume the dealer will beat you if you don't hit
elif dealer_showing_value >= 7:
return True
# In all other cases see how it works out.
return False
def get_card():
card_values = [1,2,3,4,5,6,7,8,9,10,10,10,10]
return random.choice(card_values)
def print_who_won(dealer_cards_value, my_cards_value):
if my_cards_value > 21:
print("Dealer won I busted")
elif dealer_cards_value > 21:
print("I won because dealer busted")
elif dealer_cards_value > my_cards_value:
print("Dealer won with higher value cards")
elif my_cards_value > dealer_cards_value:
print("I won with higher value cards")
else:
print("We tied! No one loses.")
# Setup the game
dealer_showing_card = get_card()
dealer_card_values = dealer_showing_card + get_card()
player_cards_value = get_card() + get_card()
# Check to see if the player should hit
# Only going to hit once
if should_i_hit(dealer_showing_card, player_cards_value):
player_cards_value += get_card()
# Same the dealer will only hit once
if dealer_will_hit(dealer_card_values):
dealer_card_values += get_card()
print("Dealer ended with: ", dealer_card_values)
print("I ended with: ", player_cards_value)
print_who_won(dealer_card_values, player_cards_value)
def no_return():
print('hi')
def has_return():
return 'hi'
print(has_return() + ' Hello')
print(type(no_return()))
print(no_return() + ' Hello')
x = 8
if x < 15:
print('less than 15')
if x < 13:
print('less than 13')
if x < 9:
print('less than 9')
print('always printed if less than 15')
else:
print('bye')
print('super default')
print(' hi')
print('\t\t\thi')
Explanation: Black Jack
Here is a simple implementation of some black jack related functions.
End of explanation
temp = [0, 1, 5, 3, 10]
some_number = 11
def contains_n(list_of_numbers, test_number):
    """Tests if the passed `test_number` is equal
    to a value inside `list_of_numbers`.
    """
for number in list_of_numbers:
# executions
#
# number = 0
# number = 1
# number = 5
# number = 3
# number = 10
if number == test_number:
return True
return False
def contains_type_and_value_n(numbers, n):
    """Tests if the passed `test_number` is equal
    to a value inside `list_of_numbers` as well
    as the value's type.
    """
for num in numbers:
if num == n and type(num) == type(n):
return True
return False
def contains_number_of_n(numbers, n):
count = 0
for num in numbers:
if num == n:
count += 1
return count
def contains_number_of_type_n(numbers, n):
count = 0
for num in numbers:
if num == n and type(num) == type(n):
count += 1
return count
print(contains_n([5, 7, 9, 10], 6))
print(contains_type_and_value_n([5, 6.2, 5.8, 9], 5))
print(contains_number_of_n([5, 7, 9, 10, 9], 9))
print(contains_number_of_type_n([5, 7, 9, 10.0, 9], 10))
print(type(10))
print(type(10.0000))
def contains(numbers, n):
return n in numbers
def count(numbers, n):
return numbers.count(n)
numbers = [2, 3.0, 3, 5.0, 6.666, 4]
value = 4.0
value2 = 5.0
print(
contains(numbers, value),
contains(numbers, value),
'\n',
contains(numbers, value2),
contains(numbers, value2)
)
# in operator: https://docs.python.org/3.4/reference/expressions.html#in
Explanation: Studio: contains_n
End of explanation |
15,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: The Scenario
Imagine we have a function that takes in some external API or database and we want to test that function, but with fake (or mocked) inputs. The Python mock library lets us do that.
For this tutorial pretend that math.exp is some expensive operation (e.g. database query, API call, etc) that costs \$10,000 every time we use it. To test it without paying \$10,000, we can create mock_function which imitates the behavior of math.exp and allows us to test it.
Create The Mock Function
Step2: Create A Unit Test
Step3: Run Unit Test | Python Code:
import unittest
import mock
from math import exp
Explanation: Title: Mocking Functions
Slug: mocking_functions
Summary: Mocking Functions in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Testing
Authors: Chris Albon
Interested in learning more? Here are some good books on unit testing in Python: Python Testing: Beginner's Guide and Python Testing Cookbook.
Preliminaries
End of explanation
# Create a function,
def mock_function(x):
# That returns a string.
return 'This is not exp, but rather mock_function.'
Explanation: The Scenario
Imagine we have a function that takes in some external API or database and we want to test that function, but with fake (or mocked) inputs. The Python mock library lets us do that.
For this tutorial pretend that math.exp is some expensive operation (e.g. database query, API call, etc) that costs \$10,000 every time we use it. To test it without paying \$10,000, we can create mock_function which imitates the behavior of math.exp and allows us to test it.
Create The Mock Function
End of explanation
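# Added sketch: the same patch can also be applied as a context manager instead of a decorator.
with mock.patch('__main__.exp', side_effect=mock_function):
    print(exp(4))   # -> 'This is not exp, but rather mock_function.'
print(exp(4))       # outside the patch, the real math.exp is back (~54.598)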
# Create a test case,
class TestRandom(unittest.TestCase):
# where math.exp (__main__.exp is because we imported the exp module from math)
# math.exp is mocked (replaced) by mock_function,
@mock.patch('__main__.exp', side_effect=mock_function)
# now create a unit test that would only be true IF the exp(4) was being mocked
# (so we can prove that math.exp is actually being mocked)
def test_math_exp(self, mock_function):
# assert that math.exp(4) is actually a string, which would only be the case
# if math.exp was being mocked by mock_function
assert exp(4) == 'This is not exp, but rather mock_function.'
Explanation: Create A Unit Test
End of explanation
unittest.main(argv=['ignored', '-v'], exit=False)
Explanation: Run Unit Test
End of explanation |
15,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Merge
Concat
Join
Append
Step1: concat()
documentation
Step2: concatenate first and last elements
append()
documentation | Python Code:
import pandas as pd
import numpy as np
starting_date = '20160701'
sample_numpy_data = np.array(np.arange(24)).reshape((6,4))
dates_index = pd.date_range(starting_date, periods=6)
sample_df = pd.DataFrame(sample_numpy_data, index=dates_index, columns=list('ABCD'))
sample_df_2 = sample_df.copy()
sample_df_2['Fruits'] = ['apple', 'orange','banana','strawberry','blueberry','pineapple']
sample_series = pd.Series([1,2,3,4,5,6], index=pd.date_range(starting_date, periods=6))
sample_df_2['Extra Data'] = sample_series *3 +1
second_numpy_array = np.array(np.arange(len(sample_df_2))) *100 + 7
sample_df_2['G'] = second_numpy_array
sample_df_2
Explanation: Merge
Concat
Join
Append
End of explanation
pieces = [sample_df_2[:2], sample_df_2[2:4], sample_df_2[4:]]
pieces
Explanation: concat()
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html
separate data frame into a list with 3 elements
End of explanation
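# Added sketch: pd.concat stitches the list of pieces back into a single DataFrame,
# which should match the original sample_df_2.
reassembled = pd.concat(pieces)
print(reassembled.equals(sample_df_2))  # expected: True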
left = pd.DataFrame({'my_key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'my_key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
Explanation: concatenate first and last elements
append()
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html
merge()
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html
Merge DataFrame objects by performing a database-style join operation by columns or indexes.
If joining columns on columns, the DataFrame indexes will be ignored. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on.
End of explanation |
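# Added sketch: the merge itself is not shown above, so here is a minimal database-style
# join of the two frames on their shared 'my_key' column.
merged = pd.merge(left, right, on='my_key')
print(merged)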
15,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Systemic Velocity
NOTE
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now we'll create empty lc, rv, orb, and mesh datasets. We'll then look to see how the systemic velocity (vgamma) affects the observables in each of these datasets, and how those are also affected by light-time effects (ltte).
To see the effects over long times, we'll compute one cycle starting at t=0, and another in the distant future.
Step3: Changing Systemic Velocity and LTTE
By default, vgamma is initially set to 0.0
Step4: We'll leave it set at 0.0 for now, and then change vgamma to see how that affects the observables.
The other relevant parameter here is t0 - that is the time at which all quantities are provided, the time at which nbody integration would start (if applicable), and the time at which the center-of-mass of the system is defined to be at (0,0,0). Unless you have a reason to do otherwise, it makes sense to have this value near the start of your time data... so if we don't have any other changing quantities defined in our system and are using BJDs, we would want to set this to be non-zero. In this case, our times all start at 0, so we'll leave t0 at 0 as well.
Step5: The option to enable or disable LTTE are in the compute options, we can either set ltte or we can just temporarily pass a value when we call run_compute.
Step6: Let's first compute the model with 0 systemic velocity and ltte=False (not that it would matter in this case). Let's also name the model so we can keep track of what settings were used.
Step7: For our second model, we'll set a somewhat ridiculous value for the systemic velocity (so that the effects are exaggerated and clearly visible over one orbit), but leave ltte off.
Step8: Lastly, let's leave this value of vgamma, but enable light-time effects.
Step9: Influence on Light Curves (fluxes)
Now let's compare the various models across all our different datasets.
Let's set the colors so that all figures will have systemic velocity shown in blue, systemic velocity with ltte=False in red, and systemic velocity with ltte=True in green.
Step10: In each of the figures below, the first panel will be the first cycle (days 0-1) and the second panel will be 90 days later (days 90-91).
Without light-time effects, the light curve remains unchanged by the introduction of a systemic velocity (blue and red overlap each other). However, once ltte is enabled, the time between two eclipses (ie the observed period of the system) changes. This occurs because the path between the system and observer has changed. This is an important effect to note - the period parameter sets the TRUE period of the system, not necessarily the observed period between two successive eclipses.
Step11: Influence on Radial Velocities
Radial velocities are perhaps the most logical observable in the case of systemic velocities. Introducing a non-zero value for vgamma simply offsets the observed values.
Light-time will have a similar effect on RVs as it does on LCs - it simply changes the observed period.
Step12: Influence on Orbits (positions, velocities)
In the orbit, the addition of a systemic velocity affects both the positions and velocities. So if we plot the orbits from above (u-w plane) we can see the orbit spiral in the w-direction. Note that this actually shows the barycenter of the orbit moving - and it was only at 0,0,0 at t0. This also stresses the importance of using a reasonable t0 - here 90 days later, the barycenter has moved significantly from the center of the coordinate system.
Step13: Plotting the w-velocities with respect to time would show the same as the RVs, except without any Rossiter-McLaughlin like effects. Note however the flip in w-convention between vw and radial velocities (+w is defined as towards the observer to make a right-handed system, but by convention +rv is a red shift).
Step14: Now let's look at the effect that enabling ltte has on these same plots.
Step15: Influence on Meshes
Step16: As you can see, since the center of mass of the system was at 0,0,0 at t0 - including systemic velocity actually shows the system spiraling towards or away from the observer (who is in the positive w direction). In other words - the positions of the meshes are affected in the same way as the orbits (note the offset on the ylimit scales).
In addition, the actual values of vw and rv in the meshes are adjusted to include the systemic velocity. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
Explanation: Systemic Velocity
NOTE: the definition of the systemic velocity has been flipped between 2.0.x and 2.1.0+ to adhere to usual conventions. If importing a file from PHOEBE 2.0.x, the value should be flipped automatically, but if adopting an old script with non-zero systemic velocity, make sure the sign is correct.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
times1 = np.linspace(0,1,201)
times2 = np.linspace(90,91,201)
b.add_dataset('lc', times=times1, dataset='lc1')
b.add_dataset('lc', times=times2, dataset='lc2')
b.add_dataset('rv', times=times1, dataset='rv1')
b.add_dataset('rv', times=times2, dataset='rv2')
b.add_dataset('orb', times=times1, dataset='orb1')
b.add_dataset('orb', times=times2, dataset='orb2')
b.add_dataset('mesh', times=[0], dataset='mesh1', columns=['vws'])
b.add_dataset('mesh', times=[90], dataset='mesh2', columns=['vws'])
Explanation: Now we'll create empty lc, rv, orb, and mesh datasets. We'll then look to see how the systemic velocity (vgamma) affects the observables in each of these datasets, and how those are also affected by light-time effects (ltte).
To see the effects over long times, we'll compute one cycle starting at t=0, and another in the distant future.
End of explanation
b['vgamma@system']
Explanation: Changing Systemic Velocity and LTTE
By default, vgamma is initially set to 0.0
End of explanation
b['t0@system']
Explanation: We'll leave it set at 0.0 for now, and then change vgamma to see how that affects the observables.
The other relevant parameter here is t0 - that is the time at which all quantities are provided, the time at which nbody integration would start (if applicable), and the time at which the center-of-mass of the system is defined to be at (0,0,0). Unless you have a reason to do otherwise, it makes sense to have this value near the start of your time data... so if we don't have any other changing quantities defined in our system and are using BJDs, we would want to set this to be non-zero. In this case, our times all start at 0, so we'll leave t0 at 0 as well.
End of explanation
b['ltte@compute']
Explanation: The option to enable or disable LTTE are in the compute options, we can either set ltte or we can just temporarily pass a value when we call run_compute.
End of explanation
b.run_compute(irrad_method='none', model='0_false')
Explanation: Let's first compute the model with 0 systemic velocity and ltte=False (not that it would matter in this case). Let's also name the model so we can keep track of what settings were used.
End of explanation
b['vgamma@system'] = 100
b.run_compute(irrad_method='none', model='100_false')
Explanation: For our second model, we'll set a somewhat ridiculous value for the systemic velocity (so that the effects are exaggerated and clearly visible over one orbit), but leave ltte off.
End of explanation
b.run_compute(irrad_method='none', ltte=True, model='100_true')
Explanation: Lastly, let's leave this value of vgamma, but enable light-time effects.
End of explanation
colors = {'0_false': 'b', '100_false': 'r', '100_true': 'g'}
Explanation: Influence on Light Curves (fluxes)
Now let's compare the various models across all our different datasets.
Let's set the colors so that all figures will have systemic velocity shown in blue, systemic velocity with ltte=False in red, and systemic velocity with ltte=True in green.
End of explanation
afig, mplfig = b['lc'].plot(c=colors, linestyle='solid',
axorder={'lc1': 0, 'lc2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: In each of the figures below, the first panel will be the first cycle (days 0-1) and the second panel will be 90 days later (days 90-91).
Without light-time effects, the light curve remains unchanged by the introduction of a systemic velocity (blue and red overlap each other). However, once ltte is enabled, the time between two eclipses (ie the observed period of the system) changes. This occurs because the path between the system and observer has changed. This is an important effect to note - the period parameter sets the TRUE period of the system, not necessarily the observed period between two successive eclipses.
End of explanation
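# Added back-of-the-envelope check (assumed numbers, not part of the original tutorial): each cycle
# the system recedes by roughly vgamma * P, so successive eclipses arrive about vgamma * P / c later.
c_km_s = 299792.458        # speed of light in km/s
vgamma_km_s = 100.0        # the value set above
P_days = 1.0               # assumed orbital period of the default binary
delay_per_cycle_s = vgamma_km_s * P_days * 86400 / c_km_s
print('%.1f s extra delay per cycle, ~%.0f min accumulated after 90 cycles'
      % (delay_per_cycle_s, delay_per_cycle_s * 90 / 60.))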
afig, mplfig = b['rv'].plot(c=colors, linestyle='solid',
axorder={'rv1': 0, 'rv2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: Influence on Radial Velocities
Radial velocities are perhaps the most logical observable in the case of systemic velocities. Introducing a non-zero value for vgamma simply offsets the observed values.
Light-time will have a similar effect on RVs as it does on LCs - it simply changes the observed period.
End of explanation
afig, mplfig = b.filter(kind='orb', model=['0_false', '100_false']).plot(x='us', y='ws',
c=colors, linestyle='solid',
axorder={'orb1': 0, 'orb2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: Influence on Orbits (positions, velocities)
In the orbit, the addition of a systemic velocity affects both the positions and velocities. So if we plot the orbits from above (u-w plane) we can see the orbit spiral in the w-direction. Note that this actually shows the barycenter of the orbit moving - and it was only at 0,0,0 at t0. This also stresses the importance of using a reasonable t0 - here 90 days later, the barycenter has moved significantly from the center of the coordinate system.
End of explanation
afig, mplfig = b.filter(kind='orb', model=['0_false', '100_false']).plot(x='times', y='vws',
c=colors, linestyle='solid',
axorder={'orb1': 0, 'orb2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: Plotting the w-velocities with respect to time would show the same as the RVs, except without any Rossiter-McLaughlin like effects. Note however the flip in w-convention between vw and radial velocities (+w is defined as towards the observer to make a right-handed system, but by convention +rv is a red shift).
End of explanation
afig, mplfig = b.filter(kind='orb', model=['100_false', '100_true']).plot(x='us', y='ws',
c=colors, linestyle='solid',
axorder={'orb1': 0, 'orb2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
afig, mplfig = b.filter(kind='orb', model=['100_false', '100_true']).plot(x='times', y='vws',
c=colors, linestyle='solid',
axorder={'orb1': 0, 'orb2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: Now let's look at the effect that enabling ltte has on these same plots.
End of explanation
afig, mplfig = b.filter(kind='mesh', model=['0_false', '100_false']).plot(x='us', y='ws',
axorder={'mesh1': 0, 'mesh2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
afig, mplfig = b.filter(kind='mesh', model=['100_false', '100_true']).plot(x='us', y='ws',
axorder={'mesh1': 0, 'mesh2': 1},
subplot_grid=(1,2), tight_layout=True, show=True)
Explanation: Influence on Meshes
End of explanation
b['primary@mesh1@0_false'].get_value('vws', time=0.0)[:5]
b['primary@mesh1@100_false'].get_value('vws', time=0.0)[:5]
b['primary@mesh1@100_true'].get_value('vws', time=0.0)[:5]
Explanation: As you can see, since the center of mass of the system was at 0,0,0 at t0 - including systemic velocity actually shows the system spiraling towards or away from the observer (who is in the positive w direction). In other words - the positions of the meshes are affected in the same way as the orbits (note the offset on the ylimit scales).
In addition, the actual values of vw and rv in the meshes are adjusted to include the systemic velocity.
End of explanation |
15,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google form analysis tests
Table of Contents
'Google form analysis' functions checks
Google form loading
Selection of a question
Selection of a user's answers
checking answers
comparison of checkpoints completion and answers
answers submitted through time
merge English and French answers
Step1: 'Google form analysis' functions checks
<a id=funcchecks />
copy-paste for unit tests
(userIdThatDidNotAnswer)
(userId1AnswerEN)
(userIdAnswersEN)
(userId1ScoreEN)
(userIdScoresEN)
(userId1AnswerFR)
(userIdAnswersFR)
(userId1ScoreFR)
(userIdScoresFR)
(userIdAnswersENFR)
getAllResponders
Step2: hasAnswered
Step3: getAnswers
Step4: getCorrections
Step5: getScore
Step6: code to explore scores
for userId in gform['userId'].values
Step7: getValidatedCheckpoints
Step8: getNonValidated
Step9: getNonValidatedCheckpoints
Step10: getValidatedCheckpointsCounts
Step11: getNonValidatedCheckpointsCounts
Step12: getAllAnswerRows
Step13: getPercentCorrectPerColumn
tested through getPercentCorrectKnowingAnswer
getPercentCorrectKnowingAnswer
Step14: Google form loading
<a id=gformload />
Step15: Selection of a question
<a id=selquest />
Step16: Selection of a user's answers
<a id=selusans />
userIdThatDidNotAnswer
userId1AnswerEN
userIdAnswersEN
userId1AnswerFR
userIdAnswersFR
userIdAnswersENFR
getUniqueUserCount tinkering
Step17: getAllRespondersGFormGUID tinkering
Step18: getRandomGFormGUID tinkering
Step19: getAnswers tinkering
Step20: answer selection
Step21: checking answers
<a id=checkans />
Step22: getTemporality tinkering
Step23: getTemporality tinkering
Step24: getTestAnswers tinkering
Step25: getCorrections tinkering
Step26: getCorrections extensions tinkering
Step27: getBinarizedCorrections tinkering
Step28: getBinarized tinkering
Step29: getAllBinarized tinkering
Step30: plotCorrelationMatrix tinkering
Step31: data = transposed[[0,1]]
data.corr(method = 'spearman')
Step32: getCrossCorrectAnswers tinkering
Before
Step33: after
Step34: getScore tinkering
Step35: comparison of checkpoints completion and answers
<a id=compcheckans />
Theoretically, they should match. Whoever understood an item should beat the matching challenge. The discrepancies are due to game design or level design.
getValidatedCheckpoints tinkering
Step36: getNonValidated tinkering
Step37: getAllAnswerRows tinkering
Step38: getPercentCorrectPerColumn tinkering
Step39: getPercentCorrectKnowingAnswer tinkering
Step40: tests on all user Ids, including those who answered more than once
Step41: answers submitted through time
<a id=ansthrutime />
merging answers in English and French
<a id=mergelang />
tests
Step42: add language column
Scores will be evaluated per language
Step43: concatenate
Step44: getEventCountRatios tinkering
Step45: setAnswerTemporalities2 tinkering / Temporalities analysis
Step46: question types | Python Code:
%run "../Functions/2. Google form analysis.ipynb"
# Localplayerguids of users who answered the questionnaire (see below).
# French
#localplayerguid = 'a4d4b030-9117-4331-ba48-90dc05a7e65a'
#localplayerguid = 'd6826fd9-a6fc-4046-b974-68e50576183f'
#localplayerguid = 'deb089c0-9be3-4b75-9b27-28963c77b10c'
#localplayerguid = '75e264d6-af94-4975-bb18-50cac09894c4'
#localplayerguid = '3d733347-0313-441a-b77c-3e4046042a53'
# English
localplayerguid = '8d352896-a3f1-471c-8439-0f426df901c1'
#localplayerguid = '7037c5b2-c286-498e-9784-9a061c778609'
#localplayerguid = '5c4939b5-425b-4d19-b5d2-0384a515539e'
#localplayerguid = '7825d421-d668-4481-898a-46b51efe40f0'
#localplayerguid = 'acb9c989-b4a6-4c4d-81cc-6b5783ec71d8'
#localplayerguid = devPCID5
Explanation: Google form analysis tests
Table of Contents
'Google form analysis' functions checks
Google form loading
Selection of a question
Selection of a user's answers
checking answers
comparison of checkpoints completion and answers
answers submitted through time
merge English and French answers
End of explanation
len(getAllResponders())
Explanation: 'Google form analysis' functions checks
<a id=funcchecks />
copy-paste for unit tests
(userIdThatDidNotAnswer)
(userId1AnswerEN)
(userIdAnswersEN)
(userId1ScoreEN)
(userIdScoresEN)
(userId1AnswerFR)
(userIdAnswersFR)
(userId1ScoreFR)
(userIdScoresFR)
(userIdAnswersENFR)
getAllResponders
End of explanation
userIdThatDidNotAnswer in gform['userId'].values, hasAnswered( userIdThatDidNotAnswer )
assert(not hasAnswered( userIdThatDidNotAnswer )), "User has NOT answered"
assert(hasAnswered( userId1AnswerEN )), "User HAS answered"
assert(hasAnswered( userIdAnswersEN )), "User HAS answered"
assert(hasAnswered( userId1AnswerFR )), "User HAS answered"
assert(hasAnswered( userIdAnswersFR )), "User HAS answered"
assert(hasAnswered( userIdAnswersENFR )), "User HAS answered"
Explanation: hasAnswered
End of explanation
assert (len(getAnswers( userIdThatDidNotAnswer ).columns) == 0),"Too many answers"
assert (len(getAnswers( userId1AnswerEN ).columns) == 1),"Too many answers"
assert (len(getAnswers( userIdAnswersEN ).columns) >= 2),"Not enough answers"
assert (len(getAnswers( userId1AnswerFR ).columns) == 1),"Not enough columns"
assert (len(getAnswers( userIdAnswersFR ).columns) >= 2),"Not enough answers"
assert (len(getAnswers( userIdAnswersENFR ).columns) >= 2),"Not enough answers"
Explanation: getAnswers
End of explanation
assert (len(getCorrections( userIdThatDidNotAnswer ).columns) == 0),"Too many answers"
assert (len(getCorrections( userId1AnswerEN ).columns) == 2),"Too many answers"
assert (len(getCorrections( userIdAnswersEN ).columns) >= 4),"Not enough answers"
assert (len(getCorrections( userId1AnswerFR ).columns) == 2),"Too many answers"
assert (len(getCorrections( userIdAnswersFR ).columns) >= 4),"Not enough answers"
assert (len(getCorrections( userIdAnswersENFR ).columns) >= 4),"Not enough answers"
Explanation: getCorrections
End of explanation
assert (len(pd.DataFrame(getScore( userIdThatDidNotAnswer ).values.flatten().tolist()).values.flatten().tolist()) == 0),"Too many answers"
score = getScore( userId1AnswerEN )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score[answerTemporalities[0]][0][0] == 0
),"Incorrect score"
Explanation: getScore
End of explanation
score = getScore( userIdAnswersEN )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score[answerTemporalities[0]][0][0] == 5
and
score[answerTemporalities[1]][0][0] == 25
),"Incorrect score"
score = getScore( userId1AnswerFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score[answerTemporalities[0]][0][0] == 23
),"Incorrect score"
score = getScore( userIdAnswersFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score[answerTemporalities[0]][0][0] == 15
and
score[answerTemporalities[1]][0][0] == 26
),"Incorrect score"
score = getScore( userIdAnswersENFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score[answerTemporalities[0]][0][0] == 4
and
score[answerTemporalities[1]][0][0] == 13
),"Incorrect score"
Explanation: code to explore scores
for userId in gform['userId'].values:
score = getScore( userId )
pretestScore = score[answerTemporalities[0]][0]
posttestScore = score[answerTemporalities[1]][0]
if len(pretestScore) == 1 and len(posttestScore) == 0 and 0 != pretestScore[0]:
#gform[gform['userId']]
print(userId + ': ' + str(pretestScore[0]))
End of explanation
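# Runnable version of the exploration sketched in the markdown above (same logic, kept as
# an optional aside; it simply prints users with a non-zero pretest score and no posttest score).
for userId in gform['userId'].values:
    score = getScore(userId)
    pretestScore = score[answerTemporalities[0]][0]
    posttestScore = score[answerTemporalities[1]][0]
    if len(pretestScore) == 1 and len(posttestScore) == 0 and 0 != pretestScore[0]:
        print(userId + ': ' + str(pretestScore[0]))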
objective = 0
assert (len(getValidatedCheckpoints( userIdThatDidNotAnswer )) == objective),"Incorrect number of answers"
objective = 1
assert (len(getValidatedCheckpoints( userId1AnswerEN )) == objective),"Incorrect number of answers"
assert (getValidatedCheckpoints( userId1AnswerEN )[0].equals(validableCheckpoints)) \
, "User has validated everything"
objective = 2
assert (len(getValidatedCheckpoints( userIdAnswersEN )) == objective),"Incorrect number of answers"
objective = 3
assert (len(getValidatedCheckpoints( userIdAnswersEN )[0]) == objective) \
, "User has validated " + objective + " chapters on first try"
objective = 1
assert (len(getValidatedCheckpoints( userId1AnswerFR )) == objective),"Incorrect number of answers"
assert (getValidatedCheckpoints( userId1AnswerFR )[0].equals(validableCheckpoints)) \
, "User has validated everything"
objective = 2
assert (len(getValidatedCheckpoints( userIdAnswersFR )) == objective),"Incorrect number of answers"
objective = 5
assert (len(getValidatedCheckpoints( userIdAnswersFR )[1]) == objective) \
, "User has validated " + objective + " chapters on second try"
objective = 2
assert (len(getValidatedCheckpoints( userIdAnswersENFR )) == objective),"Incorrect number of answers"
objective = 5
assert (len(getValidatedCheckpoints( userIdAnswersENFR )[1]) == objective) \
, "User has validated " + objective + " chapters on second try"
Explanation: getValidatedCheckpoints
End of explanation
getValidatedCheckpoints( userIdThatDidNotAnswer )
pd.Series(getValidatedCheckpoints( userIdThatDidNotAnswer ))
type(getNonValidated(pd.Series(getValidatedCheckpoints( userIdThatDidNotAnswer ))))
validableCheckpoints
assert(getNonValidated(getValidatedCheckpoints( userIdThatDidNotAnswer ))).equals(validableCheckpoints), \
"incorrect validated checkpoints: should contain all checkpoints that can be validated"
testSeries = pd.Series(
[
'', # 7
'', # 8
'', # 9
'', # 10
'tutorial1.Checkpoint00', # 11
'tutorial1.Checkpoint00', # 12
'tutorial1.Checkpoint00', # 13
'tutorial1.Checkpoint00', # 14
'tutorial1.Checkpoint02', # 15
'tutorial1.Checkpoint01', # 16
'tutorial1.Checkpoint05'
]
)
assert(getNonValidated(pd.Series([testSeries]))[0][0] == 'tutorial1.Checkpoint13'), "Incorrect non validated checkpoint"
Explanation: getNonValidated
End of explanation
getNonValidatedCheckpoints( userIdThatDidNotAnswer )
getNonValidatedCheckpoints( userId1AnswerEN )
getNonValidatedCheckpoints( userIdAnswersEN )
getNonValidatedCheckpoints( userId1AnswerFR )
getNonValidatedCheckpoints( userIdAnswersFR )
getNonValidatedCheckpoints( userIdAnswersENFR )
Explanation: getNonValidatedCheckpoints
End of explanation
getValidatedCheckpointsCounts(userIdThatDidNotAnswer)
getValidatedCheckpointsCounts(userId1AnswerEN)
getValidatedCheckpointsCounts(userIdAnswersEN)
getValidatedCheckpointsCounts(userId1ScoreEN)
getValidatedCheckpointsCounts(userIdScoresEN)
getValidatedCheckpointsCounts(userId1AnswerFR)
getValidatedCheckpointsCounts(userIdAnswersFR)
getValidatedCheckpointsCounts(userId1ScoreFR)
getValidatedCheckpointsCounts(userIdScoresFR)
getValidatedCheckpointsCounts(userIdAnswersENFR)
Explanation: getValidatedCheckpointsCounts
End of explanation
getNonValidatedCheckpointsCounts(userIdThatDidNotAnswer)
getNonValidatedCheckpointsCounts(userId1AnswerEN)
getNonValidatedCheckpointsCounts(userIdAnswersEN)
getNonValidatedCheckpointsCounts(userId1ScoreEN)
getNonValidatedCheckpointsCounts(userIdScoresEN)
getNonValidatedCheckpointsCounts(userId1AnswerFR)
getNonValidatedCheckpointsCounts(userIdAnswersFR)
getNonValidatedCheckpointsCounts(userId1ScoreFR)
getNonValidatedCheckpointsCounts(userIdScoresFR)
getNonValidatedCheckpointsCounts(userIdAnswersENFR)
Explanation: getNonValidatedCheckpointsCounts
End of explanation
aYes = ["Yes", "Oui"]
aNo = ["No", "Non"]
aNoIDK = ["No", "Non", "I don't know", "Je ne sais pas"]
# How long have you studied biology?
qBiologyEducationLevelIndex = 5
aBiologyEducationLevelHigh = ["Until bachelor's degree", "Jusqu'à la license"]
aBiologyEducationLevelLow = ['Until the end of high school', 'Until the end of middle school', 'Not even in middle school',
                             "Jusqu'au bac", "Jusqu'au brevet", 'Jamais']
# Have you ever heard about BioBricks?
qHeardBioBricksIndex = 8
# Have you played the current version of Hero.Coli?
qPlayedHerocoliIndex = 10
qPlayedHerocoliYes = ['Yes', 'Once', 'Multiple times', 'Oui',
'De nombreuses fois', 'Quelques fois', 'Une fois']
qPlayedHerocoliNo = ['No', 'Non',]
gform[QStudiedBiology].unique()
gform['Before playing Hero.Coli, had you ever heard about BioBricks?'].unique()
gform['Have you played the current version of Hero.Coli?'].unique()
getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
assert(len(getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)) != 0)
assert(len(getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelLow)) != 0)
assert(len(getAllAnswerRows(qHeardBioBricksIndex, aYes)) != 0)
assert(len(getAllAnswerRows(qHeardBioBricksIndex, aNoIDK)) != 0)
assert(len(getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliYes)) != 0)
assert(len(getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliNo)) != 0)
Explanation: getAllAnswerRows
End of explanation
questionIndex = 15
gform.iloc[:, questionIndex].head()
(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getPercentCorrectKnowingAnswer(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getPercentCorrectKnowingAnswer(qBiologyEducationLevelIndex, aBiologyEducationLevelLow)
getPercentCorrectKnowingAnswer(qHeardBioBricksIndex, aYes)
getPercentCorrectKnowingAnswer(qHeardBioBricksIndex, aNoIDK)
playedHerocoliIndexYes = getPercentCorrectKnowingAnswer(qPlayedHerocoliIndex, qPlayedHerocoliYes)
playedHerocoliIndexYes
playedHerocoliIndexNo = getPercentCorrectKnowingAnswer(qPlayedHerocoliIndex, qPlayedHerocoliNo)
playedHerocoliIndexNo
playedHerocoliIndexYes - playedHerocoliIndexNo
(playedHerocoliIndexYes - playedHerocoliIndexNo) / (1 - playedHerocoliIndexNo)
Explanation: getPercentCorrectPerColumn
tested through getPercentCorrectKnowingAnswer
getPercentCorrectKnowingAnswer
End of explanation
#gform = gformEN
transposed = gform.T
#answers = transposed[transposed[]]
transposed
type(gform)
Explanation: Google form loading
<a id=gformload />
End of explanation
gform.columns
gform.columns.get_loc('Do not edit - pre-filled anonymous ID')
localplayerguidkey
# Using the whole question:
gform[localplayerguidkey]
# Get index from question
localplayerguidindex
# Using the index of the question:
gform.iloc[:, localplayerguidindex]
Explanation: Selection of a question
<a id=selquest />
End of explanation
sample = gform
#def getUniqueUserCount(sample):
sample[localplayerguidkey].nunique()
Explanation: Selection of a user's answers
<a id=selusans />
userIdThatDidNotAnswer
userId1AnswerEN
userIdAnswersEN
userId1AnswerFR
userIdAnswersFR
userIdAnswersENFR
getUniqueUserCount tinkering
End of explanation
userIds = gform[localplayerguidkey].unique()
len(userIds)
Explanation: getAllRespondersGFormGUID tinkering
End of explanation
allResponders = getAllResponders()
uniqueUsers = np.unique(allResponders)
print(len(allResponders))
print(len(uniqueUsers))
for guid in uniqueUsers:
if(not isGUIDFormat(guid)):
print('incorrect guid: ' + str(guid))
uniqueUsers = getAllResponders()
userCount = len(uniqueUsers)
guid = '0'
while (not isGUIDFormat(guid)):
userIndex = randint(0,userCount-1)
guid = uniqueUsers[userIndex]
guid
Explanation: getRandomGFormGUID tinkering
End of explanation
#userId = userIdThatDidNotAnswer
#userId = userId1AnswerEN
userId = userIdAnswersEN
_form = gform
#def getAnswers( userId, _form = gform ):
answers = _form[_form[localplayerguidkey]==userId]
_columnAnswers = answers.T
if 0 != len(answers):
_newColumns = []
for column in _columnAnswers.columns:
_newColumns.append(answersColumnNameStem + str(column))
_columnAnswers.columns = _newColumns
else:
# user has never answered
print("user " + str(userId) + " has never answered")
_columnAnswers
Explanation: getAnswers tinkering
End of explanation
answers
# Selection of a specific answer
answers.iloc[:,localplayerguidindex]
answers.iloc[:,localplayerguidindex].iloc[0]
type(answers.iloc[0,:])
answers.iloc[0,:].values
Explanation: answer selection
End of explanation
#### Question that has a correct answer:
questionIndex = 15
answers.iloc[:,questionIndex].iloc[0]
correctAnswers.iloc[questionIndex][0]
answers.iloc[:,questionIndex].iloc[0].startswith(correctAnswers.iloc[questionIndex][0])
#### Question that has no correct answer:
questionIndex = 0
#answers.iloc[:,questionIndex].iloc[0].startswith(correctAnswers.iloc[questionIndex].iloc[0])
#### Batch check:
columnAnswers = getAnswers( userId )
columnAnswers.values[2,0]
columnAnswers[columnAnswers.columns[0]][2]
correctAnswers
type(columnAnswers)
indexOfFirstEvaluationQuestion = 13
columnAnswers.index[indexOfFirstEvaluationQuestion]
Explanation: checking answers
<a id=checkans />
End of explanation
gform.tail(50)
gform[gform[localplayerguidkey] == 'ba202bbc-af77-42e8-85ff-e25b871717d5']
gformRealBefore = gform.loc[88, QTimestamp]
gformRealBefore
gformRealAfter = gform.loc[107, QTimestamp]
gformRealAfter
RMRealFirstEvent = getFirstEventDate(gform.loc[88,localplayerguidkey])
RMRealFirstEvent
Explanation: getTemporality tinkering
End of explanation
tzAnswerDate = gformRealBefore
gameEventDate = RMRealFirstEvent
#def getTemporality( answerDate, gameEventDate ):
result = answerTemporalities[2]
if(gameEventDate != pd.Timestamp.max.tz_localize('utc')):
if(answerDate <= gameEventDate):
result = answerTemporalities[0]
elif (answerDate > gameEventDate):
result = answerTemporalities[1]
result, tzAnswerDate, gameEventDate
firstEventDate = getFirstEventDate(gform.loc[userIndex,localplayerguidkey])
firstEventDate
gformTestBefore = pd.Timestamp('2018-01-16 14:28:20.998000+0000', tz='UTC')
getTemporality(gformTestBefore,firstEventDate)
gformTestWhile = pd.Timestamp('2018-01-16 14:28:23.998000+0000', tz='UTC')
getTemporality(gformTestWhile,firstEventDate)
gformTestAfter = pd.Timestamp('2018-01-16 14:28:24.998000+0000', tz='UTC')
getTemporality(gformTestAfter,firstEventDate)
Explanation: getTemporality tinkering
End of explanation
_form = gform
_rmDF = rmdf1522
_rmTestDF = normalizedRMDFTest
includeAndroid = True
#def getTestAnswers( _form = gform, _rmDF = rmdf1522, _rmTestDF = normalizedRMDFTest, includeAndroid = True):
_form[_form[localplayerguidkey].isin(testUsers)]
_form[localplayerguidkey]
testUsers
len(getTestAnswers()[localplayerguidkey])
rmdf1522['customData.platform'].unique()
rmdf1522[rmdf1522['customData.platform'].apply(lambda s: str(s).endswith('editor'))]
rmdf1522[rmdf1522['userId'].isin(getTestAnswers()[localplayerguidkey])][['userTime','customData.platform','userId']].dropna()
Explanation: getTestAnswers tinkering
End of explanation
columnAnswers
#testUserId = userId1AnswerEN
testUserId = '8d352896-a3f1-471c-8439-0f426df901c1'
getCorrections(testUserId)
testUserId = '8d352896-a3f1-471c-8439-0f426df901c1'
source = correctAnswers
#def getCorrections( _userId, _source = correctAnswers, _form = gform ):
columnAnswers = getAnswers( testUserId )
if 0 != len(columnAnswers.columns):
questionsCount = len(columnAnswers.values)
for columnName in columnAnswers.columns:
if answersColumnNameStem in columnName:
answerNumber = columnName.replace(answersColumnNameStem,"")
newCorrectionsColumnName = correctionsColumnNameStem + answerNumber
columnAnswers[newCorrectionsColumnName] = columnAnswers[columnName]
columnAnswers[newCorrectionsColumnName] = pd.Series(np.full(questionsCount, np.nan))
for question in columnAnswers[columnName].index:
#print()
#print(question)
__correctAnswers = source.loc[question]
if(len(__correctAnswers) > 0):
columnAnswers.loc[question,newCorrectionsColumnName] = False
for correctAnswer in __correctAnswers:
#print("-> " + correctAnswer)
if str(columnAnswers.loc[question,columnName])\
.startswith(str(correctAnswer)):
columnAnswers.loc[question,newCorrectionsColumnName] = True
break
else:
# user has never answered
print("can't give correct answers")
columnAnswers
question = QAge
columnName = ''
for column in columnAnswers.columns:
if str.startswith(column, 'answers'):
columnName = column
break
type(columnAnswers.loc[question,columnName])
getCorrections(localplayerguid)
gform.columns[20]
columnAnswers.loc[gform.columns[20],columnAnswers.columns[1]]
columnAnswers[columnAnswers.columns[1]][gform.columns[13]]
columnAnswers.loc[gform.columns[13],columnAnswers.columns[1]]
columnAnswers.iloc[20,1]
questionsCount
np.full(3, np.nan)
pd.Series(np.full(questionsCount, np.nan))
columnAnswers.loc[question,newCorrectionsColumnName]
question
correctAnswers[question]
getCorrections('8d352896-a3f1-471c-8439-0f426df901c1')
Explanation: getCorrections tinkering
End of explanation
correctAnswersEN
#demographicAnswersEN
type([])
mergedCorrectAnswersEN = correctAnswersEN.copy()
for index in mergedCorrectAnswersEN.index:
#print(str(mergedCorrectAnswersEN.loc[index,column]))
mergedCorrectAnswersEN.loc[index] =\
demographicAnswersEN.loc[index] + mergedCorrectAnswersEN.loc[index]
mergedCorrectAnswersEN
correctAnswersEN + demographicAnswersEN
correctAnswers + demographicAnswers
Explanation: getCorrections extensions tinkering
End of explanation
corrections = getCorrections(userIdAnswersENFR)
#corrections
for columnName in corrections.columns:
if correctionsColumnNameStem in columnName:
for index in corrections[columnName].index:
if(True==corrections.loc[index,columnName]):
corrections.loc[index,columnName] = 1
elif (False==corrections.loc[index,columnName]):
corrections.loc[index,columnName] = 0
corrections
binarized = getBinarizedCorrections(corrections)
binarized
slicedBinarized = binarized[13:40]
slicedBinarized
slicedBinarized =\
binarized[13:40][binarized.columns[\
binarized.columns.to_series().str.contains(correctionsColumnNameStem)\
]]
slicedBinarized
Explanation: getBinarizedCorrections tinkering
End of explanation
_source = correctAnswers
_userId = getRandomGFormGUID()
getCorrections(_userId, _source=_source, _form = gform)
_userId = '5e978fb3-316a-42ba-bb58-00856353838d'
gform[gform[localplayerguidkey] == _userId].iloc[0].index
_gformLine = gform[gform[localplayerguidkey] == _userId].iloc[0]
_gformLine.loc['Before playing Hero.Coli, had you ever heard about synthetic biology?']
_gformLine = gform[gform[localplayerguidkey] == _userId].iloc[0]
# only for one user
# def getBinarized(_gformLine, _source = correctAnswers):
_notEmptyIndexes = []
for _index in _source.index:
if(len(_source.loc[_index]) > 0):
_notEmptyIndexes.append(_index)
_binarized = pd.Series(np.full(len(_gformLine.index), np.nan), index = _gformLine.index)
for question in _gformLine.index:
_correctAnswers = _source.loc[question]
if(len(_correctAnswers) > 0):
_binarized[question] = 0
for _correctAnswer in _correctAnswers:
if str(_gformLine.loc[question])\
.startswith(str(_correctAnswer)):
_binarized.loc[question] = 1
break
_slicedBinarized = _binarized.loc[_notEmptyIndexes]
_slicedBinarized
_slicedBinarized.loc['What are BioBricks and devices?']
Explanation: getBinarized tinkering
End of explanation
allBinarized = getAllBinarized()
plotCorrelationMatrix(allBinarized)
source
source = correctAnswers + demographicAnswers
notEmptyIndexes = []
for eltIndex in source.index:
#print(eltIndex)
if(len(source.loc[eltIndex]) > 0):
notEmptyIndexes.append(eltIndex)
len(source)-len(notEmptyIndexes)
emptyForm = gform[gform[localplayerguidkey] == 'incorrectGUID']
emptyForm
_source = correctAnswers + demographicAnswers
_form = gform #emptyForm
#def getAllBinarized(_source = correctAnswers, _form = gform ):
_notEmptyIndexes = []
for _index in _source.index:
if(len(_source.loc[_index]) > 0):
_notEmptyIndexes.append(_index)
_result = pd.DataFrame(index = _notEmptyIndexes)
for _userId in getAllResponders( _form = _form ):
_corrections = getCorrections(_userId, _source=_source, _form = _form)
_binarized = getBinarizedCorrections(_corrections)
_slicedBinarized =\
_binarized.loc[_notEmptyIndexes][_binarized.columns[\
_binarized.columns.to_series().str.contains(correctionsColumnNameStem)\
]]
_result = pd.concat([_result, _slicedBinarized], axis=1)
_result = _result.T
#_result
if(_result.shape[0] > 0 and _result.shape[1] > 0):
correlation = _result.astype(float).corr()
#plt.matshow(correlation)
sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10))
#ax = sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10),cbar_kws={\
#"orientation":"vertical"})
correlation_pearson = _result.T.astype(float).corr(methods[0])
correlation_kendall = _result.T.astype(float).corr(methods[1])
correlation_spearman = _result.T.astype(float).corr(methods[2])
print(correlation_pearson.equals(correlation_kendall))
print(correlation_kendall.equals(correlation_spearman))
diff = (correlation_pearson - correlation_kendall)
flattened = diff[diff > 0.1].values.flatten()
flattened[~np.isnan(flattened)]
correlation
Explanation: getAllBinarized tinkering
End of explanation
scientificQuestionsLabels = gform.columns[13:40]
scientificQuestionsLabels = [
'In order to modify the abilities of the bacterium, you have to... #1',
'What are BioBricks and devices? #2',
'What is the name of this BioBrick? #3',
'What is the name of this BioBrick?.1 #4',
'What is the name of this BioBrick?.2 #5',
'What is the name of this BioBrick?.3 #6',
'What does this BioBrick do? #7',
'What does this BioBrick do?.1 #8',
'What does this BioBrick do?.2 #9',
'What does this BioBrick do?.3 #10',
'Pick the case where the BioBricks are well-ordered: #11',
'When does green fluorescence happen? #12',
'What happens when you unequip the movement device? #13',
'What is this? #14',
'What does this device do? #15',
'What does this device do?.1 #16',
'What does this device do?.2 #17',
'What does this device do?.3 #18',
'What does this device do?.4 #19',
'What does this device do?.5 #20',
'What does this device do?.6 #21',
'What does this device do?.7 #22',
'Guess: what would a device producing l-arabinose do, if it started with a l-arabinose-induced promoter? #23',
'Guess: the bacterium would glow yellow... #24',
'What is the species of the bacterium of the game? #25',
'What is the scientific name of the tails of the bacterium? #26',
'Find the antibiotic: #27',
]
scientificQuestionsLabelsX = [
'#1 In order to modify the abilities of the bacterium, you have to...',
'#2 What are BioBricks and devices?',
'#3 What is the name of this BioBrick?',
'#4 What is the name of this BioBrick?.1',
'#5 What is the name of this BioBrick?.2',
'#6 What is the name of this BioBrick?.3',
'#7 What does this BioBrick do?',
'#8 What does this BioBrick do?.1',
'#9 What does this BioBrick do?.2',
'#10 What does this BioBrick do?.3',
'#11 Pick the case where the BioBricks are well-ordered:',
'#12 When does green fluorescence happen?',
'#13 What happens when you unequip the movement device?',
'#14 What is this?',
'#15 What does this device do?',
'#16 What does this device do?.1',
'#17 What does this device do?.2',
'#18 What does this device do?.3',
'#19 What does this device do?.4',
'#20 What does this device do?.5',
'#21 What does this device do?.6',
'#22 What does this device do?.7',
'#23 Guess: what would a device producing l-arabinose do, if it started with a l-arabinose-induced promoter?',
'#24 Guess: the bacterium would glow yellow...',
'#25 What is the species of the bacterium of the game?',
'#26 What is the scientific name of the tails of the bacterium?',
'#27 Find the antibiotic:',
]
questionsLabels = scientificQuestionsLabels
questionsLabelsX = scientificQuestionsLabelsX
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.set_yticklabels(['']+questionsLabels)
ax.set_xticklabels(['']+questionsLabelsX, rotation='vertical')
ax.matshow(correlation)
ax.set_xticks(np.arange(-1,len(questionsLabels),1.));
ax.set_yticks(np.arange(-1,len(questionsLabels),1.));
questionsLabels = correlation.columns.copy()
newLabels = []
for index in range(0, len(questionsLabels)):
newLabels.append(questionsLabels[index] + ' #' + str(index + 1))
correlationRenamed = correlation.copy()
correlationRenamed.columns = newLabels
correlationRenamed.index = newLabels
correlationRenamed
correlationRenamed = correlation.copy()
correlationRenamed.columns = pd.Series(correlation.columns).apply(lambda x: x + ' #' + str(correlation.columns.get_loc(x) + 1))
correlationRenamed.index = correlationRenamed.columns
correlationRenamed
correlation.shape
fig = plt.figure(figsize=(10,10))
ax12 = plt.subplot(111)
ax12.set_title('Heatmap')
sns.heatmap(correlation,ax=ax12,cmap=plt.cm.jet,square=True)
ax = sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10),cbar_kws={\
"orientation":"vertical"})
questionsLabels = pd.Series(correlation.columns).apply(lambda x: x + ' #' + str(correlation.columns.get_loc(x) + 1))
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(111)
cmap=plt.cm.jet
#cmap=plt.cm.ocean
cax = ax.imshow(correlation, interpolation='nearest', cmap=cmap,
# extent=(0.5,np.shape(correlation)[0]+0.5,0.5,np.shape(correlation)[1]+0.5)
)
#ax.grid(True)
plt.title('Questions\' Correlations')
ax.set_yticklabels(questionsLabels)
ax.set_xticklabels(questionsLabels, rotation='vertical')
ax.set_xticks(np.arange(len(questionsLabels)));
ax.set_yticks(np.arange(len(questionsLabels)));
#ax.set_xticks(np.arange(-1,len(questionsLabels),1.));
#ax.set_yticks(np.arange(-1,len(questionsLabels),1.));
fig.colorbar(cax)
plt.show()
ax.get_xticks()
transposed = _result.T.astype(float)
transposed.head()
transposed.corr()
transposed.columns = range(0,len(transposed.columns))
transposed.index = range(0,len(transposed.index))
transposed.head()
transposed = transposed.iloc[0:10,0:3]
transposed
transposed = transposed.astype(float)
type(transposed[0][0])
transposed.columns = list('ABC')
transposed
transposed.loc[0, 'A'] = 0
transposed
transposed.corr()
Explanation: plotCorrelationMatrix tinkering
End of explanation
round(7.64684)
df = pd.DataFrame(10*np.random.randint(2, size=[20,2]),index=range(0,20),columns=list('AB'))
#df.columns = range(0,len(df.columns))
df.head()
#type(df[0][0])
type(df.columns)
df.corr()
#corr = pd.Series({}, index = methods)
for meth in methods:
#corr[meth] = result.corr(method = meth)
print(meth + ":\n" + str(transposed.corr(method = meth)) + "\n\n")
Explanation: data = transposed[[0,1]]
data.corr(method = 'spearman')
End of explanation
befores = gform.copy()
befores = befores[befores[QTemporality] == answerTemporalities[0]]
print(len(befores))
allBeforesBinarized = getAllBinarized( _source = correctAnswers + demographicAnswers, _form = befores)
np.unique(allBeforesBinarized.values.flatten())
allBeforesBinarized.columns[20]
allBeforesBinarized.T.dot(allBeforesBinarized)
np.unique(allBeforesBinarized.iloc[:,20].values)
plotCorrelationMatrix( allBeforesBinarized, _abs=False,\
_clustered=False, _questionNumbers=True )
_correlation = allBeforesBinarized.astype(float).corr()
overlay = allBeforesBinarized.T.dot(allBeforesBinarized).astype(int)
_correlation.columns = pd.Series(_correlation.columns).apply(\
lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1))
_correlation.index = _correlation.columns
_correlation = _correlation.abs()
_fig = plt.figure(figsize=(20,20))
_ax = plt.subplot(111)
#sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=overlay,fmt='d')
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=True)
Explanation: getCrossCorrectAnswers tinkering
Before
End of explanation
afters = gform.copy()
afters = afters[afters[QTemporality] == answerTemporalities[1]]
print(len(afters))
allAftersBinarized = getAllBinarized( _source = correctAnswers + demographicAnswers, _form = afters)
np.unique(allAftersBinarized.values.flatten())
plotCorrelationMatrix( allAftersBinarized, _abs=False,\
_clustered=False, _questionNumbers=True )
#for answerIndex in range(0,len(allAftersBinarized)):
# print(str(answerIndex) + " " + str(allAftersBinarized.iloc[answerIndex,0]))
allAftersBinarized.iloc[28,0]
len(allAftersBinarized)
len(allAftersBinarized.index)
_correlation = allAftersBinarized.astype(float).corr()
overlay = allAftersBinarized.T.dot(allAftersBinarized).astype(int)
_correlation.columns = pd.Series(_correlation.columns).apply(\
lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1))
_correlation.index = _correlation.columns
_fig = plt.figure(figsize=(10,10))
_ax = plt.subplot(111)
#sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=overlay,fmt='d')
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True)
crossCorrect = getCrossCorrectAnswers(allAftersBinarized)
pd.Series((overlay == crossCorrect).values.flatten()).unique()
allAftersBinarized.shape
cross = allAftersBinarized.T.dot(allAftersBinarized)
cross.shape
equal = (cross == crossCorrect)
type(equal)
pd.Series(equal.values.flatten()).unique()
Explanation: after
End of explanation
testUser = userIdAnswersFR
gform[gform[localplayerguidkey] == testUser].T
getScore(testUser)
print("draft test")
testUserId = "3ef14300-4987-4b54-a56c-5b6d1f8a24a1"
testUserId = userIdAnswersEN
#def getScore( _userId, _form = gform ):
score = pd.DataFrame({}, columns = answerTemporalities)
score.loc['score',:] = np.nan
for column in score.columns:
score.loc['score', column] = []
if hasAnswered( testUserId ):
columnAnswers = getCorrections(testUserId)
for columnName in columnAnswers.columns:
# only work on corrected columns
if correctionsColumnNameStem in columnName:
answerColumnName = columnName.replace(correctionsColumnNameStem,\
answersColumnNameStem)
temporality = columnAnswers.loc[QTemporality,answerColumnName]
counts = (columnAnswers[columnName]).value_counts()
thisScore = 0
if(True in counts):
thisScore = counts[True]
score.loc['score',temporality].append(thisScore)
else:
print("user " + str(testUserId) + " has never answered")
#expectedScore = 18
#if (expectedScore != score[0]):
# print("ERROR incorrect score: expected "+ str(expectedScore) +", got "+ str(score))
score
score = pd.DataFrame({}, columns = answerTemporalities)
score.loc['score',:] = np.nan
for column in score.columns:
score.loc['score', column] = []
score
#score.loc['user0',:] = [1,2,3]
#score
#type(score)
#type(score[0])
#for i,v in score[0].iteritems():
# print(v)
#score[0][answerTemporalities[2]]
#columnAnswers.loc[QTemporality,'answers0']
False in (columnAnswers[columnName]).value_counts()
getScore("3ef14300-4987-4b54-a56c-5b6d1f8a24a1")
#gform[gform[localplayerguidkey]=="3ef14300-4987-4b54-a56c-5b6d1f8a24a1"].T
correctAnswers
Explanation: getScore tinkering
End of explanation
#questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(35))
questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(len(checkpointQuestionMatching)))
questionnaireValidatedCheckpointsPerQuestion.head()
checkpointQuestionMatching['checkpoint'][19]
userId = localplayerguid
_form = gform
#function that returns the list of checkpoints from user id
#def getValidatedCheckpoints( userId, _form = gform ):
_validatedCheckpoints = []
if hasAnswered( userId, _form = _form ):
_columnAnswers = getCorrections( userId, _form = _form)
for _columnName in _columnAnswers.columns:
# only work on corrected columns
if correctionsColumnNameStem in _columnName:
_questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(len(checkpointQuestionMatching)))
for _index in range(0, len(_questionnaireValidatedCheckpointsPerQuestion)):
if _columnAnswers[_columnName][_index]==True:
_questionnaireValidatedCheckpointsPerQuestion[_index] = checkpointQuestionMatching['checkpoint'][_index]
else:
_questionnaireValidatedCheckpointsPerQuestion[_index] = ''
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpointsPerQuestion.unique()
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints[_questionnaireValidatedCheckpoints!='']
_questionnaireValidatedCheckpoints = pd.Series(_questionnaireValidatedCheckpoints)
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints.sort_values()
_questionnaireValidatedCheckpoints.index = range(0, len(_questionnaireValidatedCheckpoints))
_validatedCheckpoints.append(_questionnaireValidatedCheckpoints)
else:
print("user " + str(userId) + " has never answered")
result = pd.Series(data=_validatedCheckpoints)
result
type(result[0])
Explanation: comparison of checkpoints completion and answers
<a id=compcheckans />
Theoretically, they should match. Whoever understood an item should beat the matching challenge. The discrepancies are due to game design or level design.
getValidatedCheckpoints tinkering
End of explanation
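# Exploratory sketch (an assumption, not from the original notebook): put questionnaire
# scores and validated-checkpoint counts side by side for a few known userIds, reusing the
# helpers defined above, to eyeball the match discussed in the markdown.
for _uid in [userId1AnswerEN, userIdAnswersEN, userIdAnswersFR]:
    print(_uid, getScore(_uid).values.flatten(), getValidatedCheckpointsCounts(_uid))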
testSeries1 = pd.Series(
[
'tutorial1.Checkpoint00',
'tutorial1.Checkpoint01',
'tutorial1.Checkpoint02',
'tutorial1.Checkpoint05'
]
)
testSeries2 = pd.Series(
[
'tutorial1.Checkpoint01',
'tutorial1.Checkpoint05'
]
)
np.setdiff1d(testSeries1, testSeries2)
np.setdiff1d(testSeries1.values, testSeries2.values)
getAnswers(localplayerguid).head(2)
getCorrections(localplayerguid).head(2)
getScore(localplayerguid)
getValidatedCheckpoints(localplayerguid)
getNonValidatedCheckpoints(localplayerguid)
Explanation: getNonValidated tinkering
End of explanation
qPlayedHerocoliIndex = 10
qPlayedHerocoliYes = ['Yes', 'Once', 'Multiple times', 'Oui',
'De nombreuses fois', 'Quelques fois', 'Une fois']
questionIndex = qPlayedHerocoliIndex
choice = qPlayedHerocoliYes
_form = gform
# returns all rows of Google form's answers that contain an element
# of the array 'choice' for question number 'questionIndex'
#def getAllAnswerRows(questionIndex, choice, _form = gform ):
_form[_form.iloc[:, questionIndex].isin(choice)]
Explanation: getAllAnswerRows tinkering
End of explanation
_df = getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliYes, _form = gform )
#def getPercentCorrectPerColumn(_df):
_count = len(_df)
_percents = pd.Series(np.full(len(_df.columns), np.nan), index=_df.columns)
for _rowIndex in _df.index:
for _columnName in _df.columns:
_columnIndex = _df.columns.get_loc(_columnName)
if ((_columnIndex >= firstEvaluationQuestionIndex) \
and (_columnIndex < len(_df.columns)-3)):
if(str(_df[_columnName][_rowIndex]).startswith(str(correctAnswers[_columnIndex]))):
if (np.isnan(_percents[_columnName])):
_percents[_columnName] = 1;
else:
_percents[_columnName] = _percents[_columnName]+1
else:
if (np.isnan(_percents[_columnName])):
_percents[_columnName] = 0;
_percents = _percents/_count
_percents['Count'] = _count
_percents
print('\n\n\npercents=\n' + str(_percents))
Explanation: getPercentCorrectPerColumn tinkering
End of explanation
questionIndex = qPlayedHerocoliIndex
choice = qPlayedHerocoliYes
_form = gform
#def getPercentCorrectKnowingAnswer(questionIndex, choice, _form = gform):
_answerRows = getAllAnswerRows(questionIndex, choice, _form = _form);
getPercentCorrectPerColumn(_answerRows)
Explanation: getPercentCorrectKnowingAnswer tinkering
End of explanation
#localplayerguid = '8d352896-a3f1-471c-8439-0f426df901c1'
#localplayerguid = '7037c5b2-c286-498e-9784-9a061c778609'
#localplayerguid = '5c4939b5-425b-4d19-b5d2-0384a515539e'
#localplayerguid = '7825d421-d668-4481-898a-46b51efe40f0'
#localplayerguid = 'acb9c989-b4a6-4c4d-81cc-6b5783ec71d8'
for id in getAllResponders():
print("===========================================")
print("id=" + str(id))
print("-------------------------------------------")
print(getAnswers(id).head(2))
print("-------------------------------------------")
print(getCorrections(id).head(2))
print("-------------------------------------------")
print("scores=" + str(getScore(id)))
print("#ValidatedCheckpoints=" + str(getValidatedCheckpointsCounts(id)))
print("#NonValidatedCheckpoints=" + str(getNonValidatedCheckpointsCounts(id)))
print("===========================================")
gform[localplayerguidkey]
hasAnswered( '8d352896-a3f1-471c-8439-0f426df901c1' )
'8d352896-a3f1-471c-8439-0f426df901c1' in gform[localplayerguidkey].values
apostropheTestString = 'it\'s a test'
apostropheTestString
Explanation: tests on all user Ids, including those who answered more than once
End of explanation
#gformEN.head(2)
#gformFR.head(2)
Explanation: answers submitted through time
<a id=ansthrutime />
merging answers in English and French
<a id=mergelang />
tests
End of explanation
#gformEN[QLanguage] = pd.Series(enLanguageID, index=gformEN.index)
#gformFR[QLanguage] = pd.Series(frLanguageID, index=gformFR.index)
#gformFR.head(2)
Explanation: add language column
Scores will be evaluated per language
End of explanation
# rename columns
#gformFR.columns = gformEN.columns
#gformFR.head(2)
#gformTestMerge = pd.concat([gformEN, gformFR])
#gformTestMerge.head(2)
#gformTestMerge.tail(2)
gform
localplayerguid
someAnswers = getAnswers( '8ca16c7a-70a6-4723-bd72-65b8485a2e86' )
someAnswers
testQuestionIndex = 24
thisUsersFirstEvaluationQuestion = str(someAnswers[someAnswers.columns[0]][testQuestionIndex])
thisUsersFirstEvaluationQuestion
someAnswers[someAnswers.columns[0]][QLanguage]
firstEvaluationQuestionCorrectAnswer = str(correctAnswers[testQuestionIndex])
firstEvaluationQuestionCorrectAnswer
thisUsersFirstEvaluationQuestion.startswith(firstEvaluationQuestionCorrectAnswer)
Explanation: concatenate
End of explanation
answerDate = gform[gform['userId'] == '51f1ef77-ec48-4976-be1f-89b7cbd1afab'][QTimestamp][0]
answerDate
allEvents = rmdf1522[rmdf1522['userId']=='51f1ef77-ec48-4976-be1f-89b7cbd1afab']
allEventsCount = len(allEvents)
eventsBeforeRatio = len(allEvents[allEvents['userTime'] < answerDate])/allEventsCount
eventsAfterRatio = len(allEvents[allEvents['userTime'] > answerDate])/allEventsCount
result = [eventsBeforeRatio, eventsAfterRatio]
result
Explanation: getEventCountRatios tinkering
End of explanation
len(gform)
len(gform[gform[QTemporality] == answerTemporalities[2]])
len(gform[gform[QTemporality] == answerTemporalities[0]])
len(gform[gform[QTemporality] == answerTemporalities[1]])
gform.loc[:, [QPlayed, 'userId', QTemporality, QTimestamp]].sort_values(by = ['userId', QTimestamp])
gform.loc[:, [QPlayed, 'userId', QTemporality, QTimestamp]].sort_values(by = ['userId', QTimestamp])
sortedGFs = gform.loc[:, [QPlayed, 'userId', QTemporality, QTimestamp]].sort_values(by = ['userId', QTimestamp])
sortedGFs[sortedGFs[QTemporality] == answerTemporalities[2]]
result = pd.DataFrame()
maxuserIdIndex = len(sortedGFs['userId'])
userIdIndex = 0
userIdIntProgress = IntProgress(
value=0,
min=0,
max=maxuserIdIndex,
description='userIdIndex:'
)
display(userIdIntProgress)
userIdText = Text('')
display(userIdText)
for userid in sortedGFs['userId']:
userIdIndex += 1
userIdIntProgress.value = userIdIndex
userIdText.value = userid
if (len(sortedGFs[sortedGFs['userId'] == userid]) >= 2) and (answerTemporalities[2] in sortedGFs[sortedGFs['userId'] == userid][QTemporality].values):
if len(result) == 0:
result = sortedGFs[sortedGFs['userId'] == userid]
else:
result = pd.concat([result, sortedGFs[sortedGFs['userId'] == userid]])
#print(sortedGFs[sortedGFs['userId'] == userid])
result
len(gform) - len(result)
len(gform[gform[QTemporality] == answerTemporalities[2]])
len(gform[gform[QTemporality] == answerTemporalities[0]])
len(gform[gform[QTemporality] == answerTemporalities[1]])
gform.loc[:, [QPlayed, 'userId', QTemporality, QTimestamp]].sort_values(by = ['userId', QTimestamp])
rmdf1522['userTime'].min(),gform[QTimestamp].min(),rmdf1522['userTime'].min().floor('d') == gform[QTimestamp].min().floor('d')
# code to find special userIds
enSpeakers = gform[gform[QLanguage]==enLanguageID]
frSpeakers = gform[gform[QLanguage]==frLanguageID]
sortedGFs = gform.loc[:, ['userId', QTemporality, QTimestamp, QLanguage]].sort_values(by = ['userId', QTimestamp])
foundUserIDThatDidNotAnswer = False
foundUserID1AnswerEN = False
foundUserIDAnswersEN = False
foundUserID1ScoreEN = False
foundUserIDScoresEN = False
foundUserID1AnswerFR = False
foundUserIDAnswersFR = False
foundUserID1ScoreFR = False
foundUserIDScoresFR = False
foundUserIDAnswersENFR = False
maxuserIdIndex = len(sortedGFs['userId'])
userIdIndex = 0
userIdIntProgress = IntProgress(
value=0,
min=0,
max=maxuserIdIndex,
description='userIdIndex:'
)
display(userIdIntProgress)
userIdText = Text('')
display(userIdText)
# survey1522startDate = Timestamp('2018-03-24 12:00:00.000000+0000', tz='UTC')
survey1522startDate = gform[QTimestamp].min().floor('d')
if (rmdf1522['userTime'].min().floor('d') != gform[QTimestamp].min().floor('d')):
print("rmdf and gform first date don't match")
for userId in rmdf1522[rmdf1522['userTime'] >= survey1522startDate]['userId']:
if userId not in sortedGFs['userId'].values:
print("userIdThatDidNotAnswer = '" + userId + "'")
foundUserIDThatDidNotAnswer = True
break
for userId in sortedGFs['userId']:
userIdIndex += 1
userIdIntProgress.value = userIdIndex
userIdText.value = userId
answers = sortedGFs[sortedGFs['userId'] == userId]
if not foundUserID1AnswerEN and (len(answers) == 1) and (answers[QLanguage].unique() == [enLanguageID]):
print("userId1AnswerEN = '" + userId + "'")
print("userId1ScoreEN = '" + userId + "'")
foundUserID1AnswerEN = True
foundUserID1ScoreEN = True
if not foundUserIDAnswersEN and (len(answers) >= 2) and (answers[QLanguage].unique() == [enLanguageID]):
print("userIdAnswersEN = '" + userId + "'")
print("userIdScoresEN = '" + userId + "'")
foundUserIDAnswersEN = True
foundUserIDScoresEN = True
# if not foundUserID1ScoreEN and :
# print("userId1ScoreEN = '" + userId + "'")
# foundUserID1ScoreEN = True
# if not foundUserIDScoresEN and :
# print("userIdScoresEN = '" + userId + "'")
# foundUserIDScoresEN = True
if not foundUserID1AnswerFR and (len(answers) == 1) and (answers[QLanguage].unique() == [frLanguageID]):
print("userId1AnswerFR = '" + userId + "'")
print("userId1ScoreFR = '" + userId + "'")
foundUserID1AnswerFR = True
foundUserID1ScoreFR = True
if not foundUserIDAnswersFR and (len(answers) >= 2) and (answers[QLanguage].unique() == [frLanguageID]):
print("userIdAnswersFR = '" + userId + "'")
print("userIdScoresFR = '" + userId + "'")
foundUserIDAnswersFR = True
foundUserIDScoresFR = True
# if not foundUserID1ScoreFR and :
# print("userId1ScoreFR = '" + userId + "'")
# foundUserID1ScoreFR = True
# if not foundUserIDScoresFR and :
# print("userIdScoresFR = '" + userId + "'")
# foundUserIDScoresFR = True
if not foundUserIDAnswersENFR and (len(answers) >= 2) and (enLanguageID in answers[QLanguage].unique()) and (frLanguageID in answers[QLanguage].unique()):
print("userIdAnswersENFR = '" + userId + "'")
foundUserIDAnswersENFR = True
answers
answerDate = gform[gform['userId'] == '51f1ef77-ec48-4976-be1f-89b7cbd1afab'][QTimestamp][0]
answerDate
getEventCountRatios(answerDate, '51f1ef77-ec48-4976-be1f-89b7cbd1afab')
allEvents = rmdf1522[rmdf1522['userId']=='51f1ef77-ec48-4976-be1f-89b7cbd1afab']
allEventsCount = len(allEvents)
eventsBeforeRatio = len(allEvents[allEvents['userTime'] < answerDate])/allEventsCount
eventsAfterRatio = len(allEvents[allEvents['userTime'] > answerDate])/allEventsCount
result = [eventsBeforeRatio, eventsAfterRatio]
result
[answerDate, allEvents.loc[:, ['userTime']].iloc[0], allEvents.loc[:, ['userTime']].iloc[-1]]
gform[gform['userId'] == '51f1ef77-ec48-4976-be1f-89b7cbd1afab'][QTemporality].iloc[0]
userId = '51f1ef77-ec48-4976-be1f-89b7cbd1afab'
answerDate = gform[gform['userId'] == userId][QTimestamp][0]
[eventsBeforeRatio, eventsAfterRatio] = getEventCountRatios(answerDate, userId)
[eventsBeforeRatio, eventsAfterRatio]
Explanation: setAnswerTemporalities2 tinkering / Temporalities analysis
End of explanation
# code to find currently-sorted-as-posttest answers that have nan answers to content questions
QQ = QBioBricksDevicesComposition
for answerIndex in gform.index:
if gform.loc[answerIndex, QTemporality] == answerTemporalities[1]:
if pd.isnull(gform.loc[answerIndex,QQ]):
print(answerIndex)
# code to find which answers have both already played but also filled in profile questions
answersPlayedButProfile = []
for answerIndex in gform.index:
if gform.loc[answerIndex, QTemporality] == answerTemporalities[1]:
        if not pd.isnull(gform.loc[answerIndex, QAge]):
answersPlayedButProfile.append(answerIndex)
gform.loc[answersPlayedButProfile, QPlayed]
userId = gform.loc[54, 'userId']
thisUserIdsAnswers = gform[gform['userId'] == userId]
thisUserIdsAnswers[thisUserIdsAnswers[QTemporality] == answerTemporalities[0]][QAge].values[0]
gform[gform[QTemporality] == answerTemporalities[0]][QAge].unique()
# pretest ages
ages = gform[(gform[QTemporality] == answerTemporalities[0])][QAge].unique()
ages.sort()
ages
# the answers that are a problem for the analysis
AUnclassifiable = 'I played recently on an other computer'
#_gformDF[(_gformDF[QTemporality] == answerTemporalities[1]) & (_gformDF[QAge].apply(type) == str)]
gform[gform[QPlayed] == AUnclassifiable]
# various tests around setPosttestsProfileInfo
len(_gformDF[pd.isnull(_gformDF[QAge])])/len(_gformDF)
_gformDF[pd.isnull(_gformDF[QAge])][QTemporality].unique()
_gformDF[_gformDF[QTemporality] == answerTemporalities[1]][QAge].unique()
nullAge = _gformDF[pd.isnull(_gformDF[QAge])]['userId']
nullAge = _gformDF[_gformDF['userId'].isin(nullAge)]
len(nullAge)
nullAge.sort_values(QPlayed)
dates = np.unique(nullAge[QTimestamp].apply(pd.Timestamp.date).values)
dates.sort()
dates
nullAge[QTimestamp].apply(pd.Timestamp.date).value_counts().sort_index()
len(nullAge['userId'].unique())/len(gform['userId'].unique())
pretestIds = _gformDF[_gformDF[QTemporality] == answerTemporalities[0]]['userId']
posttestIds = _gformDF[_gformDF[QTemporality] == answerTemporalities[1]]['userId']
posttestsWithoutPretests = posttestIds[~posttestIds.isin(pretestIds)]
pretestsWithoutPosttests = pretestIds[~pretestIds.isin(posttestIds)]
len(posttestsWithoutPretests), len(posttestIds), len(pretestsWithoutPosttests), len(pretestIds)
intersectionIds1 = pretestIds[pretestIds.isin(posttestIds)]
intersectionIds2 = posttestIds[posttestIds.isin(pretestIds)]
_gformDF.loc[intersectionIds2.index]
len(gform) - len(getWithoutIncompleteAnswers())
_gformDF2.iloc[_gformDF2.index[pd.isnull(_gformDF2[_gformDF2.columns[survey1522DF[profileColumn]]].T).any()]]
withoutIncompleteAnswers = getWithoutIncompleteAnswers()
len(gform) - len(withoutIncompleteAnswers)
len(getWithoutIncompleteAnswers())
# tests for getPerfectPretestPostestPairs
'29b739fc-4f9f-4f5e-bfee-8ba12de4b7fa' in testUsers
_gformDF3 = getWithoutIncompleteAnswers(gform)
sortedPosttests = _gformDF3[_gformDF3[QTemporality] == answerTemporalities[1]]['userId'].value_counts()
posttestDuplicatesUserIds = sortedPosttests[sortedPosttests > 1].index
_gformDF4 = _gformDF3[_gformDF3['userId'].isin(posttestDuplicatesUserIds)].drop_duplicates(subset=['userId', QTemporality], keep='first')
_gformDF5 = _gformDF3.sort_values(['userId', QTimestamp]).drop_duplicates(subset=['userId', QTemporality], keep='first')
len(gform),len(_gformDF3),len(_gformDF4),len(_gformDF5)
gform[gform['userId'].isin(posttestDuplicatesUserIds)][[QTimestamp, 'userId', QTemporality]].sort_values(['userId', QTimestamp])
gform.iloc[getPosttestsWithoutPretests(gform)][[QTimestamp, 'userId', QTemporality]].sort_values(['userId', QTimestamp])
# tests for getPerfectPretestPostestPairs
_gformDF = gform
_gformDF2 = getWithoutIncompleteAnswers(_gformDF)
vc = _gformDF2['userId'].value_counts()
vc[vc == 1]
# remove ulterior pretests and posttests
_gformDF3 = _gformDF2.sort_values(['userId', QTimestamp]).drop_duplicates(subset=['userId', QTemporality], keep='first')
vc = _gformDF3['userId'].value_counts()
vc[vc == 1]
# only keep pretests that have matching posttests
posttestIds = _gformDF3[_gformDF3[QTemporality] == answerTemporalities[1]]['userId']
_gformDF4 = _gformDF3.drop(_gformDF3.index[~_gformDF3['userId'].isin(posttestIds)])
vc = _gformDF4['userId'].value_counts()
vc[vc == 1]
vc
Explanation: question types
End of explanation |
15,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate Reactions
This script performs the same task as the script in scripts/generateReactions.py but in visual ipynb format.
It can also evaluate the reaction forward and reverse rates at a user-selected temperature.
Step2: Declare database variables here by changing the thermo and reaction libraries, or restrict to certain reaction families.
Step4: List all species you want reactions between | Python Code:
from rmgpy.rmg.main import RMG
from rmgpy.rmg.model import CoreEdgeReactionModel
from rmgpy import settings
from IPython.display import display
from rmgpy.cantherm.output import prettify
Explanation: Generate Reactions
This script performs the same task as the script in scripts/generateReactions.py but in visual ipynb format.
It can also evaluate the reaction forward and reverse rates at a user-selected temperature.
End of explanation
database = """
database(
thermoLibraries = ['KlippensteinH2O2','SulfurLibrary', 'primaryThermoLibrary','DFT_QCI_thermo','CBS_QB3_1dHR'],
reactionLibraries = [],
seedMechanisms = [],
kineticsDepositories = 'default',
kineticsFamilies = ['Intra_R_Add_Exocyclic'], # Select a few families
# kineticsFamilies = 'all', # Or select 'all' or 'default' for the families
kineticsEstimator = 'rate rules',
)
options(
verboseComments=True, # Set to True for detailed kinetics comments
)
"""
Explanation: Declare database variables here by changing the thermo and reaction libraries, or restrict to certain reaction families.
End of explanation
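# Optional sanity check (an aside, not part of the original script): show which RMG
# database directory the libraries and families declared above will be loaded from.
print(settings['database.directory'])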
speciesList = """
species(
label = "RAD1",
structure = SMILES("CCCCCCCCCCCCc1[c]cccc1"))
species(
label = "RAD1",
structure = SMILES("CCCCCCCCCCC[CH]c1ccccc1"))
species(
label = "RAD2",
structure = SMILES("CCCCCCCCCC[CH]Cc1ccccc1"))
species(
label = "RAD3",
structure = SMILES("CCCCCCCCC[CH]CCc1ccccc1"))
species(
label = "RAD4",
structure = SMILES("CCCCCCCC[CH]CCCc1ccccc1"))
species(
label = "RAD5",
structure = SMILES("CCCCCCC[CH]CCCCc1ccccc1"))
species(
label = "RAD6",
structure = SMILES("CCCCCC[CH]CCCCCc1ccccc1"))
species(
label = "RAD7",
structure = SMILES("CCCCC[CH]CCCCCCc1ccccc1"))
species(
label = "RAD8",
structure = SMILES("CCCC[CH]CCCCCCCc1ccccc1"))
species(
label = "RAD9",
structure = SMILES("CCC[CH]CCCCCCCCc1ccccc1"))
species(
label = "RAD10",
structure = SMILES("CC[CH]CCCCCCCCCc1ccccc1"))
species(
label = "RAD11",
structure = SMILES("C[CH]CCCCCCCCCCc1ccccc1"))
species(
label = "RAD12",
structure = SMILES("[CH2]CCCCCCCCCCCc1ccccc1"))
# Write input file to disk
inputFile = open('temp/input.py','w')
inputFile.write(database)
inputFile.write(speciesList)
inputFile.close()
# Execute generate reactions
from rmgpy.tools.generate_reactions import *
rmg = RMG(inputFile='temp/input.py', outputDirectory='temp')
rmg = execute(rmg)
# Pick some temperature to evaluate the forward and reverse kinetics
T = 623.0 # K
for rxn in rmg.reactionModel.outputReactionList:
print '========================='
display(rxn)
print 'Reaction Family = {0}'.format(rxn.family)
print ''
print 'Reactants'
for reactant in rxn.reactants:
print 'Label: {0}'.format(reactant.label)
print 'SMILES: {0}'.format(reactant.molecule[0].toSMILES())
print ''
print 'Products'
for product in rxn.products:
print 'Label: {0}'.format(product.label)
print 'SMILES: {0}'.format(product.molecule[0].toSMILES())
print ''
print rxn.toChemkin()
print ''
print 'Heat of Reaction = {0:.2F} kcal/mol'.format(rxn.getEnthalpyOfReaction(623.0)/4184)
print 'Forward kinetics at {0} K: {1:.2E}'.format(T, rxn.getRateCoefficient(T))
reverseRate = rxn.generateReverseRateCoefficient()
print 'Reverse kinetics at {0} K: {1:.2E}'.format(T, reverseRate.getRateCoefficient(T))
Explanation: List all species you want reactions between
End of explanation |
15,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 14
Step1: Hash index
To prevent the same hash from being computed over and over, we maintain a hash index that stores the hash for each index. We also trim the index to remove entries for keys lower than the current key, since we no longer need those.
Step2: Part Two
Of course, in order to make this process even more secure, you've also implemented key stretching.
Key stretching forces attackers to spend more time generating hashes. Unfortunately, it forces everyone else to spend more time, too.
To implement key stretching, whenever you generate a hash, before you use it, you first find the MD5 hash of that hash, then the MD5 hash of that hash, and so on, a total of 2016 additional hashings. Always use lowercase hexadecimal representations of hashes.
For example, to find the stretched hash for index 0 and salt abc | Python Code:
import re
three_repeating_characters = re.compile(r'(.)\1{2}')
with open('../inputs/day14.txt', 'r') as f:
salt = f.readline().strip()
# TEST DATA
# salt = 'abc'
print(salt)
Explanation: Day 14: One-Time Pad
author: Harshvardhan Pandit
license: MIT
link to problem statement
In order to communicate securely with Santa while you're on this mission, you've been using a one-time pad that you generate using a pre-agreed algorithm. Unfortunately, you've run out of keys in your one-time pad, and so you need to generate some more.
To generate keys, you first get a stream of random data by taking the MD5 of a pre-arranged [salt](https://en.wikipedia.org/wiki/Salt_(cryptography)) (your puzzle input) and an increasing integer index (starting with 0, and represented in decimal); the resulting MD5 hash should be represented as a string of lowercase hexadecimal digits.
However, not all of these MD5 hashes are keys, and you need 64 new keys for your one-time pad. A hash is a key only if:
It contains three of the same character in a row, like 777. Only consider the first such triplet in a hash.
One of the next 1000 hashes in the stream contains that same character five times in a row, like 77777.
Considering future hashes for five-of-a-kind sequences does not cause those hashes to be skipped; instead, regardless of whether the current hash is a key, always resume testing for keys starting with the very next hash.
For example, if the pre-arranged salt is abc:
The first index which produces a triple is 18, because the MD5 hash of abc18 contains ...cc38887a5.... However, index 18 does not count as a key for your one-time pad, because none of the next thousand hashes (index 19 through index 1018) contain 88888.
The next index which produces a triple is 39; the hash of abc39 contains eee. It is also the first key: one of the next thousand hashes (the one at index 816) contains eeeee.
None of the next six triples are keys, but the one after that, at index 92, is: it contains 999 and index 200 contains 99999.
Eventually, index 22728 meets all of the criteria to generate the 64th key.
So, using our example salt of abc, index 22728 produces the 64th key.
Given the actual salt in your puzzle input, what index produces your 64th one-time pad key?
Solution logic
Our salt is our input; we append increasing integer indices to it and take the MD5 hash of each. If a hash contains a character repeated three times in a row, we check whether any of the next 1000 indices produces a hash containing that same character five times in a row; if one does, the integer we had is a key. Find 64 such keys, with the index of the 64th key being the answer. Seems simple and straightforward.
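As a quick sanity check of the triple-detection step (a minimal sketch, separate from the solver below; the sample fragment cc38887a5 and its triple 888 come from the worked example above, and the name triple is just a local alias for the same pattern):
import re
triple = re.compile(r'(.)\1{2}')
print(triple.findall('cc38887a5'))  # ['8'] -- '888' is the first character repeated three times in a row
print(triple.findall('abcdef'))     # [] -- no triple, so such a hash cannot be a key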
End of explanation
import hashlib
hash_index= {}
def get_hash_string(key):
if key in hash_index:
return hash_index[key]
string = '{salt}{key}'.format(salt=salt, key=key)
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
hashstring = md5.hexdigest()
hash_index[key] = hashstring
return hashstring
def run():
keys = []
current_key = 0
while(len(keys) < 64):
for i in range(0, current_key):
hash_index.pop(i, None)
hashstring = get_hash_string(current_key)
repeating_chacter = three_repeating_characters.findall(hashstring)
if not repeating_chacter:
current_key += 1
continue
repeating_chacter = repeating_chacter[0]
repeating_character_five = ''.join(repeating_chacter for i in range(0, 5))
for qualifying_index in range(current_key + 1, current_key + 1001):
hashstring = get_hash_string(qualifying_index)
if repeating_character_five in hashstring:
break
else:
current_key += 1
continue
keys.append(current_key)
print(len(keys), current_key)
current_key += 1
return keys
print('answer', run()[63])
Explanation: Hash index
To avoid computing the same hash again and again, we maintain a hash index that stores the hash for each index. We also trim the index by removing entries for keys lower than the current key, since we no longer require those.
End of explanation
hash_index = {}
def get_hash_string(key):
if key in hash_index:
return hash_index[key]
string = '{salt}{key}'.format(salt=salt, key=key)
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
hashstring = md5.hexdigest()
# PART TWO
for i in range(0, 2016):
md5 = hashlib.md5()
md5.update(hashstring.encode('ascii'))
hashstring = md5.hexdigest()
hash_index[key] = hashstring
return hashstring
print('answer', run()[63])
Explanation: Part Two
Of course, in order to make this process even more secure, you've also implemented key stretching.
Key stretching forces attackers to spend more time generating hashes. Unfortunately, it forces everyone else to spend more time, too.
To implement key stretching, whenever you generate a hash, before you use it, you first find the MD5 hash of that hash, then the MD5 hash of that hash, and so on, a total of 2016 additional hashings. Always use lowercase hexadecimal representations of hashes.
For example, to find the stretched hash for index 0 and salt abc:
Find the MD5 hash of abc0: 577571be4de9dcce85a041ba0410f29f.
Then, find the MD5 hash of that hash: eec80a0c92dc8a0777c619d9bb51e910.
Then, find the MD5 hash of that hash: 16062ce768787384c81fe17a7a60c7e3.
...repeat many times...
Then, find the MD5 hash of that hash: a107ff634856bb300138cac6568c0f24.
So, the stretched hash for index 0 in this situation is a107ff.... In the end, you find the original hash (one use of MD5), then find the hash-of-the-previous-hash 2016 times, for a total of 2017 uses of MD5.
The rest of the process remains the same, but now the keys are entirely different. Again for salt abc:
The first triple (222, at index 5) has no matching 22222 in the next thousand hashes.
The second triple (eee, at index 10) has a matching eeeee at index 89, and so it is the first key.
Eventually, index 22551 produces the 64th key (triple fff with matching fffff at index 22859).
Given the actual salt in your puzzle input and using 2016 extra MD5 calls of key stretching, what index now produces your 64th one-time pad key?
Solution logic
We only need to change the definition of get_hash_string to calculate the hash 2016 times more. And then simply run the algorithm again.
To prevent computationally intensive operations from repeating themselves, we maintain an index of hashes so that we can look up the hash for an index without needing to recalculate it.
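For reference, the stretching step can also be written as a small standalone helper (a sketch only; the helper name stretched_hash is illustrative, the example salt abc and the expected prefix a107ff come from the worked example above):
import hashlib
def stretched_hash(salt, index, extra_rounds=2016):
    digest = hashlib.md5('{0}{1}'.format(salt, index).encode('ascii')).hexdigest()
    for _ in range(extra_rounds):
        digest = hashlib.md5(digest.encode('ascii')).hexdigest()
    return digest
# One initial MD5 plus 2016 re-hashings = 2017 uses of MD5 in total.
print(stretched_hash('abc', 0)[:6])  # expected to print a107ff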
End of explanation |
15,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 10
ML Techniques
Step1: There are a number of different features here. We'll focus on the first two
Step2: Standardization scaling
As we noted above, the goal is to turn these features into unitless parameters. That means we have to divide the feature by a quantity with the same units. There are two typical ways of doing this
Step3: We've accomplished what we set out to do
Step4: It looks like we've re-scaled the data without changing the basic shape or the relationships between the points. That's good. The standardized data can now be used as inputs for the machine learning algorithms.
Min-Max Scaling
One of the things we didn't mention before was that the standardization scaling only really works if we have enough data to get a good standard deviation and if the data are normally distributed (i.e. they look like a bell-shaped curve). If these things don't apply, then standardization may not be the best thing to do.
There is another way we could scale the data
Step5: As you can see, we've really shifted things around. The means are not very pretty (because the dataset has been shrunk to fit between 0 and 1). Additionally the standard deviation has changed and isn't very pretty, either. Let's plot it to see if the shape has changed. | Python Code:
import pandas as pd
df = pd.read_csv('Class10_wine_data.csv')
df.head()
Explanation: Class 10
ML Techniques: Feature scaling
Another aspect of optimizing machine learning algorithms is to think about feature scaling. When we use multiple numeric features as inputs to a regression or classification algorithm, the computer just sees those values as numbers without context or units. What if we have the data that has one column corresponding to a driver's age and another column that corresponds to the vehicle gross weight. Those are very different sets of numbers. Without doing any other scaling or pre-processing, the machine learning algorithm may emphasize the vehicle weights more than the ages because they are larger numbers. We don't want that to happen.
The process of changing the data scale is called feature scaling and we'll look at two different types of feature scaling. If you want a good in-depth tutorial on how this works, I recommend this tutorial.
Working with Different Data Types
We'll use a sample dataset from the UCI archives. This dataset is looking at various characteristics of different wine samples and has classified the samples into one of three different classes (1,2, and 3). Let's import the data and take a look at it.
End of explanation
# Plot the first two feature columns
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
#plt.figure(figsize=(8,6))
plt.scatter(df['Alcohol'], df['Malic acid'])
plt.xlabel('Alcohol (%/L)')
plt.ylabel('Malic Acid (g/L)')
plt.xlim(0,16)
plt.ylim(0,16)
plt.axes().set_aspect('equal')
Explanation: There are a number of different features here. We'll focus on the first two: Alcohol, which has units (percent/volume), and Malic acid, with units (g/L). Any machine learning algorithm that uses these features is going to treat them as if they were on the same scale. However, they aren't. So we need to re-scale them to unitless values in order to work with them.
Let's first look at the distribution of points in those columns.
End of explanation
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler().fit(df[['Alcohol', 'Malic acid']])
df_std = std_scaler.transform(df[['Alcohol', 'Malic acid']])
print('Mean before standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].mean(), df['Malic acid'].mean()))
print('\nMean after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_std[:,0].mean(), df_std[:,1].mean()))
print('\nStandard deviation before standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].std(), df['Malic acid'].std()))
print('\nStandard deviation after standardization:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_std[:,0].std(), df_std[:,1].std()))
Explanation: Standardization scaling
As we noted above, the goal is to turn these features into unitless parameters. That means we have to divide the feature by a quantity with the same units. There are two typical ways of doing this: we'll start with standardization or Z-scale normalization. We need to calculate the standard deviation of the feature, then divide the entire column by the standard deviation. Because the standard deviation has the same units as the feature itself, this takes care of the normalization.
We'll also do one more thing: we'll subtract the mean value of the distribution from each data point first. That way we end up with a distribution that is centered at zero. Of course there is a library that will do this for us. We need to fit the library on the data we want to scale, then use the transform() function to transform the input data based on the scaler. We can do both features at the same time: the scaler keeps track of them individually.
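Written as a formula (the usual z-score form, with $\mu$ the feature's mean and $\sigma$ its standard deviation):
$$x_{std} = \frac{x - \mu}{\sigma}$$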
End of explanation
fig, ax = plt.subplots(1,2)
for a,d,l in zip(range(len(ax)),
(df[['Alcohol', 'Malic acid']].values, df_std),
('Input scale',
'Standardized [$N (\mu=0, \; \sigma=1)$]')
):
for i,c in zip(range(1,4), ('red', 'blue', 'green')):
ax[a].scatter(d[df['Wine Class'].values == i, 0],
d[df['Wine Class'].values == i, 1],
alpha=0.5,
color=c,
label='Class %s' %i
)
ax[a].set_aspect('equal')
ax[a].set_title(l)
ax[a].set_xlabel('Alcohol')
ax[a].set_ylabel('Malic Acid')
ax[a].legend(loc='upper left')
ax[a].grid()
plt.tight_layout()
Explanation: We've accomplished what we set out to do: the new means are close to zero and the standard deviations are 1. Let's see how it changed the shape of the distributions.
End of explanation
from sklearn.preprocessing import MinMaxScaler
minmax_scaler = MinMaxScaler().fit(df[['Alcohol', 'Malic acid']])
df_minmax = minmax_scaler.transform(df[['Alcohol', 'Malic acid']])
print('Mean before min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].mean(), df['Malic acid'].mean()))
print('\nMean after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_minmax[:,0].mean(), df_minmax[:,1].mean()))
print('\nStandard deviation before min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df['Alcohol'].std(), df['Malic acid'].std()))
print('\nStandard deviation after min-max scaling:\nAlcohol={:.2f}, Malic acid={:.2f}'
.format(df_minmax[:,0].std(), df_minmax[:,1].std()))
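# Note: below, the scaler is fit on the training split only and then reused to transform
# both the training and test features, so the test data never influences the scaling.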
from sklearn.model_selection import train_test_split
train, test = train_test_split(df, test_size=0.2, random_state=23)
minmax_scaler = MinMaxScaler().fit(train[['Alcohol', 'Malic acid']])
train_features = minmax_scaler.transform(train[['Alcohol', 'Malic acid']])
train_features[0:2]
test[['Alcohol' ,'Malic acid']].head(2)
test_features = minmax_scaler.transform(test[['Alcohol', 'Malic acid']])
test_features[0:2]
Explanation: It looks like we've re-scaled the data without changing the basic shape or the relationships between the points. That's good. The standardized data can now be used as inputs for the machine learning algorithms.
Min-Max Scaling
One of the things we didn't mention before was that the standardization scaling only really works if we have enough data to get a good standard deviation and if the data are normally distributed (i.e. they look like a bell-shaped curve). If these things don't apply, then standardization may not be the best thing to do.
There is another way we could scale the data: we could figure out the "distance" between the maximum and the minimum and then scale the data based on this "distance". Of course we should also subtract off the minimum point. The net result is that now the entire dataset lies between 0 and 1. Let's do this and see how it looks.
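In formula form (with $x_{min}$ and $x_{max}$ the smallest and largest values of the feature):
$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$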
End of explanation
fig, ax = plt.subplots(1,2)
for a,d,l in zip(range(len(ax)),
(df[['Alcohol', 'Malic acid']].values, df_minmax),
('Input scale',
'Min-max scale')
):
for i,c in zip(range(1,4), ('red', 'blue', 'green')):
ax[a].scatter(d[df['Wine Class'].values == i, 0],
d[df['Wine Class'].values == i, 1],
alpha=0.5,
color=c,
label='Class %s' %i
)
ax[a].set_aspect('equal')
ax[a].set_title(l)
ax[a].set_xlabel('Alcohol')
ax[a].set_ylabel('Malic Acid')
ax[a].legend(loc='upper left')
ax[a].grid()
plt.tight_layout()
Explanation: As you can see, we've really shifted things around. The means are not very pretty (because the dataset has been shrunk to fit between 0 and 1). Additionally the standard deviation has changed and isn't very pretty, either. Let's plot it to see if the shape has changed.
End of explanation |
15,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project 2
Step1: 1. Murder rates
Punishment for crime has many philosophical justifications. An important one is that fear of punishment may deter people from committing crimes.
In the United States, some jurisdictions execute some people who are convicted of particularly serious crimes, like murder. This punishment is called the death penalty or capital punishment. The death penalty is controversial, and deterrence has been one focal point of the debate. There are other reasons to support or oppose the death penalty, but in this project we'll focus on deterrence.
The key question about deterrence is
Step2: So far, this looks like a dataset that lends itself to an observational study. In fact, these data aren't even enough to demonstrate an association between the existence of the death penalty in a state in a year and the murder rate in that state and year!
Question 1.1. What additional information will we need before we can check for that association?
Write your answer here, replacing this text.
Murder rates vary over time, and different states exhibit different trends. The rates in some states change dramatically from year to year, while others are quite stable. Let's plot a couple, just to see the variety.
Question 1.2. Draw a line plot with years on the horizontal axis and murder rates on the
vertical axis. Include two lines
Step3: A reminder about tests
The automated tests check for basic errors (like the number of rows in your ak_mn table, or whether you defined a function named most_murderous for the next question), but they aren't comprehensive.
If you're not sure that your answer is correct, think about how you can check it. For example, if a table has the right number of rows and columns, and a few randomly-selected values from each column are correct, then you can be somewhat confident you've computed it correctly. For the previous question, try checking some of the values in ak_mn manually, by searching through the murder_rates table.
Question 1.3. Implement the function most_murderous, which takes a year (an integer) as its argument. It does two things
Step4: Question 1.4. How many more people were murdered in California in 1988 than in 1975? Assign ca_change to the answer.
Hint
Step5: Certain mistakes would make your answer to the previous question way too small or way too big, and the automatic tests don't check that. Make sure your answer looks reasonable after carefully reading the question.
2. Changes in Murder Rates
Murder rates vary widely across states and years, presumably due to the vast array of differences among states and across US history. Rather than attempting to analyze rates themselves, here we will restrict our analysis to whether or not murder rates increased or decreased over certain time spans. We will not concern ourselves with how much rates increased or decreased; only the direction of the change - whether they increased or decreased.
The np.diff function takes an array of values and computes the differences between adjacent items of a list or array. Instead, we may wish to compute the difference between items that are two positions apart. For example, given a 5-element array, we may want
Step6: Question 2.1. Implement the function two_year_changes that takes an array of murder rates for a state, ordered by increasing year. For all two-year periods (e.g., from 1960 to 1962), it computes and returns the number of increases minus the number of decreases.
For example, the rates r = make_array(10, 7, 12, 9, 13, 9, 11) contain three increases (10 to 12, 7 to 9, and 12 to 13), one decrease (13 to 11), and one change that is neither an increase or decrease (9 to 9). Therefore, two_year_changes(r) would return 2, the difference between three increases and 1 decrease.
Step7: We can use two_year_changes to summarize whether rates are mostly increasing or decreasing over time for some state or group of states. Let's see how it varies across the 50 US states.
Question 2.2. Assign changes_by_state to a table with one row per state that has two columns
Step8: Some states have more increases than decreases (a positive number), while some have more decreases than increases (a negative number).
Question 2.3. Assign total_changes to the total increases minus the total decreases for all two-year periods and all states in our data set.
Step9: "More increases than decreases," one student exclaims, "Murder rates tend to go up across two-year periods. What dire times we live in."
"Not so fast," another student replies, "Even if murder rates just moved up and down uniformly at random, there would be some difference between the increases and decreases. There were a lot of states and a lot of years, so there were many chances for changes to happen. Perhaps this difference we observed is a typical value when so many changes are observed if the state murder rates increase and decrease at random!"
Question 2.4. Set num_changes to the number of different two-year periods in the entire data set that could result in a change of a state's murder rate. Include both those periods where a change occurred and the periods where a state's rate happened to stay the same.
For example, 1968 to 1970 of Alaska would count as one distinct two-year period.
Step10: We now have enough information to perform a hypothesis test.
Null Hypothesis
Step12: Question 2.6. Complete the simulation below, which samples num_changes increases/decreases at random many times and forms an empirical distribution of your test statistic under the null hypothesis. Your job is to
* fill in the function simulate_under_null, which simulates a single sample under the null hypothesis, and
* fill in its argument when it's called below.
Step14: Question 2.7. Looking at this histogram, draw a conclusion about whether murder rates basically increase as often as they decrease. (You do not need to compute a P-value for this question.)
Write your answer here, replacing this text.
3. The death penalty
Some US states have the death penalty, and others don't, and laws have changed over time. In addition to changes in murder rates, we will also consider whether the death penalty was in force in each state and each year.
Using this information, we would like to investigate how the death penalty affects the murder rate of a state.
Question 3.1. Describe this investigation in terms of an experiment. What population are we studying? What is the control group? What is the treatment group? What outcome are we measuring?
Write your answers below.
Population
Step15: Question 3.3. Assign death_penalty_murder_rates to a table with the same columns and data as murder_rates, but that has only the rows for states that had the death penalty in 1971.
The first 2 rows of your table should look like this
Step16: The null hypothesis doesn't specify how the murder rate changes; it only talks about increasing or decreasing. So, we will use the same test statistic you defined in section 2.
Question 3.4. Assign changes_72 to the value of the test statistic for the years 1971 to 1973 and the states in death_penalty_murder_rates.
Hint
Step17: Look at the data (or perhaps a random sample!) to verify that your answer is correct.
Question 3.5.
Step18: Conclusion
Question 3.6. Complete the analysis as follows
Step20: 4. Further evidence
So far, we have discovered evidence that when executions were outlawed, the murder rate increased in many more states than we would expect from random chance. We have also seen that across all states and all recent years, the murder rate goes up about as much as it goes down over two-year periods.
These discoveries seem to support the claim that eliminating the death penalty increases the murder rate. Should we be convinced? Let's conduct some more tests to strengthen our claim.
Conducting a test for this data set required the following steps
Step21: The rest of the states
We found a dramatic increase in murder rates for those states affected by the 1972 Supreme Court ruling, but what about the rest of the states? There were six states that had already outlawed execution at the time of the ruling.
Question 4.2. Create a table called non_death_penalty_murder_rates with the same columns as murder_rates but only containing rows for the six states without the death penalty in 1971. Perform the same test on this table. Then, in one sentence, conclude whether their murder rates were also more likely to increase from 1971 to 1973.
Step22: Write your answer here, replacing this text.
Step23: The death penalty reinstated
In 1976, the Supreme Court repealed its ban on the death penalty in its rulings on a series of cases including Gregg v. Georgia, so the death penalty was reinstated where it was previously banned. This generated a second natural experiment. To the extent that the death penalty deters murder, reinstating it should decrease murder rates, just as banning it should increase them. Let's see what happened.
Step24: Hint
Step25: Question 5.2. Describe in one short sentence a high-level takeaway from the line plot below. Are the murder rates in these two groups of states related?
Step26: Write your answer here, replacing this text.
Let's bring in another source of information
Step27: The line plot we generated above is similar to a figure from the paper.
<img src="paper_plot.png"/>
Canada has not executed a criminal since 1962. Since 1967, the only crime that can be punished by execution in Canada is the murder of on-duty law enforcement personnel. The paper states, "The most striking finding is that the homicide rate in Canada has moved in
virtual lockstep with the rate in the United States."
Question 5.4. Complete their argument in 2-3 sentences; what features of these plots indicate that the death penalty is not an important factor in determining the murder rate? (If you're stuck, read the paper.)
Write your answer here, replacing this text.
Question 5.5. What assumption(s) did we make in Parts 1 through 4 of the project that led us to believe that the death penalty deterred murder, when in fact the line plots tell a different story?
Write your answer here, replacing this text.
You're done! Congratulations. | Python Code:
# Run this cell to set up the notebook, but please don't change it.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
from client.api.assignment import load_assignment
tests = load_assignment('project2.ok')
Explanation: Project 2: Inference and Capital Punishment
Welcome to Project 2! You will investigate the relationship between murder and capital punishment (the death penalty) in the United States. By the end of the project, you should know how to:
Test whether observed data appears to be a random sample from a distribution
Analyze a natural experiment
Implement and interpret a sign test
Create a function to run a general hypothesis test
Analyze visualizations and draw conclusions from them
Administrivia
Piazza
While collaboration is encouraged on this and other assignments, sharing answers is never okay. In particular, posting code or other assignment answers publicly on Piazza (or elsewhere) is academic dishonesty. It will result in a reduced project grade at a minimum. If you wish to ask a question and include code, you must make it a private post.
Partners
You may complete the project with up to one partner. Partnerships are an exception to the rule against sharing answers. If you have a partner, one person in the partnership should submit your project on Gradescope and include the other partner in the submission. (Gradescope will prompt you to fill this in.)
Your partner must be in your lab section. You can ask your TA to pair you with someone from your lab if you’re unable to find a partner. (That will happen in lab the week the project comes out.)
Due Date and Checkpoint
Part of the project will be due early. Parts 1 and 2 of the project (out of 5) are due Tuesday, November 1st at 7PM. Unlike the final submission, this early checkpoint will be graded for completion. It will be worth approximately 10% of the total project grade. Simply submit your partially-completed notebook as a PDF, as you would submit any other notebook. (See the note above on submitting with a partner.)
The entire project (parts 1, 2, 3, 4, and 5) will be due Tuesday, November 9th at 7PM. (Again, see the note above on submitting with a partner.)
On to the project!
Run the cell below to prepare the automatic tests. The automated tests for this project definitely don't catch all possible errors; they're designed to help you avoid some common mistakes. Merely passing the tests does not guarantee full credit on any question.
End of explanation
murder_rates = Table.read_table('crime_rates.csv').select('State', 'Year', 'Population', 'Murder Rate')
murder_rates.set_format("Population", NumberFormatter)
Explanation: 1. Murder rates
Punishment for crime has many philosophical justifications. An important one is that fear of punishment may deter people from committing crimes.
In the United States, some jurisdictions execute some people who are convicted of particularly serious crimes, like murder. This punishment is called the death penalty or capital punishment. The death penalty is controversial, and deterrence has been one focal point of the debate. There are other reasons to support or oppose the death penalty, but in this project we'll focus on deterrence.
The key question about deterrence is:
Does instituting a death penalty for murder actually reduce the number of murders?
You might have a strong intuition in one direction, but the evidence turns out to be surprisingly complex. Different sides have variously argued that the death penalty has no deterrent effect and that each execution prevents 8 murders, all using statistical arguments! We'll try to come to our own conclusion.
Here is a road map for this project:
In the rest of this section, we'll investigate the main dataset we'll be using.
In section 2, we'll see how to test null hypotheses like this: "For this set of U.S. states, the murder rate was equally likely to go up or down each year."
In section 3, we'll apply a similar test to see whether U.S. states that suddenly ended or reinstituted the death penalty were more likely to see murder rates increase than decrease.
In section 4, we will run some more tests to further claims we had been developing in previous sections.
In section 5, we'll try to answer our question about deterrence using a visualization rather than a formal hypothesis test.
The data
The main data source for this project comes from a paper by three researchers, Dezhbakhsh, Rubin, and Shepherd. The dataset contains rates of various violent crimes for every year 1960-2003 (44 years) in every US state. The researchers compiled their data from the FBI's Uniform Crime Reports.
Since crimes are committed by people, not states, we need to account for the number of people in each state when we're looking at state-level data. Murder rates are calculated as follows:
$$\text{murder rate for state X in year Y} = \frac{\text{number of murders in state X in year Y}}{\text{population in state X in year Y}}*100000$$
(Murder is rare, so we multiply by 100,000 just to avoid dealing with tiny numbers.)
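To make the units concrete (a made-up example, not a state from the dataset): 100 murders in a state of 5,000,000 people works out to (100 / 5,000,000) * 100,000 = 2 murders per 100,000 residents.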
End of explanation
# The next lines are provided for you. They create a table
# containing only the Alaska information and one containing
# only the Minnesota information.
ak = murder_rates.where('State', 'Alaska').drop('State', 'Population').relabeled(1, 'Murder rate in Alaska')
mn = murder_rates.where('State', 'Minnesota').drop('State', 'Population').relabeled(1, 'Murder rate in Minnesota')
# Fill in this line to make a table like the one pictured above.
ak_mn = ...
ak_mn.plot('Year')
_ = tests.grade('q1_1_2')
Explanation: So far, this looks like a dataset that lends itself to an observational study. In fact, these data aren't even enough to demonstrate an association between the existence of the death penalty in a state in a year and the murder rate in that state and year!
Question 1.1. What additional information will we need before we can check for that association?
Write your answer here, replacing this text.
Murder rates vary over time, and different states exhibit different trends. The rates in some states change dramatically from year to year, while others are quite stable. Let's plot a couple, just to see the variety.
Question 1.2. Draw a line plot with years on the horizontal axis and murder rates on the
vertical axis. Include two lines: one for Alaska murder rates and one for Minnesota murder rates. Create this plot using a single call, ak_mn.plot('Year').
Hint: To create two lines, you will need create the table ak_mn with two columns of murder rates, in addition to a column of years. You can use join to create this table, which will have the following structure:
| Year | Murder rate in Alaska | Murder rate in Minnesota |
|------|-----------------------|--------------------------|
| 1960 | 10.2 | 1.2 |
| 1961 | 11.5 | 1 |
| 1962 | 4.5 | 0.9 |
End of explanation
def most_murderous(year):
# Fill in this line so that the next 2 lines do what the function
# is supposed to do. most should be a table.
most = ...
most.barh('State', 'Murder Rate')
return most.column('State')
most_murderous(1990)
_ = tests.grade('q1_1_3')
Explanation: A reminder about tests
The automated tests check for basic errors (like the number of rows in your ak_mn table, or whether you defined a function named most_murderous for the next question), but they aren't comprehensive.
If you're not sure that your answer is correct, think about how you can check it. For example, if a table has the right number of rows and columns, and a few randomly-selected values from each column are correct, then you can be somewhat confident you've computed it correctly. For the previous question, try checking some of the values in ak_mn manually, by searching through the murder_rates table.
Question 1.3. Implement the function most_murderous, which takes a year (an integer) as its argument. It does two things:
1. It draws a horizontal bar chart of the 5 states that had the highest murder rate in that year.
2. It returns an array of the names of these states in order of increasing murder rate.
If the argument isn't a year in murder_rates, your function can do anything.
End of explanation
ca = murder_rates.where('State', are.equal_to('California'))
ca_change = ...
np.round(ca_change)
_ = tests.grade('q1_1_4')
Explanation: Question 1.4. How many more people were murdered in California in 1988 than in 1975? Assign ca_change to the answer.
Hint: Consider using the formula in the beginning of the section to answer this question.
End of explanation
def diff_n(values, n):
return np.array(values)[n:] - np.array(values)[:-n]
diff_n(make_array(1, 10, 100, 1000, 10000), 2)
Explanation: Certain mistakes would make your answer to the previous question way too small or way too big, and the automatic tests don't check that. Make sure your answer looks reasonable after carefully reading the question.
2. Changes in Murder Rates
Murder rates vary widely across states and years, presumably due to the vast array of differences among states and across US history. Rather than attempting to analyze rates themselves, here we will restrict our analysis to whether or not murder rates increased or decreased over certain time spans. We will not concern ourselves with how much rates increased or decreased; only the direction of the change - whether they increased or decreased.
The np.diff function takes an array of values and computes the differences between adjacent items of a list or array. Instead, we may wish to compute the difference between items that are two positions apart. For example, given a 5-element array, we may want:
[item 2 - item 0 , item 3 - item 1 , item 4 - item 2]
The diff_n function below computes this result. Don't worry if the implementation doesn't make sense to you, as long as you understand its behavior.
End of explanation
def two_year_changes(rates):
"Return the number of increases minus the number of decreases after two years."
...
print('Alaska:', two_year_changes(ak.column('Murder rate in Alaska')))
print('Minnesota:', two_year_changes(mn.column('Murder rate in Minnesota')))
_ = tests.grade('q1_2_1')
Explanation: Question 2.1. Implement the function two_year_changes that takes an array of murder rates for a state, ordered by increasing year. For all two-year periods (e.g., from 1960 to 1962), it computes and returns the number of increases minus the number of decreases.
For example, the rates r = make_array(10, 7, 12, 9, 13, 9, 11) contain three increases (10 to 12, 7 to 9, and 12 to 13), one decrease (13 to 11), and one change that is neither an increase or decrease (9 to 9). Therefore, two_year_changes(r) would return 2, the difference between three increases and 1 decrease.
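One way to check this example by hand is to reuse the diff_n helper defined earlier (a sketch for verification only, not necessarily the intended implementation of two_year_changes):
r = make_array(10, 7, 12, 9, 13, 9, 11)
diffs = diff_n(r, 2)                                               # array([ 2,  2,  1,  0, -2])
print(np.count_nonzero(diffs > 0) - np.count_nonzero(diffs < 0))   # prints 2, matching the example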
End of explanation
changes_by_state = ...
# Here is a histogram of the two-year changes for the states.
# Since there are 50 states, each state contributes 2% to one
# bar.
changes_by_state.hist("Murder Rate two_year_changes", bins=np.arange(-11, 12, 2))
_ = tests.grade('q1_2_2')
Explanation: We can use two_year_changes to summarize whether rates are mostly increasing or decreasing over time for some state or group of states. Let's see how it varies across the 50 US states.
Question 2.2. Assign changes_by_state to a table with one row per state that has two columns: the State name and the Murder Rate two_year_changes statistic computed across all years in our data set for that state. Its first 2 rows should look like this:
|State|Murder Rate two_year_changes|
|-|-|
|Alabama|-6|
|Alaska|-5||
End of explanation
total_changes = ...
print('Total increases minus total decreases, across all states and years:', total_changes)
Explanation: Some states have more increases than decreases (a positive number), while some have more decreases than increases (a negative number).
Question 2.3. Assign total_changes to the total increases minus the total decreases for all two-year periods and all states in our data set.
End of explanation
num_changes = ...
num_changes
_ = tests.grade('q1_2_4')
Explanation: "More increases than decreases," one student exclaims, "Murder rates tend to go up across two-year periods. What dire times we live in."
"Not so fast," another student replies, "Even if murder rates just moved up and down uniformly at random, there would be some difference between the increases and decreases. There were a lot of states and a lot of years, so there were many chances for changes to happen. Perhaps this difference we observed is a typical value when so many changes are observed if the state murder rates increase and decrease at random!"
Question 2.4. Set num_changes to the number of different two-year periods in the entire data set that could result in a change of a state's murder rate. Include both those periods where a change occurred and the periods where a state's rate happened to stay the same.
For example, 1968 to 1970 of Alaska would count as one distinct two-year period.
End of explanation
uniform = Table().with_columns(
"Change", make_array('Increase', 'Decrease'),
"Chance", make_array(0.5, 0.5))
uniform.sample_from_distribution('Chance', 100)
Explanation: We now have enough information to perform a hypothesis test.
Null Hypothesis: State murder rates increase and decrease over two-year periods as if
"increase" or "decrease" were sampled at random from a uniform distribution, like a fair coin flip.
Since it's possible that murder rates are more likely to go up or more likely to go down, our alternative hypothesis should contemplate either case:
Alternative Hypothesis: State murder rates are either more likely or less likely to increase than decrease over two-year periods.
Technical note: These changes in murder rates are not random samples from any population. They describe all murders in all states over all recent years. However, we can imagine that history could have been different, and that the observed changes are the values observed in only one possible world: the one that happened to occur. In this sense, we can evaluate whether the observed "total increases minus total decreases" is consistent with a hypothesis that increases and decreases are drawn at random from a uniform distribution.
Question 2.5 Given these null and alternative hypotheses, define a good test statistic.
Important requirements for your test statistic: Choose a test statistic for which large positive values are evidence in favor of the alternative hypothesis, and other values are evidence in favor of the null hypothesis. Your test statistic should depend only on whether murder rates increased or decreased, not on the size of any change.
Write your answer here, replacing this text.
The cell below samples increases and decreases at random from a uniform distribution 100 times. The final column of the resulting table gives the number of increases and decreases that resulted from sampling in this way.
End of explanation
def simulate_under_null(num_chances_to_change):
Simulates some number changing several times, with an equal
chance to increase or decrease. Returns the value of your
test statistic for these simulated changes.
num_chances_to_change is the number of times the number changes.
...
uniform_samples = make_array()
for i in np.arange(5000):
uniform_samples = np.append(uniform_samples, simulate_under_null(...))
# Feel free to change the bins if they don't make sense for your test statistic.
Table().with_column('Test statistic under null', uniform_samples).hist(0, bins=np.arange(-100, 400+25, 25))
Explanation: Question 2.6. Complete the simulation below, which samples num_changes increases/decreases at random many times and forms an empirical distribution of your test statistic under the null hypothesis. Your job is to
* fill in the function simulate_under_null, which simulates a single sample under the null hypothesis, and
* fill in its argument when it's called below.
End of explanation
non_death_penalty_states = make_array('Alaska', 'Hawaii', 'Maine', 'Michigan', 'Wisconsin', 'Minnesota')
def had_death_penalty_in_1971(state):
Returns True if the argument is the name of a state that had the death penalty in 1971.
# The implementation of this function uses a bit of syntax
# we haven't seen before. Just trust that it behaves as its
# documentation claims.
return state not in non_death_penalty_states
states = murder_rates.group('State').select('State')
death_penalty = states.with_column('Death Penalty', states.apply(had_death_penalty_in_1971, 0))
death_penalty
num_death_penalty_states = death_penalty.where("Death Penalty", are.equal_to(True)).num_rows
num_death_penalty_states
Explanation: Question 2.7. Looking at this histogram, draw a conclusion about whether murder rates basically increase as often as they decrease. (You do not need to compute a P-value for this question.)
Write your answer here, replacing this text.
3. The death penalty
Some US states have the death penalty, and others don't, and laws have changed over time. In addition to changes in murder rates, we will also consider whether the death penalty was in force in each state and each year.
Using this information, we would like to investigate how the death penalty affects the murder rate of a state.
Question 3.1. Describe this investigation in terms of an experiment. What population are we studying? What is the control group? What is the treatment group? What outcome are we measuring?
Write your answers below.
Population: ...
Control Group: ...
Treatment Group: ...
Outcome: ...
Question 3.2. We want to know whether the death penalty causes a change in the murder rate. Why is it not sufficient to compare murder rates in places and times when the death penalty was in force with places and times when it wasn't?
Write your answer here, replacing this text.
A Natural Experiment
In order to attempt to investigate the causal relationship between the death penalty and murder rates, we're going to take advantage of a natural experiment. A natural experiment happens when something other than experimental design applies a treatment to one group and not to another (control) group, and we can reasonably expect that the treatment and control groups don't have any other systematic differences.
Our natural experiment is this: in 1972, a Supreme Court decision called Furman v. Georgia banned the death penalty throughout the US. Suddenly, many states went from having the death penalty to not having the death penalty.
As a first step, let's see how murder rates changed before and after the court decision. We'll define the test as follows:
Population: All the states that had the death penalty before the 1972 abolition. (There is no control group for the states that already lacked the death penalty in 1972, so we must omit them.) This includes all US states except Alaska, Hawaii, Maine, Michigan, Wisconsin, and Minnesota.
Treatment group: The states in that population, in the year after 1972.
Control group: The states in that population, in the year before 1972.
Null hypothesis: Each state's murder rate was equally likely to be higher or lower in the treatment period than in the control period. (Whether the murder rate increased or decreased in each state was like the flip of a fair coin.)
Alternative hypothesis: The murder rate was more likely to increase or more likely to decrease.
Technical Note: It's not clear that the murder rates were a "sample" from any larger population. Again, it's useful to imagine that our data could have come out differently and to test the null hypothesis that the murder rates were equally likely to move up or down.
The death_penalty table below describes whether each state allowed the death penalty in 1971.
End of explanation
# The staff solution used 3 lines of code.
death_penalty_murder_rates = ...
death_penalty_murder_rates
Explanation: Question 3.3. Assign death_penalty_murder_rates to a table with the same columns and data as murder_rates, but that has only the rows for states that had the death penalty in 1971.
The first 2 rows of your table should look like this:
|State|Year|Population|Murder Rate|
|-----|----|----------|-----------|
|Alabama|1960|3266740|12.4|
|Alabama|1961|3302000|12.9|
End of explanation
# The staff solution took 5 lines of code.
test_stat_72 = ...
print('Increases minus decreases from 1971 to 1973:', test_stat_72)
Explanation: The null hypothesis doesn't specify how the murder rate changes; it only talks about increasing or decreasing. So, we will use the same test statistic you defined in section 2.
Question 3.4. Assign changes_72 to the value of the test statistic for the years 1971 to 1973 and the states in death_penalty_murder_rates.
Hint: You have already written nearly the same code in a previous part of this project.
End of explanation
samples = make_array()
for i in np.arange(10000):
samples = ...
# Feel free to change the bins if they don't make sense for your test statistic.
Table().with_column('Test statistic under null', samples).hist(bins=np.arange(-4, 28+2, 2))
_ = tests.grade('q1_3_5')
Explanation: Look at the data (or perhaps a random sample!) to verify that your answer is correct.
Question 3.5.: Draw an empirical histogram of the statistic under the null hypothesis by simulating the test statistic 5,000 times.
Hint: In a previous part of this project, you have already written a function that runs such a simulation once.
End of explanation
# Use this cell to compute the P-value, if you wish.
Explanation: Conclusion
Question 3.6. Complete the analysis as follows:
1. Compute a P-value.
2. Draw a conclusion about the null and alternative hypotheses.
3. Describe your findings using simple, non-technical language. Be careful not to claim that the statistical analysis has established more than it really has.
P-value: ...
Conclusion about the hypotheses: ...
Findings: ...
End of explanation
def run_test(rates, start_year):
Return a P-value for the observed difference between increases and decreases.
end_year = start_year + 2
observed_test_statistic = ...
print('Test statistic', start_year, 'to', end_year, ':', observed_test_statistic)
num_states = rates.group('State').num_rows
samples = make_array()
for i in np.arange(5000):
samples = ...
...
run_test(death_penalty_murder_rates, 1971)
_ = tests.grade('q1_4_1')
Explanation: 4. Further evidence
So far, we have discovered evidence that when executions were outlawed, the murder rate increased in many more states than we would expect from random chance. We have also seen that across all states and all recent years, the murder rate goes up about as much as it goes down over two-year periods.
These discoveries seem to support the claim that eliminating the death penalty increases the murder rate. Should we be convinced? Let's conduct some more tests to strengthen our claim.
Conducting a test for this data set required the following steps:
Select a table containing murder rates for certain states and all years,
Choose two years and compute the observed value of the test statistic,
Simulate the test statistic under the null hypothesis that increases and decreases are drawn uniformly at random, then
Compare the observed difference to the empirical distribution to compute a P-value.
This entire process can be expressed in a single function, called run_test.
Question 4.1. Implement run_test, which takes the following arguments:
A table of murder rates for certain states, sorted by state and year like murder_rates, and
the year when the analysis starts. (The comparison group is two years later.)
It prints out the observed test statistic and returns the P-value for this statistic under the null hypothesis.
Hint 1: You can complete most of this question by copying code you wrote earlier.
Hint 2: This problem might seem daunting. Start by writing out the different steps involved in running a test.
End of explanation
non_death_penalty_murder_rates = ...
run_test(non_death_penalty_murder_rates, 1971)
Explanation: The rest of the states
We found a dramatic increase in murder rates for those states affected by the 1972 Supreme Court ruling, but what about the rest of the states? There were six states that had already outlawed execution at the time of the ruling.
Question 4.2. Create a table called non_death_penalty_murder_rates with the same columns as murder_rates but only containing rows for the six states without the death penalty in 1971. Perform the same test on this table. Then, in one sentence, conclude whether their murder rates were also more likely to increase from 1971 to 1973.
End of explanation
_ = tests.grade('q1_4_2')
Explanation: Write your answer here, replacing this text.
End of explanation
print("Increases minus decreases from 1975 to 1977 (when the death penalty was reinstated) among death penalty states:",
sum(death_penalty_murder_rates.where('Year', are.between_or_equal_to(1975, 1977))
.group('State', two_year_changes)
.column("Murder Rate two_year_changes")))
run_test(death_penalty_murder_rates, 1975)
Explanation: The death penalty reinstated
In 1976, the Supreme Court repealed its ban on the death penalty in its rulings on a series of cases including Gregg v. Georgia, so the death penalty was reinstated where it was previously banned. This generated a second natural experiment. To the extent that the death penalty deters murder, reinstating it should decrease murder rates, just as banning it should increase them. Let's see what happened.
End of explanation
# For reference, our solution used 5 method calls
average_murder_rates = ...
average_murder_rates
Explanation: Hint: To check your results, figure out what your test statistic should be when there are 18 more decreases than increases, and verify that that's the test statistic that was printed. Also, you should have found a P-value near 0.01. If your P-value is very different, go back and inspect your run_test implementation and your test statistic to make sure that it correctly produces low P-values when there are many more decreases than increases.
Question 4.3. Now we've analyzed states where the death penalty went away and came back, as well as states where the death penalty was outlawed all along. What do you conclude from the results of the tests we have conducted so far? Does all the evidence consistently point toward one conclusion, or is there a contradiction?
Write your answer here, replacing this text.
5. Visualization
While our analysis appears to support the conclusion that the death penalty deters murder, a 2006 Stanford Law Review paper argues the opposite: that historical murder rates do not provide evidence that the death penalty deters murderers.
To understand their argument, we will draw a picture. In fact, we've gone at this whole analysis rather backward; typically we should draw a picture first and ask precise statistical questions later!
What plot should we draw?
We know that we want to compare murder rates of states with and without the death penalty. We know we should focus on the period around the two natural experiments of 1972 and 1976, and we want to understand the evolution of murder rates over time for those groups of states. It might be useful to look at other time periods, so let's plot them all for good measure.
Question 5.1. Create a table called average_murder_rates with 1 row for each year in murder_rates. It should have 3 columns:
* Year, the year,
* Death penalty states, the average murder rate of the states that had the death penalty in 1971, and
* No death penalty states, the average murder rate of the other states.
average_murder_rates should be sorted in increasing order by year. Its first three rows should look like:
|Year|Death penalty states|No death penalty states|
|-|-|-|
|1960| | |
|1961| | |
|1962| | ||
Hint: Use pivot. To compute average murder rates across states, just average the murder rates; you do not need to account for differences in population.
End of explanation
average_murder_rates.plot('Year')
Explanation: Question 5.2. Describe in one short sentence a high-level takeaway from the line plot below. Are the murder rates in these two groups of states related?
End of explanation
canada = Table.read_table('canada.csv')
murder_rates_with_canada = average_murder_rates.join("Year", canada.select("Year", "Homicide").relabeled("Homicide", "Canada"))
murder_rates_with_canada.plot('Year')
Explanation: Write your answer here, replacing this text.
Let's bring in another source of information: Canada.
End of explanation
# For your convenience, you can run this cell to run all the tests at once!
import os
print("Running all tests...")
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
print("Finished running all tests.")
Explanation: The line plot we generated above is similar to a figure from the paper.
<img src="paper_plot.png"/>
Canada has not executed a criminal since 1962. Since 1967, the only crime that can be punished by execution in Canada is the murder of on-duty law enforcement personnel. The paper states, "The most striking finding is that the homicide rate in Canada has moved in
virtual lockstep with the rate in the United States."
Question 5.4. Complete their argument in 2-3 sentences; what features of these plots indicate that the death penalty is not an important factor in determining the murder rate? (If you're stuck, read the paper.)
Write your answer here, replacing this text.
Question 5.5. What assumption(s) did we make in Parts 1 through 4 of the project that led us to believe that the death penalty deterred murder, when in fact the line plots tell a different story?
Write your answer here, replacing this text.
You're done! Congratulations.
End of explanation |
15,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Expressions
Rational expressions, or expressions for short, denote (rational) languages in a compact way. Since Vcsn supports weighted expressions, they can actually denote rational series.
This page documents the syntax and transformations (called identities) that are applied to every expression. The list of available algorithms using expression is in the Algorithms page.
Syntax
The syntax for rational expressions is as follows (with increasing precedence)
Step1: The following helping routine takes a list of expressions as text (*es), converts them into genuine expression objects (ctx.expression(e, id)) for each id, formats them into LaTeX, and puts them in a DataFrame for display.
Step2: A few more examples, with weights in $\mathbb{Q}$
Step3: Try it!
The following piece of code defines an interactive function for you to try your own expression. Enter an expression in the text area, then click on the "Run" button. | Python Code:
import vcsn
import pandas as pd
pd.options.display.max_colwidth = 0
Explanation: Expressions
Rational expressions, or expressions for short, denote (rational) languages in a compact way. Since Vcsn supports weighted expressions, they can actually denote rational series.
This page documents the syntax and transformations (called identities) that are applied to every expression. The list of available algorithms using expression is in the Algorithms page.
Syntax
The syntax for rational expressions is as follows (with increasing precedence):
- \z, the empty expression.
- \e, the empty word.
- a, the letter a.
- 'l', the label l (useful, for instance, when labels are words, or to denote a letter which is an operator: '+' denotes the + letter).
- [abcd], letter class, equivalent to (a+b+c+d).
- [a-d], one letter of the current alphabet between a and d. If the alphabet is ${a, d, e}$, [a-d] denotes [ad], not [abcd].
- [^a-dz], one letter of the current alphabet that is not part of [a-dz].
- [^], any letter of the current alphabet ("any letter other than none").
- (e), e.
- e+f, the addition (disjunction, union) of e and f (note the use of +, | is not accepted).
- e&f, the conjunction (intersection) of e and f.
- e:f, the shuffle product (interleaving) of e and f.
- e&:f, the infiltration of e and f.
- ef and e.f, the multiplication (concatenation) of e and f.
- <k>e, the left exterior product (left-scalar product) of e by k.
- e<k>, the right exterior product (right-scalar product) of e by k.
- e* and e{*}, any number of repetitions of e (the Kleene closure of e).
- e{n}, the power (repeated multiplication) of e n times: ee ... e.
- e{n,m}, any repetition of e between n and m, i.e., the sum of the powers of e between n and m: e{n}+e{n+1}+ ... +e{m}.
- e{n,}, the sum of powers of e at least n times: e{n}e*.
- e{,m}, at most m repetitions of e: e{0,m}.
- e{+}, at least one e: e{1,}.
- e?, e{?}, e optional: e{0,1}.
- e{c}, the complement of e.
where e and f denote expressions, a a label, k a weight, and n and m natural numbers.
Please note that contrary to "regexps" (as in grep, perl, etc.):
- spaces are ignored
- + denotes the choice, not |
- . denotes the concatenation, use [^] to mean "any letter"
Identities
Some rewriting rules are applied on the expressions to "simplify" them. The strength of this simplification is graduated.
none: no identities at all. Some algorithms, such as derived_term, might not terminate.
trivial: the trivial identities only are applied.
associative: the associative identities are added.
linear: the linear identities are added. This is the default.
distributive: the distributive identities are added.
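The identity set is selected when the expression is built, as the second argument of expression (a minimal sketch, assuming weights in $\mathbb{Q}$ as in the examples below):
import vcsn
ctx = vcsn.Q
ctx.expression('a+a+a', 'trivial').format('text')   # trivial identities
ctx.expression('a+a+a', 'linear').format('text')    # linear identities (the default)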
Trivial Identities
$$
\newcommand{\eword}{\varepsilon}
\newcommand{\lmul}[2]{\bra{#1}{#2}}
\newcommand{\rmul}[2]{#1\bra{#2}}
\newcommand{\lmulq}[2]{\bra{#1}^?{#2}}
\newcommand{\rmulq}[2]{#1\bra{#2}^?}
\newcommand{\bra}[1]{\langle#1\rangle}
\newcommand{\K}{\mathbb{K}}
\newcommand{\zed}{\mathsf{0}}
\newcommand{\und}{\mathsf{1}}
\newcommand{\zeK}{0_{\K}}
\newcommand{\unK}{1_{\K}}
\newcommand{\Ed}{\mathsf{E}}
\newcommand{\Fd}{\mathsf{F}}
\newcommand{\Gd}{\mathsf{G}}
\begin{gather}
% \tag{add}
\Ed+\zed \Rightarrow \Ed
\quad
\zed+\Ed \Rightarrow \Ed
\\[.7ex] %\tag{kmul}
\begin{aligned}[t]
\lmul{\zeK}{\Ed} & \Rightarrow \zed &
\lmul{\unK}{\Ed} & \Rightarrow \Ed &
\lmul{k}{\zed} & \Rightarrow \zed &
\lmul{k}{\lmul{h}{\Ed}} &\Rightarrow \lmul{kh}{\Ed}
\\
\rmul{\Ed}{\zeK} & \Rightarrow \zed &
\rmul{\Ed}{\unK} & \Rightarrow \Ed &
\rmul{\zed}{k} & \Rightarrow \zed &
\rmul{\rmul{\Ed}{k}}{h} &\Rightarrow \rmul{\Ed}{kh}
\end{aligned}\\
\rmul{(\lmul{k}{\Ed})}{h} \Rightarrow \lmul{k}{(\rmul{\Ed}{h})} \quad
\rmul{\ell}{k} \Rightarrow \lmul{k}{\ell}
\\ %\tag{mul}
\Ed \cdot \zed \Rightarrow \zed \quad
\zed \cdot \Ed \Rightarrow \zed
\\
(\lmulq{k}{\und}) \cdot \Ed \Rightarrow \lmulq{k}{\Ed}
\quad
\Ed \cdot (\lmulq{k}{\und}) \Rightarrow \rmulq{\Ed}{k}
\\ %\tag{star}
\zed^\star \Rightarrow \und
\\
\zed^c \& \Ed \Rightarrow \Ed
\quad
\Ed \& \zed^c \Rightarrow \Ed
\\
(\lmul{k}{\Ed})^{c} \Rightarrow \Ed^{c} \quad (\rmul{\Ed}{k})^{c} \Rightarrow \Ed^{c}
\\
{\Ed^c}^c \Rightarrow \Ed \text{ if the weights are Boolean ($\mathbb{B}$ or $\mathbb{F}_2$)}
\end{gather}
$$
where $\Ed$ stands for any rational expression, $a \in A$ is any letter,
$\ell\in A \cup \{\eword\}$, $k, h\in \K$ are weights, and $\lmulq{k}{\ell}$
denotes either $\lmul{k}{\ell}$, or $\ell$ in which case $k = \unK$ in the
right-hand side. Any subexpression of a form listed to the left of a
'$\Rightarrow$' is rewritten as indicated on the right.
Associative Identities
In addition to the trivial identities, the binary operators (addition, conjunction, multiplication) are made associative. Actually, they become variadic instead of being strictly binary.
$$
\begin{align}
\Ed+(\Fd+\Gd) & \Rightarrow \Ed+\Fd+\Gd\\
\Ed(\Fd\Gd) & \Rightarrow \Ed\Fd\Gd\\
\Ed\&(\Fd\&\Gd) & \Rightarrow \Ed\&\Fd\&\Gd\\
\end{align}
$$
Linear Identities
In addition to the associative identities, the addition is made commutative. Actually, members of sums are now sorted, and weights of equal terms are added. Some identities require the weightset to be a commutative semiring (i.e., the product of scalars is commutative).
$$
\begin{align}
\Fd+\Ed & \Rightarrow \Ed+\Fd && \text{if $\Ed < \Fd$} \\
\lmul{k}{\Ed}+\lmul{h}{\Ed} & \Rightarrow \lmul{k+h}{\Ed}\\
\rmul{\Ed}{k} & \Rightarrow \lmul{k}{\Ed} && \text{if commutative} \\
\lmul{k}{\Ed}\lmul{h}{\Fd} & \Rightarrow \lmul{kh}{(\Ed\Fd)} && \text{if commutative} \\
\end{align}
$$
Distributive Identities
In addition to the linear identities, the multiplication and multiplication by a scalar are distributed on the sum.
$$
\begin{gather}
\lmul{k}{(\Ed+\Fd)} \Rightarrow \lmul{k}{\Ed} + \lmul{k}{\Fd} \\
\Ed(\Fd+\Gd) \Rightarrow \Ed\Fd + \Ed\Gd \qquad
(\Ed+\Fd)\Gd \Rightarrow \Ed\Gd + \Fd\Gd \\
\end{gather}
$$
Examples
End of explanation
ids = ['trivial', 'associative', 'linear', 'distributive']
ctx = vcsn.context('lal_char(a-z), b')
def example(*es):
res = []
for e in es:
res.append([e] + ['$' + ctx.expression(e, id).format('latex') + '$' for id in ids])
return pd.DataFrame(res, columns=['Input'] + list(map(str.title, ids)))
example('a', 'a+b+c', 'a+(b+c)', 'a+b+c+d', 'b+a', '[ab][ab]')
Explanation: The following helping routine takes a list of expressions as text (*es), converts them into genuine expression objects (ctx.expression(e, id)) for each id, formats them into LaTeX, and puts them in a DataFrame for display.
End of explanation
ctx = vcsn.Q
example('a', 'a+a+a', 'a+a+b', 'a+b+a', '<2>(a+b)', '([ab]+[ab]){2}', '<2>ab<3>cd<5>')
Explanation: A few more examples, with weights in $\mathbb{Q}$:
End of explanation
from ipywidgets import interact_manual
from IPython.display import display
es = []
@interact_manual
def interactive_example(expression = "[ab]{3,}"):
es.append(expression)
display(example(*es))
Explanation: Try it!
The following piece of code defines an interactive function for you to try your own expression. Enter an expression in the text area, then click on the "Run" button.
End of explanation |
15,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distributed training with TensorFlow
Learning Objectives
1. Create MirroredStrategy
2. Integrate tf.distribute.Strategy with tf.keras
3. Create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset
Introduction
tf.distribute.Strategy is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind
Step1: This notebook uses TF2.x. Please check your tensorflow version using the cell below.
Step2: Types of strategies
tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are
Step3: This will create a MirroredStrategy instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this
Step4: If you wish to override the cross device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce which is the default.
Step5: TPUStrategy
tf.distribute.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud and Cloud TPU.
In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in TPUStrategy.
Here is how you would instantiate TPUStrategy
Step6: MultiWorkerMirroredStrategy has two implementations for cross-device communications. CommunicationImplementation.RING is RPC-based and supports both CPU and GPU. CommunicationImplementation.NCCL uses Nvidia's NCCL and provides state-of-the-art performance on GPU, but it doesn't support CPU. CommunicationImplementation.AUTO defers the choice to TensorFlow.
One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup. The TF_CONFIG environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more about setting up TF_CONFIG.
ParameterServerStrategy
Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. Please see the parameter server training tutorial for details.
TensorFlow 2 parameter server training uses a central-coordinator based architecture via the tf.distribute.experimental.coordinator.ClusterCoordinator class.
In this implementation the worker and parameter server tasks run tf.distribute.Servers that listen for tasks from the coordinator. The coordinator creates resources, dispatches training tasks, writes checkpoints, and deals with task failures.
In the program running on the coordinator, you will use a ParameterServerStrategy object to define a training step and use a ClusterCoordinator to dispatch training steps to remote workers. Here is the simplest way to create them
Step7: This will create a CentralStorageStrategy instance which will use all visible GPUs and CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note
Step8: This strategy serves two main purposes
Step9: Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. A sample code snippet illustrating this
Step10: OneDeviceStrategy
tf.distribute.OneDeviceStrategy is a strategy to place all variables and computation on a single specified device.
strategy = tf.distribute.OneDeviceStrategy(device="/gpu
Step11: This example uses MirroredStrategy so you can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
Step12: Here a tf.data.Dataset provides the training and eval input. You can also use numpy arrays
Step13: In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using MirroredStrategy with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
Step14: What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras APIs | Supported | Supported | Experimental support | Experimental support | Experimental support |
Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras
Step15: Next, create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
Step16: Then, define one step of the training. Use tf.GradientTape to compute gradients and optimizer to apply those gradients to update our model's variables. To distribute this training step, put it in a function train_step and pass it to tf.distribute.Strategy.run along with the dataset inputs you got from the dist_dataset created before
Step17: A few other things to note in the code above
Step18: In the example above, you iterated over the dist_dataset to provide input to your training. We also provide the tf.distribute.Strategy.make_experimental_numpy_dataset to support numpy inputs. You can use this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset.
The above iteration would now be modified to first create an iterator and then explicitly call next on it to get the input data. | Python Code:
# Import TensorFlow
import tensorflow as tf
Explanation: Distributed training with TensorFlow
Learning Objectives
1. Create MirroredStrategy
2. Integrate tf.distribute.Strategy with tf.keras
3. Create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset
Introduction
tf.distribute.Strategy is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind:
Easy to use and support multiple user segments, including researchers, ML engineers, etc.
Provide good performance out of the box.
Easy switching between strategies.
tf.distribute.Strategy can be used with a high-level API like Keras, and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).
In TensorFlow 2.x, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy intends to support both these modes of execution, but works best with tf.function. Eager mode is only recommended for debugging purpose and not supported for TPUStrategy. Although training is the focus of this guide, this API can also be used for distributing evaluation and prediction on different platforms.
You can use tf.distribute.Strategy with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we explain various types of strategies and how you can use them in different situations. To learn how to debug performance issues, see the Optimize TensorFlow GPU Performance guide.
Note: For a deeper understanding of the concepts, please watch this deep-dive presentation. This is especially recommended if you plan to write your own training loop.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# Show the currently installed version of TensorFlow
print(tf.__version__)
Explanation: This notebook uses TF2.x. Please check your tensorflow version using the cell below.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
Explanation: Types of strategies
tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.
Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, there are six strategies available. The next section explains which of these are supported in which scenarios in TF. Here is a quick overview:
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
| Keras API | Supported | Supported | Supported | Experimental support | Supported planned post 2.4 |
| Custom training loop | Supported | Supported | Supported | Experimental support | Experimental support |
| Estimator API | Limited Support | Not supported | Limited Support | Limited Support | Limited Support |
Note: Experimental support means the APIs are not covered by any compatibilities guarantees.
Note: Estimator support is limited. Basic training and evaluation are experimental, and advanced features—such as scaffold—are not implemented. We recommend using Keras or custom training loops if a use case is not covered.
MirroredStrategy
tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called MirroredVariable. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. You can choose from a few other options, or write your own.
Here is the simplest way of creating MirroredStrategy:
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
Explanation: This will create a MirroredStrategy instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
End of explanation
# TODO 1 - Here is your code.
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
Explanation: If you wish to override the cross device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce which is the default.
End of explanation
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
Explanation: TPUStrategy
tf.distribute.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud and Cloud TPU.
In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in TPUStrategy.
Here is how you would instantiate TPUStrategy:
Note: To run this code in Colab, you should select TPU as the Colab runtime. See TensorFlow TPU Guide.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=tpu_address)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)
The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.
If you want to use this for Cloud TPUs:
- You must specify the name of your TPU resource in the tpu argument.
- You must initialize the tpu system explicitly at the start of the program. This is required before TPUs can be used for computation. Initializing the tpu system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
MultiWorkerMirroredStrategy
tf.distribute.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to tf.distribute.MirroredStrategy, it creates copies of all variables in the model on each device across all workers.
Here is the simplest way of creating MultiWorkerMirroredStrategy:
End of explanation
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
Explanation: MultiWorkerMirroredStrategy has two implementations for cross-device communications. CommunicationImplementation.RING is RPC-based and supports both CPU and GPU. CommunicationImplementation.NCCL uses Nvidia's NCCL and provides state-of-the-art performance on GPU, but it doesn't support CPU. CommunicationImplementation.AUTO defers the choice to TensorFlow.
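For example, a particular implementation can be requested when building the strategy (a sketch; the CommunicationOptions API is available in recent TensorFlow 2 releases and a configured multi-worker cluster is needed to actually run it):
communication_options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=communication_options)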
One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup. The TF_CONFIG environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. Learn more about setting up TF_CONFIG.
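As an illustration only (the host names and ports below are placeholders; adapt them to your own cluster), TF_CONFIG for the first of two workers could be set like this:
import json, os
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:23456"]},
    "task": {"type": "worker", "index": 0}})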
ParameterServerStrategy
Parameter server training is a common data-parallel method to scale up model training on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and they are read and updated by workers in each step. Please see the parameter server training tutorial for details.
TensorFlow 2 parameter server training uses a central-coordinator based architecture via the tf.distribute.experimental.coordinator.ClusterCoordinator class.
In this implementation the worker and parameter server tasks run tf.distribute.Servers that listen for tasks from the coordinator. The coordinator creates resources, dispatches training tasks, writes checkpoints, and deals with task failures.
In the program running on the coordinator, you will use a ParameterServerStrategy object to define a training step and use a ClusterCoordinator to dispatch training steps to remote workers. Here is the simplest way to create them:
Python
strategy = tf.distribute.experimental.ParameterServerStrategy(
tf.distribute.cluster_resolver.TFConfigClusterResolver(),
variable_partitioner=variable_partitioner)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
strategy)
Note you will need to configure TF_CONFIG environment variable if you use TFConfigClusterResolver. It is similar to TF_CONFIG in MultiWorkerMirroredStrategy but has additional caveats.
In TF 1, ParameterServerStrategy is available only with estimator via tf.compat.v1.distribute.experimental.ParameterServerStrategy symbol.
Note: This strategy is experimental as it is currently under active development.
CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
Create an instance of CentralStorageStrategy by:
End of explanation
default_strategy = tf.distribute.get_strategy()
Explanation: This will create a CentralStorageStrategy instance which will use all visible GPUs and CPU. Updates to variables on replicas will be aggregated before being applied to variables.
Note: This strategy is experimental as it is currently a work in progress.
Other strategies
In addition to the above strategies, there are two other strategies which might be useful for prototyping and debugging when using tf.distribute APIs.
Default Strategy
Default strategy is a distribution strategy which is present when no explicit distribution strategy is in scope. It implements the tf.distribute.Strategy interface but is a pass-through and provides no actual distribution. For instance, strategy.run(fn) will simply call fn. Code written using this strategy should behave exactly as code written without any strategy. You can think of it as a "no-op" strategy.
Default strategy is a singleton - and one cannot create more instances of it. It can be obtained using tf.distribute.get_strategy() outside any explicit strategy's scope (the same API that can be used to get the current strategy inside an explicit strategy's scope).
End of explanation
# In optimizer or other library code
# Get currently active strategy
strategy = tf.distribute.get_strategy()
strategy.reduce("SUM", 1., axis=None) # reduce some values
Explanation: This strategy serves two main purposes:
It allows writing distribution-aware library code unconditionally. For example, code in tf.optimizers can use tf.distribute.get_strategy() and use that strategy for reducing gradients - it will always return a strategy object on which we can call the reduce API.
End of explanation
if tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
else: # use default strategy
strategy = tf.distribute.get_strategy()
with strategy.scope():
# do something interesting
print(tf.Variable(1.))
Explanation: Similar to library code, it can be used to write end users' programs to work with and without distribution strategy, without requiring conditional logic. A sample code snippet illustrating this:
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
# TODO 2 - Here is your code.
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
Explanation: OneDeviceStrategy
tf.distribute.OneDeviceStrategy is a strategy to place all variables and computation on a single specified device.
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
This strategy is distinct from the default strategy in a number of ways. In default strategy, the variable placement logic remains unchanged when compared to running TensorFlow without any distribution strategy. But when using OneDeviceStrategy, all variables created in its scope are explicitly placed on the specified device. Moreover, any functions called via OneDeviceStrategy.run will also be placed on the specified device.
Input distributed through this strategy will be prefetched to the specified device. In default strategy, there is no input distribution.
Similar to the default strategy, this strategy could also be used to test your code before switching to other strategies which actually distribute to multiple devices/machines. This will exercise the distribution strategy machinery somewhat more than default strategy, but not to the full extent as using MirroredStrategy or TPUStrategy etc. If you want code that behaves as if no strategy, then use default strategy.
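A minimal usage sketch (the device string is just an example; use "/cpu:0" on a machine without GPUs):
one_device_strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
with one_device_strategy.scope():
    v = tf.Variable(1.0)   # created on the chosen device
print(v.device)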
So far you've seen the different strategies available and how you can instantiate them. The next few sections show the different ways in which you can use them to distribute your training.
Using tf.distribute.Strategy with tf.keras.Model.fit
tf.distribute.Strategy is integrated into tf.keras which is TensorFlow's implementation of the
Keras API specification. tf.keras is a high-level API to build and train models. By integrating into tf.keras backend, we've made it seamless for you to distribute your training written in the Keras training framework using model.fit.
Here's what you need to change in your code:
Create an instance of the appropriate tf.distribute.Strategy.
Move the creation of Keras model, optimizer and metrics inside strategy.scope.
We support all types of Keras models - sequential, functional and subclassed.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
End of explanation
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
Explanation: This example uses MirroredStrategy so you can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
End of explanation
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
Explanation: Here a tf.data.Dataset provides the training and eval input. You can also use numpy arrays:
End of explanation
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
Explanation: In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using MirroredStrategy with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
End of explanation
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
Explanation: What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | ParameterServerStrategy | CentralStorageStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras APIs | Supported | Supported | Experimental support | Experimental support | Experimental support |
Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
Tutorial to train MNIST with MirroredStrategy.
Tutorial to train MNIST using MultiWorkerMirroredStrategy.
Guide on training MNIST using TPUStrategy.
Tutorial for parameter server training in TensorFlow 2 with ParameterServerStrategy.
TensorFlow Model Garden repository containing collections of state-of-the-art models implemented using various strategies.
Using tf.distribute.Strategy with custom training loops
As you've seen, using tf.distribute.Strategy with Keras model.fit requires changing only a couple lines of your code. With a little more effort, you can also use tf.distribute.Strategy with custom training loops.
If you need more flexibility and control over your training loops than is possible with Estimator or Keras, you can write custom training loops. For instance, when using a GAN, you may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training.
The tf.distribute.Strategy classes provide a core set of methods through to support custom training loops. Using these may require minor restructuring of the code initially, but once that is done, you should be able to switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
First, create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
End of explanation
# TODO 3 - Here is your code.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
Explanation: Next, create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
End of explanation
loss_object = tf.keras.losses.BinaryCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)
def train_step(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
predictions = model(features, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
@tf.function
def distributed_train_step(dist_inputs):
per_replica_losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
Explanation: Then, define one step of the training. Use tf.GradientTape to compute gradients and optimizer to apply those gradients to update our model's variables. To distribute this training step, put it in a function train_step and pass it to tf.distribute.Strategy.run along with the dataset inputs you got from the dist_dataset created before:
End of explanation
for dist_inputs in dist_dataset:
print(distributed_train_step(dist_inputs))
Explanation: A few other things to note in the code above:
It used tf.nn.compute_average_loss to compute the loss. tf.nn.compute_average_loss sums the per-example loss and divides the sum by the global_batch_size. This is important because later after the gradients are calculated on each replica, they are aggregated across the replicas by summing them.
It used the tf.distribute.Strategy.reduce API to aggregate the results returned by tf.distribute.Strategy.run. tf.distribute.Strategy.run returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can reduce them to get an aggregated value. You can also do tf.distribute.Strategy.experimental_local_results to get the list of values contained in the result, one per local replica.
When apply_gradients is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, it performs a sum-over-all-replicas of the gradients.
Finally, once you have defined the training step, we can iterate over dist_dataset and run the training in a loop:
End of explanation
iterator = iter(dist_dataset)
for _ in range(10):
print(distributed_train_step(next(iterator)))
Explanation: In the example above, you iterated over the dist_dataset to provide input to your training. We also provide the tf.distribute.Strategy.make_experimental_numpy_dataset to support numpy inputs. You can use this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way of iterating over your data is to explicitly use iterators. You may want to do this when you want to run for a given number of steps as opposed to iterating over the entire dataset.
The above iteration would now be modified to first create an iterator and then explicitly call next on it to get the input data.
End of explanation |
15,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 1
Random images
Step1: Problem 2
Mean images
Step2: Problem 3
Randomize data
Step3: Problem 4
Number per class
Step4: OK, so there are about 50000 in each class in the training set
Step5: And about 1870 in each class in the test set
Problem 5
How much overlap is there between training, validation and test samples?
Step6: What about near duplicates between datasets? (images that are almost identical)
Step7: Problem 6
Train a logistic regressor on the image data using 50, 100, 1000 and 5000 training samples. | Python Code:
label_map = list('abcdefghij')
fig,axes = pl.subplots(3,3,figsize=(5,5),sharex=True,sharey=True)
with h5py.File(cache_file, 'r') as f:
for i in range(9):
ax = axes.flat[i]
idx = np.random.randint(f['test']['images'].shape[0])
ax.imshow(f['test']['images'][idx],
cmap='Greys', interpolation='nearest')
ax.set_title(label_map[int(f['test']['labels'][idx])])
Explanation: Problem 1
Random images
End of explanation
# Solution:
with h5py.File(cache_file, 'r') as f:
# get a unique list of the classes
classes = np.unique(f['test']['labels'])
classes.sort()
nclasses = len(classes)
images = f['test']['images'][:]
for i,cls in enumerate(classes):
fig,ax = pl.subplots(1,1,figsize=(2,2))
mean_img = images[f['test']['labels'][:] == cls].mean(axis=0) # select all images for a given class, take mean
ax.imshow(mean_img, cmap='Greys', interpolation='nearest') # greyscale colormap, no interpolation
ax.set_title(label_map[i])
Explanation: Problem 2
Mean images
End of explanation
def randomize(data, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_data = data[permutation]
shuffled_labels = labels[permutation]
return shuffled_data, shuffled_labels
with h5py.File(cache_file, 'r') as f:
train_dataset, train_labels = randomize(f['train']['images'][:], f['train']['labels'][:])
test_dataset, test_labels = randomize(f['test']['images'][:], f['test']['labels'][:])
Explanation: Problem 3
Randomize data
End of explanation
np.histogram(train_labels, bins=np.arange(0,nclasses+1,1))
Explanation: Problem 4
Number per class
End of explanation
np.histogram(test_labels, bins=np.arange(0,nclasses+1,1))
Explanation: OK, so there are about 50000 in each class in the training set
End of explanation
n_overlaps = []
# the data has been randomize, so let's just check the first 100 images and assume that
# is a representative sample
for test_img in test_dataset[:100]:
diff = (train_dataset - test_img[None]).sum(axis=-1).sum(axis=-1)
n_overlap = (diff == 0).sum()
n_overlaps.append(n_overlap)
print("Typical overlap:", np.median(n_overlaps))
pl.hist(n_overlaps)
Explanation: And about 1870 in each class in the test set
Problem 5
How much overlap is there between training, validation and test samples?
End of explanation
n_overlaps = []
threshold = 1E-2 # define an arbitrary threshold -- play with this
# the data has been randomize, so let's just check the first 100 images and assume that
# is a representative sample
for test_img in test_dataset[:100]:
diff = (train_dataset - test_img[None]).sum(axis=-1).sum(axis=-1)
n_overlap = (np.abs(diff) < threshold).sum()
n_overlaps.append(n_overlap)
Explanation: What about near duplicates between datasets? (images that are almost identical)
End of explanation
model = LogisticRegression()
image_size = train_dataset.shape[-1]
subset = 50 # replace with 100, 1000, 5000
idx = np.random.choice(np.arange(train_dataset.shape[0]), size=subset)
train_subset_data = train_dataset[idx].reshape(subset, image_size*image_size)
train_subset_labels = train_labels[idx]
model.fit(train_subset_data, train_subset_labels)
predict_labels = model.predict(test_dataset.reshape(test_dataset.shape[0], image_size*image_size))
(predict_labels != test_labels).sum() / float(test_labels.size)
Explanation: Problem 6
Train a logistic regressor on the image data using 50, 100, 1000 and 5000 training samples.
End of explanation |
15,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name.
Usually, the model has the following constraints.
$$ S \cdot v = 0 $$
$$ lb \le v \le ub $$
However, this will allow for thermodynamically infeasible loops (referred to as type 3 loops) to occur, where flux flows around a cycle without any net change of metabolites. For most cases, this is not a major issue, as solutions with these loops can usually be converted to equivalent solutions without them. However, if a flux state is desired which does not exhibit any of these loops, loopless FBA can be used. The formulation used here is modified from Schellenberger et al.
We can make the model irreversible, so that all reactions will satisfy
$$ 0 \le lb \le v \le ub \le \max(ub) $$
We will add in boolean indicators as well, such that
$$ \max(ub) \cdot i \ge v $$
$$ i \in \{0, 1\} $$
We also want to ensure that an entry in the row space of S also exists with negative values wherever v is nonzero. In this expression, $1-i$ acts as a not to indicate inactivity of a reaction.
$$ S^\mathsf T x - (1 - i) (\max(ub) + 1) \le -1 $$
We will construct an LP integrating both constraints.
$$ \left(
\begin{matrix}
S & 0 & 0\\
-I & \max(ub)I & 0 \\
0 & (\max(ub) + 1)I & S^\mathsf T
\end{matrix}
\right)
\cdot
\left(
\begin{matrix}
v \\
i \\
x
\end{matrix}
\right)
\begin{matrix}
&=& 0 \\
&\ge& 0 \\
&\le& \max(ub)
\end{matrix}$$
Note that these extra constraints are not applied to boundary reactions which bring metabolites in and out of the system.
Step1: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below
Step2: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
Step3: If there is no forced flux through a loopless reaction, parsimonious FBA will also have no flux through the loop.
Step4: However, if flux is forced through v3, then there is no longer a feasible loopless solution, but the parsimonious solution will still exist.
Step5: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver. If one is installed, cobrapy can detect it automatically using the get_solver_name function | Python Code:
%matplotlib inline
import plot_helper
import cobra.test
from cobra import Reaction, Metabolite, Model
from cobra.flux_analysis.loopless import construct_loopless_model
from cobra.flux_analysis import optimize_minimal_flux
from cobra.solvers import get_solver_name
Explanation: Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name.
Usually, the model has the following constraints.
$$ S \cdot v = 0 $$
$$ lb \le v \le ub $$
However, this will allow for thermodynamically infeasible loops (referred to as type 3 loops) to occur, where flux flows around a cycle without any net change of metabolites. For most cases, this is not a major issue, as solutions with these loops can usually be converted to equivalent solutions without them. However, if a flux state is desired which does not exhibit any of these loops, loopless FBA can be used. The formulation used here is modified from Schellenberger et al.
We can make the model irreversible, so that all reactions will satisfy
$$ 0 \le lb \le v \le ub \le \max(ub) $$
We will add in boolean indicators as well, such that
$$ \max(ub) \cdot i \ge v $$
$$ i \in \{0, 1\} $$
We also want to ensure that an entry in the row space of S also exists with negative values wherever v is nonzero. In this expression, $1-i$ acts as a not to indicate inactivity of a reaction.
$$ S^\mathsf T x - (1 - i) (\max(ub) + 1) \le -1 $$
We will construct an LP integrating both constraints.
$$ \left(
\begin{matrix}
S & 0 & 0\\
-I & \max(ub)I & 0 \\
0 & (\max(ub) + 1)I & S^\mathsf T
\end{matrix}
\right)
\cdot
\left(
\begin{matrix}
v \\
i \\
x
\end{matrix}
\right)
\begin{matrix}
&=& 0 \\
&\ge& 0 \\
&\le& \max(ub)
\end{matrix}$$
Note that these extra constraints are not applied to boundary reactions which bring metabolites in and out of the system.
End of explanation
plot_helper.plot_loop()
test_model = Model()
test_model.add_metabolites([Metabolite(i) for i in "ABC"])
test_model.add_reactions([Reaction(i) for i in
["EX_A", "DM_C", "v1", "v2", "v3"]])
test_model.reactions.EX_A.add_metabolites({"A": 1})
test_model.reactions.DM_C.add_metabolites({"C": -1})
test_model.reactions.DM_C.objective_coefficient = 1
test_model.reactions.v1.add_metabolites({"A": -1, "B": 1})
test_model.reactions.v2.add_metabolites({"B": -1, "C": 1})
test_model.reactions.v3.add_metabolites({"C": -1, "A": 1})
Explanation: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below:
End of explanation
solution = construct_loopless_model(test_model).optimize()
print("loopless solution: status = " + solution.status)
print("loopless solution: v3 = %.1f" % solution.x_dict["v3"])
Explanation: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
End of explanation
solution = optimize_minimal_flux(test_model)
print("parsimonious solution: status = " + solution.status)
print("parsimonious solution: v3 = %.1f" % solution.x_dict["v3"])
Explanation: If there is no forced flux through a loopless reaction, parsimonious FBA will also have no flux through the loop.
End of explanation
test_model.reactions.v3.lower_bound = 1
solution = construct_loopless_model(test_model).optimize()
print("loopless solution: status = " + solution.status)
solution = optimize_minimal_flux(test_model)
print("parsimonious solution: status = " + solution.status)
print("parsimonious solution: v3 = %.1f" % solution.x_dict["v3"])
Explanation: However, if flux is forced through v3, then there is no longer a feasible loopless solution, but the parsimonious solution will still exist.
End of explanation
mip_solver = get_solver_name(mip=True)
print(mip_solver)
salmonella = cobra.test.create_test_model("salmonella")
construct_loopless_model(salmonella).optimize(solver=mip_solver)
ecoli = cobra.test.create_test_model("ecoli")
construct_loopless_model(ecoli).optimize(solver=mip_solver)
Explanation: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver. If one is installed, cobrapy can detect it automatically using the get_solver_name function
End of explanation |
15,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling Workshops in Nanophysics
Behavior of atoms depending on their type and position
Paweł T. Jochym
Zakład Komputerowych Badań Materiałów
Instytut Fizyki Jądrowej PAN, Kraków
The analysis carried out in the Amplitudes exercise shows that the atoms in the Pt-12Fe-42Pt nanoparticle behave very differently depending on whether they are iron or platinum atoms. Let us examine what this behavior looks like by drawing the structures and selecting atoms of a single element. The selection is handled by the structure display function.
The notebook below demonstrates simple and more complex operations that can be carried out with a suitable visualization. The code below can be used as an example for further modification to present the features of the studied system that you need.
Step1: Reading the computed trajectories from disk while centering the nanoparticles in the unit cell.
Step2: Visualization functions: a simple structure viewer and a more elaborate trajectory visualizer with selection of the displayed element and of the displayed simulation step.
Step3: Displaying the structures using the tools defined above. | Python Code:
# Import the required modules
%matplotlib inline
import numpy as np
from ase import Atoms, units
import ase.io
from ase.io.trajectory import Trajectory
from ipywidgets import HBox, VBox, Checkbox, Dropdown, IntSlider, FloatSlider
from io import BytesIO
import nglview
import glob
def recenter(a):
'''
Normalize the nanoparticle position to a fixed center-of-mass position.
Moves the center of mass to the middle of the cell.
Note: a substantial vacuum around the structure is required for this to work.
'''
# Copy of structure a
c=Atoms(numbers=a.get_atomic_numbers(),
positions=a.get_positions(),
cell=a.get_cell(),
pbc=a.get_pbc())
c.translate((c.get_cell()/2).sum(axis=0)-c.get_center_of_mass())
c.set_scaled_positions(c.get_scaled_positions())
c.translate((c.get_cell()/2).sum(axis=0)-c.get_center_of_mass())
c.set_scaled_positions(c.get_scaled_positions())
return c
Explanation: Modeling Workshops in Nanophysics
Behavior of atoms depending on their type and position
Paweł T. Jochym
Zakład Komputerowych Badań Materiałów
Instytut Fizyki Jądrowej PAN, Kraków
The analysis carried out in the Amplitudes exercise shows that the atoms in the Pt-12Fe-42Pt nanoparticle behave very differently depending on whether they are iron or platinum atoms. Let us examine what this behavior looks like by drawing the structures and selecting atoms of a single element. The selection is handled by the structure display function.
The notebook below demonstrates simple and more complex operations that can be carried out with a suitable visualization. The code below can be used as an example for further modification to present the features of the studied system that you need.
End of explanation
mdtraj={}
print('Reading trajectories:', end=' ')
for fn in glob.glob('data/md_T_*.traj'):
# Use part of the file name to identify the temperature
T=int(fn.split('/')[-1][5:-5])
# Read the trajectory from the file
print(T, end=' ')
mdtraj[T]=[recenter(a) for a in Trajectory(fn)]
print()
Explanation: Reading the computed trajectories from disk while centering the nanoparticles in the unit cell.
End of explanation
def show_cryst(struct, uc=True, re_center=False):
'''
Simple display of a crystal structure.
Optionally with the unit cell and with re-centering of the nanoparticle.
'''
view = nglview.show_ase(recenter(struct) if re_center else struct)
view.parameters=dict(clipDist=-100)
if uc : view.add_unitcell()
view.camera='orthographic'
view.add_spacefill(radiusType='covalent', scale=0.5)
view.center()
return view
class TrajDisplay:
'''
Trajectory viewer with selection of atoms by element
and of the trajectory from the whole set passed in
as a dictionary. Optionally the whole trajectory can be
re-centered in the periodic cell.
'''
def __init__(self, trajectories, re_center=False):
if re_center :
print('Re-centering...')
self.trajectories = {k:[recenter(a) for a in trj]
for k, trj in trajectories.items()}
else :
self.trajectories=trajectories
self.temperatures=sorted(self.trajectories.keys())
self.T=self.temperatures[0]
self.trj=self.trajectories[self.T]
self.v=nglview.show_asetraj(self.trj)
self.v._remote_call("setSize", target="Widget", args=["500px", "500px"])
self.v.add_spacefill()
self.v.parameters=dict(clipDist=-100)
self.v.camera='orthographic'
self.v.update_spacefill(radiusType='covalent', scale=0.7)
self.v.center()
self.tsel=Dropdown(options=self.temperatures,
value=self.T,
description='Temperature:')
self.frm=IntSlider(value=0, min=0, max=len(self.trj)-1)
self.asel=Dropdown(options=['All']+list(set(self.trj[0].get_chemical_symbols())),
value='All', description='Display')
self.rad=FloatSlider(value=0.8, min=0.0, max=1.5, step=0.01, description='Size')
self.tsel.observe(self._update_trj, 'value')
self.frm.observe(self._update_frame)
self.asel.observe(self._select_atom)
self.rad.observe(self._update_repr)
self._update_trj(None)
self.hbox = HBox([self.v, VBox([self.tsel,
self.asel,
self.rad,
self.frm])])
def _update_repr(self, chg=None):
self.v.update_spacefill(radiusType='covalent', scale=self.rad.value)
def _update_trj(self, chg=None):
'''Switch the trajectory according to the selected temperature'''
self.T=int(self.tsel.value)
self.trj=self.trajectories[self.T]
self.frm.max = len(self.trj)-1
v=self.v
while v._ngl_component_ids :
v.remove_component(v._ngl_component_ids[0])
v.add_trajectory(nglview.ASETrajectory(self.trj))
v.add_unitcell()
v.center()
self._select_atom()
self._update_frame()
return
def _update_frame(self, chg=None):
self.v.frame=self.frm.value
return
def _select_atom(self, chg=None):
sel=self.asel.value
self.v.remove_spacefill()
if sel == 'All':
self.v.add_spacefill(selection='All')
else :
self.v.add_spacefill(selection=[n for n,e in enumerate(self.trj[0].get_chemical_symbols()) if e == sel])
self._update_repr()
Explanation: Visualization functions: a simple structure viewer and a more elaborate trajectory visualizer with selection of the displayed element and of the displayed simulation step.
End of explanation
show_cryst(mdtraj[1400][700])
dsp=TrajDisplay(mdtraj)
dsp.hbox
Explanation: Displaying the structures using the tools defined above.
End of explanation |
15,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fidelio demo notebook
Step1: Choose an alphabet
Before sending any messages, we must agree on a way to represent characters as numbers.
Fidelio comes with 3 pre-defined character encodings
Step2: Convert a string to a list of integers and back
Characters not in the selected alphabet will be discarded.
Step3: Classic Caesar cipher
Step4: ROT13 cipher
Step5: Fancy Caesar
Step6: Cracking Caesar ciphers
The shift amount is effectively the password for a Caesar cipher.
There aren't many possible passwords, so Caesar ciphers are vulnerable to brute-force attacks.
Step7: Dodgson cipher (aka Vigenère cipher, Bellaso cipher)
This polyalphabetic cipher shifts characters using modular arithmetic, but characters are not all shifted by the same amount. There are many possible passwords, so brute-force attacks are much harder.
Choose a password, and be sure to use characters which are in the selected alphabet. The password is then repeated until it is the same length as the plaintext. Each integer $m_k$ in the plaintext is shifted
$$
m_k \rightarrow (m_k+x_k) \% 26
$$
where $x_k$ is the corresponding integer in the extended password. | Python Code:
from fidelio_functions import *
Explanation: Fidelio demo notebook
End of explanation
print(ALL_CAPS)
for key, val in sorted(char_to_num(ALL_CAPS).items()):
print(key,val)
Explanation: Choose an alphabet
Before sending any messages, we must agree on a way to represent characters as numbers.
Fidelio comes with 3 pre-defined character encodings: ALL_CAPS, CAPS_PLUS, and DEFAULT_100.
Each of these is a tuple for converting int -> char.
char_to_num() creates a dictionary for converting char -> int.
End of explanation
message = "WHERE IS RPT WHERE IS TASK FORCE THIRTY FOUR RR THE WORLD WONDERS?"
print(message)
# Default alphabet
ints = text_to_ints(message)
print(ints,'\n')
test_text = ints_to_text(ints)
print(test_text)
# ALL_CAPS alphabet has capital letters only, no punctuation or spaces
ints = text_to_ints(message,ALL_CAPS)
print(ints,'\n')
test_text = ints_to_text(ints,ALL_CAPS)
print(test_text)
Explanation: Convert a string to a list of integers and back
Characters not in the selected alphabet will be discarded.
End of explanation
ciphertext = caesar(message,-3,ALL_CAPS)
print(ciphertext)
plaintext = caesar(ciphertext,3,ALL_CAPS)
print(plaintext)
Explanation: Classic Caesar cipher: subtract 3 (mod 26)
The Caesar cipher shifts each letter in the alphabet back 3 places.
The alphabet "wraps around," meaning the letters ABC are mapped to XYZ.
To reproduce the Caesar cipher with Fidelio, first use ALL_CAPS to convert text to integers.
Then subtract 3 using base 26 modular arithmetic and convert back to text.
To decrypt, do the same, but with a shift of +3 instead of -3.
End of explanation
ciphertext = caesar(message,13,ALL_CAPS)
print(ciphertext)
plaintext = caesar(ciphertext,-13,ALL_CAPS)
print(plaintext)
# With a 26-character alphabet, +13 and -13 are the same shift
plaintext = caesar(ciphertext,13,ALL_CAPS)
print(plaintext)
# With the default 100-character alphabet, ROT50 is its own inverse
caesar(caesar(message,50),50)
Explanation: ROT13 cipher: add 13 (mod 26)
ROT13 is like the classic Caesar cipher, but it shifts each letter 13 characters forward: $m \rightarrow (m + 13) \% 26$.
Shifting each letter 13 characters backward gives the same effect: $m \rightarrow (m-13)\%26 = (m+13)\%26$.
The ROT13 transformation is its own inverse.
End of explanation
ciphertext = caesar(message,13,CAPS_PLUS)
print(ciphertext)
plaintext = caesar(ciphertext,-13,CAPS_PLUS)
print(plaintext)
Explanation: Fancy Caesar: add $x$ (mod $n$)
Fidelio can create Caesar-type ciphers with any shift value and any of its alphabets.
End of explanation
for x in range(42):
print( caesar(ciphertext,x,CAPS_PLUS) )
Explanation: Cracking Caesar ciphers
The shift amount is effectively the password for a Caesar cipher.
There aren't many possible passwords, so Caesar ciphers are vulnerable to brute-force attacks.
End of explanation
ints = text_to_ints('MEETMEATDAWN',ALL_CAPS)
print(ints,'\n')
extended_password = text_to_ints('FIDELIOFIDEL',ALL_CAPS)
print(extended_password,'\n')
cipher = [ (ints[k] + extended_password[k]) % 26 for k in range(len(ints)) ]
print(cipher,'\n')
ciphertext = ints_to_text(cipher,ALL_CAPS)
print(ciphertext,'\n')
decipher = [ (cipher[k] - extended_password[k]) % 26 for k in range(len(cipher)) ]
print(decipher,'\n')
plaintext = ints_to_text(decipher,ALL_CAPS)
print(plaintext)
# Try the original message and default alphabet
ciphertext = dodgson(message,'FIDELIO')
print(ciphertext)
plaintext = dodgson(ciphertext,'FIDELIO',decrypt=True)
print(plaintext)
# Let's try guessing the password
dodgson(ciphertext,'12345',decrypt=True)
# Caution: a partially-correct password can recover parts of the message
ciphertext = dodgson(message,'Passw0rd123')
print(ciphertext)
plaintext = dodgson(ciphertext,'password123',decrypt=True)
print(plaintext)
Explanation: Dodgson cipher (aka Vigenère cipher, Bellaso cipher)
This polyalphabetic cipher shifts characters using modular arithmetic, but characters are not all shifted by the same amount. There are many possible passwords, so brute-force attacks are much harder.
Choose a password, and be sure to use characters which are in the selected alphabet. The password is then repeated until it is the same length as the plaintext. Each integer $m_k$ in the plaintext is shifted
$$
m_k \rightarrow (m_k+x_k) \% 26
$$
where $x_k$ is the corresponding integer in the extended password.
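A minimal standalone sketch of that repeating-key shift, assuming a plain A-Z alphabet (Fidelio's dodgson() presumably generalizes this to its other alphabets and adds the decrypt flag):
from itertools import cycle
def vigenere_sketch(plaintext, password, decrypt=False):
    sign = -1 if decrypt else 1
    return ''.join(chr((ord(p) - ord('A') + sign * (ord(k) - ord('A'))) % 26 + ord('A'))
                   for p, k in zip(plaintext, cycle(password)))
print(vigenere_sketch('MEETMEATDAWN', 'FIDELIO'))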
End of explanation |
15,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data analysis in Python with pandas
What is pandas?
pandas
Step1: How do I read a tabular data file into pandas?
Tabular data file
Step2: Tip
Step3: Tip
Step4: Why do some pandas commands end with parentheses, and other commands don't?
Step5: Tip
Step6: Tip
Step7: How do I remove columns from a pandas DataFrame?
Step8: Tip
Step9: How do I sort a pandas DataFrame or Series?
Step10: Tip
Step11: How do I filter rows of a pandas DataFrame by column value?
Step12: Tip
Step13: How do I apply multiple filter criteria to a pandas DataFrame?
Step14: Tip
Step15: How do I change the data type of a pandas Series?
Step16: How do I explore a pandas Series?
Step17: Tip
Step18: How do I handle missing values in pandas?
Step19: How do I apply a function to a pandas Series or a DataFrame?
Step20: How do I avoid a SettingWithCopyWarning in pandas?
Step21: How do I change display options in pandas?
Step22: How do I create a pandas DataFrame from another object?
Step23: How do I create dummy variables in pandas?
Step24: How do I find and remove duplicate rows in pandas?
Step25: How do I make my pandas DataFrame smaller and faster?
Step26: How do I select multiple rows and columns from a pandas DataFrame?
Step27: How do I use pandas with scikit-learn to create Kaggle submissions?
Step28: How do I use string methods in pandas?
Step29: How do I use the "axis" parameter in pandas?
Step30: How do I work with dates and times in pandas?
Step31: What do I need to know about the pandas index?
Step32: When should I use a "groupby" in pandas?
Step33: When should I use "inplace" parameter in pandas? | Python Code:
import pandas as pd
Explanation: Data analysis in Python with pandas
What is pandas?
pandas: Open source library in Python for data analysis, data manipulation, and data visualisation.
Pros:
1. Tons of functionality
2. Well supported by community
3. Active development
4. Lots of documentation
5. Plays well with other packages, e.g. NumPy and scikit-learn
End of explanation
orders = pd.read_table('http://bit.ly/chiporders')
orders.head()
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('http://bit.ly/movieusers', delimiter='|', header=None, names=user_cols)
users.head()
Explanation: How do I read a tabular data file into pandas?
Tabular data file: By default tab separated file (tsv)
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
type(ufo)
type(ufo['City'])
city = ufo.City
city.head()
Explanation: Tip: the skiprows and skipfooter parameters are useful for skipping extra rows at the top or bottom of a file.
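For example, a small self-contained sketch using an in-memory file (note that skipfooter requires the Python parsing engine):
from io import StringIO
raw = 'junk line 1\njunk line 2\ncol_a,col_b\n1,2\n3,4\ntotal,6\n'
pd.read_csv(StringIO(raw), skiprows=2, skipfooter=1, engine='python')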
How do I select a pandas Series from a DataFrame?
Two basic data structures in pandas
1. DataFrame: Table with rows and columns
2. Series: Each column of a DataFrame is a pandas Series
End of explanation
ufo['location'] = ufo.City + ', ' + ufo.State
ufo.head()
Explanation: Tip: Create a new Series in a DataFrame
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.describe()
movies.shape
movies.dtypes
type(movies)
movies.describe(include=['object'])
Explanation: Why do some pandas commands end with parentheses, and other commands don't?
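In short: commands that end with parentheses are methods (actions that may take arguments), while commands without parentheses are attributes (stored properties of the object). For example:
movies.shape            # attribute: a stored description, no parentheses
movies.describe()       # method: performs a computation and accepts arguments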
End of explanation
ufo.head()
ufo.columns
ufo.rename(columns={'Colors Reported': 'Colors_Reported', 'Shape Reported': 'Shape_Reported'}, inplace=True)
ufo.columns
ufo_cols = ['city', 'colors reported', 'shape reported', 'state', 'time', 'location']
ufo.columns = ufo_cols
ufo.head()
ufo_cols = ['City', 'Colors Reported', 'Shape Reported', 'State', 'Time']
ufo = pd.read_csv('http://bit.ly/uforeports', names=ufo_cols, header=0)
ufo.head()
Explanation: Tip: Hit "Shift+Tab" inside a method's parentheses to get the list of arguments
How to rename columns in pandas DataFrame?
End of explanation
ufo.columns = ufo.columns.str.replace(' ', '_')
ufo.columns
Explanation: Tip: Use the str.replace method to replace the spaces in column names with underscores
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
ufo.shape
ufo.drop('Colors Reported', axis=1, inplace=True)
ufo.head()
ufo.drop(labels=['City', 'State'], axis=1, inplace=True)
ufo.head()
Explanation: How do I remove columns from a pandas DataFrame?
End of explanation
ufo.drop([0, 1], axis=0, inplace=True)
ufo.shape
Explanation: Tip: To remove rows instead of columns, choose axis=0
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.title.sort_values()
movies.sort_values('title')
Explanation: How do I sort a pandas DataFrame or Series?
End of explanation
movies.sort_values(['content_rating', 'duration'])
Explanation: Tip: Sort by multiple columns
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.shape
movies[movies.duration >= 200]
Explanation: How do I filter rows of a pandas DataFrame by column value?
End of explanation
movies.loc[movies.duration >= 200, 'genre']
Explanation: Tip: Filtered DataFrame is also a DataFrame
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies[movies.duration >= 200]
movies[(movies.duration >= 200) & (movies.genre == 'Drama')]
Explanation: How do I apply multiple filter criteria to a pandas DataFrame?
End of explanation
movies[movies.genre.isin(['Crime', 'Drama', 'Action'])]
Explanation: Tip: Multiple conditions on a single column
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.dtypes
drinks.beer_servings = drinks.beer_servings.astype(float)
drinks.dtypes
drinks = pd.read_csv('http://bit.ly/drinksbycountry', dtype={'beer_servings': float})
drinks.dtypes
orders = pd.read_table('http://bit.ly/chiporders')
orders.head()
orders.dtypes
orders.item_price = orders.item_price.str.replace('$', '').astype(float)
orders.head()
orders.item_name.str.contains('Chicken').astype(int).head()
Explanation: How do I change the data type of a pandas Series?
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.dtypes
movies.genre.describe()
movies.genre.value_counts()
movies.genre.value_counts(normalize=True)
movies.genre.unique()
movies.genre.nunique()
pd.crosstab(movies.genre, movies.content_rating)
movies.duration.describe()
movies.duration.mean()
Explanation: How do I explore a pandas Series?
End of explanation
%matplotlib inline
movies.duration.plot(kind='hist')
movies.genre.value_counts().plot(kind='bar')
Explanation: Tip: Visualisation
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.tail()
ufo.isnull().tail()
ufo.notnull().tail()
ufo.isnull().sum()
ufo[ufo.City.isnull()]
ufo.shape
ufo.dropna(how='any').shape
ufo.dropna(how='all').shape
ufo.dropna(subset=['City', 'Shape Reported'], how='any').shape
ufo['Shape Reported'].value_counts(dropna=False)
ufo['Shape Reported'].fillna(value='VARIOUS', inplace=True)
ufo['Shape Reported'].value_counts()
Explanation: How do I handle missing values in pandas?
End of explanation
train = pd.read_csv('http://bit.ly/kaggletrain')
train.head()
train['Sex_num'] = train.Sex.map({'female': 0, 'male': 1})
train.loc[0:4, ['Sex', 'Sex_num']]
train['Name_length'] = train.Name.apply(len)
train.loc[0:4, ['Name', 'Name_length']]
import numpy as np
train['Fare_ceil'] = train.Fare.apply(np.ceil)
train.loc[0:4, ['Fare', 'Fare_ceil']]
train.Name.str.split(',').head()
def get_element(my_list, position):
return my_list[position]
train['Last_name'] = train.Name.str.split(',').apply(get_element, position=0)
train['Last_name'] = train.Name.str.split(',').apply(lambda x: x[0])
train.loc[0:4, ['Name', 'Last_name']]
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.loc[:, 'beer_servings':'wine_servings'].apply(np.argmax, axis=1)
drinks.loc[:, 'beer_servings':'wine_servings'].applymap(float)
Explanation: How do I apply a function to a pandas Series or a DataFrame?
End of explanation
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
movies.content_rating.isnull().sum()
movies[movies.content_rating.isnull()]
movies.content_rating.value_counts()
movies.loc[movies.content_rating == 'NOT RATED', 'content_rating'] = np.nan
movies.content_rating.isnull().sum()
top_movies = movies.loc[movies.star_rating >= 9, :]
top_movies
top_movies.loc[0, 'duration'] = 150
top_movies
top_movies = movies.loc[movies.star_rating >= 9, :].copy()
top_movies
top_movies.loc[0, 'duration'] = 150
top_movies
Explanation: How do I avoid a SettingWithCopyWarning in pandas?
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks
pd.get_option('display.max_rows')
pd.set_option('display.max_rows', None)
drinks
pd.reset_option('display.max_rows')
pd.get_option('display.max_rows')
train = pd.read_csv('http://bit.ly/kaggletrain')
train
pd.get_option('display.max_colwidth')
pd.set_option('display.max_colwidth', 1000)
pd.get_option('display.precision')
pd.set_option('display.precision', 2)
drinks.head()
drinks['x'] = drinks.wine_servings * 1000
drinks['y'] = drinks.total_litres_of_pure_alcohol * 1000
drinks.head()
pd.set_option('display.float_format', '{:,}'.format)
pd.describe_option('rows')
pd.reset_option('all')
Explanation: How do I change display options in pandas?
End of explanation
df = pd.DataFrame({'id': [100, 101, 102], 'color': ['red', 'blue', 'red']}, columns=['id', 'color'], index=['A', 'B', 'C'])
pd.DataFrame([[100, 'red'], [101, 'blue'], [102, 'red']], columns=['id', 'color'])
import numpy as np
arr = np.random.rand(4, 2)
arr
pd.DataFrame(arr, columns=['one', 'two'])
pd.DataFrame({'student': np.arange(100, 110, 1), 'test': np.random.randint(60, 101, 10)}).set_index('student')
s = pd.Series(['round', 'square'], index=['C', 'B'], name='shape')
s
df
pd.concat([df, s], axis=1)
Explanation: How do I create a pandas DataFrame from another object?
End of explanation
train = pd.read_csv('http://bit.ly/kaggletrain')
train
train['Sex_male'] = train.Sex.map({'female': 0, 'male': 1})
train.head()
pd.get_dummies(train.Sex, prefix='Sex').iloc[:, 1:]
train.Embarked.value_counts()
embarked_dummies = pd.get_dummies(train.Embarked, prefix='Embarked').iloc[:, 1:]
train = pd.concat([train, embarked_dummies], axis=1)
train.head()
pd.get_dummies(train, columns=['Sex', 'Embarked'], drop_first=True)
Explanation: How do I create dummy variables in pandas?
End of explanation
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('http://bit.ly/movieusers', delimiter='|', header=None, names=user_cols, index_col='user_id')
users.head()
users.shape
users.zip_code.duplicated().sum()
users.duplicated().sum()
users.loc[users.duplicated(keep=False), :]
users.drop_duplicates(keep='first').shape
users.duplicated(subset=['age', 'zip_code']).sum()
Explanation: How do I find and remove duplicate rows in pandas?
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.info()
drinks.info(memory_usage='deep')
drinks.memory_usage(deep=True)
sorted(drinks.continent.unique())
drinks['continent'] = drinks.continent.astype('category')
drinks.dtypes
drinks.continent.cat.codes.head()
drinks.memory_usage(deep=True)
drinks['country'] = drinks.country.astype('category')
drinks.memory_usage(deep=True)
df = pd.DataFrame({'ID': [100, 101, 102, 103], 'quality': ['good', 'very good', 'good', 'excellent']})
df
Explanation: How do I make my pandas DataFrame smaller and faster?
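The small df with a quality column above is presumably meant to illustrate ordered categories; a sketch of how that conversion might look, using the category names from that df:
from pandas.api.types import CategoricalDtype
quality_dtype = CategoricalDtype(categories=['good', 'very good', 'excellent'], ordered=True)
df['quality'] = df.quality.astype(quality_dtype)
df.sort_values('quality')             # sorts in logical rather than alphabetical order
df.loc[df.quality > 'good', :]        # comparison operators now respect the ordering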
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head(3)
ufo.loc[0, :]
ufo.loc[0:2, :]
ufo.loc[0, 'City':'State']
ufo.loc[ufo.City=='Oakland', :]
ufo.iloc[:, [0, 3]]
Explanation: How do I select multiple rows and columns from a pandas DataFrame?
End of explanation
train = pd.read_csv('http://bit.ly/kaggletrain')
train.head()
feature_cols = ['Pclass', 'Parch']
X = train.loc[:, feature_cols]
y = train.Survived
from sklearn import linear_model
logreg = linear_model.LogisticRegression()
logreg.fit(X, y)
test = pd.read_csv('http://bit.ly/kaggletest')
X_new = test.loc[:, feature_cols]
y_predict = logreg.predict(X_new)
y_predict.shape
pd.DataFrame({'PassengerId': test.PassengerId, 'Survived': y_predict}).set_index('PassengerId').to_csv('sub.csv')
train.to_pickle('train.pkl')
pd.read_pickle('train.pkl')
Explanation: How do I use pandas with scikit-learn to create Kaggle submissions?
End of explanation
orders = pd.read_table('http://bit.ly/chiporders')
orders.head()
orders.item_name.str.upper()
orders[orders.item_name.str.contains('Chicken')]
Explanation: How do I use string methods in pandas?
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.drop(2, axis=0).head()
drinks.mean(axis=1).head()
Explanation: How do I use the "axis" parameter in pandas?
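The string aliases 'index' (axis=0) and 'columns' (axis=1) are often easier to remember; these two calls are equivalent to the ones above:
drinks.drop(2, axis='index').head()      # same as axis=0: drop a row
drinks.mean(axis='columns').head()       # same as axis=1: average across each row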
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
ufo.dtypes
ufo.Time.str.slice(-5, -3).astype(int).head()
ufo['Time'] = pd.to_datetime(ufo.Time)
ufo.dtypes
ufo.head()
ufo.Time.dt.dayofyear.head()
ts = pd.to_datetime('1/1/1999')
ufo.loc[ufo.Time > ts, :].head()
%matplotlib inline
ufo['Year'] = ufo.Time.dt.year
ufo.head()
ufo.Year.value_counts().sort_index().plot()
ufo.sample(3)
ufo.sample(frac=0.001)
Explanation: How do I work with dates and times in pandas?
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.index
drinks.loc[23, :]
drinks.set_index('country', inplace=True)
drinks.head()
drinks.loc['Brazil', 'beer_servings']
drinks.index.name = None
drinks.head()
drinks.reset_index(inplace=True)
drinks.head()
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.set_index('country', inplace=True)
drinks.head()
drinks.continent.head()
drinks.continent.value_counts()['Africa']
drinks.continent.value_counts().sort_index().head()
people = pd.Series([3000000, 85000], index=['Albania', 'Andorra'], name='population')
people
drinks.beer_servings * people
pd.concat([drinks, people], axis=1).head()
Explanation: What do I need to know about the pandas index?
End of explanation
drinks = pd.read_csv('http://bit.ly/drinksbycountry')
drinks.head()
drinks.beer_servings.mean()
drinks.groupby('continent').beer_servings.mean()
drinks[drinks.continent=='Africa'].beer_servings.mean()
drinks.groupby('continent').beer_servings.agg(['count', 'min', 'max', 'mean'])
%matplotlib inline
drinks.groupby('continent').mean().plot(kind='bar')
Explanation: When should I use a "groupby" in pandas?
End of explanation
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
ufo.drop('City', axis=1, inplace=True)
ufo.head()
ufo.dropna(how='any').shape
ufo.shape
Explanation: When should I use "inplace" parameter in pandas?
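A rule of thumb not shown above: inplace=True mutates the object and returns None, so the explicit assignment form does the same job and is often easier to reason about:
ufo = ufo.set_index('Time')    # equivalent to ufo.set_index('Time', inplace=True)
ufo.tail()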
End of explanation |
15,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Acme
Step2: Install dm_control
The next cell will install environments provided by dm_control if you have an institutional MuJoCo license. This is not necessary, but without this you won't be able to use the dm_cartpole environment below and can instead follow this colab using gym environments. To do so simply expand the following cell, paste in your license file, and run the cell.
Alternatively, Colab supports using a Jupyter kernel on your local machine which can be accomplished by following the guidelines here
Step3: Import Modules
Now we can import all the relevant modules.
Step4: Load an environment
We can now load an environment. In what follows we'll create an environment in order to generate and visualize a single state from that environment. Just select the environment you want to use and run the cell.
Step5: Environment Spec
We will later interact with the environment in a loop corresponding to the following diagram
Step6: Build a policy network that maps observations to actions.
The most important part of a reinforcement learning algorithm is potentially the policy that maps environment observations to actions. We can use a simple neural network to create a policy, in this case a simple feedforward MLP with layer norm. For our TensorFlow agents we make use of the sonnet library to specify networks or modules; all of the networks we will work with also have an initial batch dimension which allows for batched inference/learning.
It is possible that the observations returned by the environment are nested in some way
Step7: Create an actor
An Actor is the part of our framework that directly interacts with an environment by generating actions. In more detail the earlier diagram can be expanded to show exactly how this interaction occurs
Step8: All actors have the following public methods and attributes
Step10: Evaluate the random actor's policy.
Although we have instantiated an actor with a policy, the policy has not yet learned to achieve any task reward, and is essentially just acting randomly. However this is a perfect opportunity to see how the actor and environment interact. Below we define a simple helper function to display a video given frames from this interaction, and we show 500 steps of the actor taking actions in the world.
Step11: Storing actor experiences in a replay buffer
Many RL agents utilize a data structure such as a replay buffer to store data from the environment (e.g. observations) along with actions taken by the actor. This data will later be fed into a learning process in order to update the policy. Again we can expand our earlier diagram to include this step
Step12: We could interact directly with Reverb in order to add data to replay. However in Acme we have an additional layer on top of this data-storage that allows us to use the same interface no matter what kind of data we are inserting.
This layer in Acme corresponds to an Adder which adds experience to a data table. We provide several adders that differ based on the type of information that is desired to be stored in the table, however in this case we will make use of an NStepTransitionAdder which stores simple transitions (if N=1) or accumulates N-steps to form an aggregated transition.
Step13: We can either directly use the adder to add transitions to replay directly using the add() and add_first() methods as follows
Step14: Since this is a common enough way to observe data, Actors in Acme generally take an Adder instance that they use to define their observation methods. We saw earlier that the FeedForwardActor like all Actors defines observe and observe_first methods. If we give the actor an Adder instance at init then it will use this adder to make observations.
Step15: Below we repeat the same process, but using actor and its observe methods. We note these subtle changes below.
Step16: Learning from experiences in replay
Acme provides multiple learning algorithms/agents. Here, we will use the Acme's D4PG learning algorithm to learn from the data collected by the actor. To do so, we first create a TensorFlow dataset from the Reverb table using the make_dataset function.
Step17: In what follows we'll make use of D4PG, an actor-critic learning algorithm. D4PG is a somewhat complicated algorithm, so we'll leave a full explanation of this method to the accompanying paper (see the documentation).
However, since D4PG is an actor-critic algorithm we will have to specify a critic for it (a value function). In this case D4PG uses a distributional critic as well. D4PG also makes use of online and target networks so we need to create copies of both the policy_network (from earlier) and the new critic network we are about to create.
To build our critic networks, we use a multiplexer, which is simply a neural network module that takes multiple inputs and processes them in different ways before combining them and processing further. In the case of Acme's CriticMultiplexer, the inputs are observations and actions, each with their own network torso. There is then a critic network module that processes the outputs of the observation network and the action network and outputs a tensor.
Finally, in order to optimize these networks the learner must receive networks with the variables created. We have utilities in Acme to handle exactly this, and we do so in the final lines of the following code block.
Step18: We can now create a learner that uses these networks. Note that here we're using the same discount factor as was used in the transition adder. The rest of the parameters are reasonable defaults.
Note however that we will log output to the terminal at regular intervals. We have also turned off checkpointing of the network weights (i.e. saving them). This is usually used by default but can cause issues with interactive colab sessions.
Step19: Inspecting the learner's public methods we see that it primarily exists to expose its variables and update them. IE this looks remarkably similar to supervised learning.
Step20: The learner's step() method samples a batch of data from the replay dataset given to it, and performs optimization using the optimizer, logging loss metrics along the way. Note
Step21: Training loop
Finally, we can put all of the pieces together and run some training steps in the environment, alternating the actor's experience gathering with learner's learning.
This is a simple training loop that runs for num_training_episodes episodes where the actor and learner take turns generating and learning from experience respectively
Step22: Putting it all together
Step23: Of course we could have just used the agents.D4PG agent directly which sets
all of this up for us. We'll stick with this agent we've just created, but most of the steps outlined in this tutorial can be skipped by just making use of a
prebuilt agent and the environment loop.
Training the full agent
To simplify collecting and storing experiences, you can also directly use Acme's EnvironmentLoop which runs the environment loop for a specified number of episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely.
Step24: Evaluate the D4PG agent
We can now evaluate the agent. Note that this will use the noisy behavior policy, and so won't quite be optimal. If we wanted to be absolutely precise we could easily replace this with the noise-free policy. Note that the optimal policy can get about 1000 reward in this environment. D4PG should generally get to that within 50-100 learner steps. We've cut it off at 50 and not dropped the behavior noise just to simplify this tutorial. | Python Code:
#@title Install necessary dependencies.
!sudo apt-get install -y xvfb ffmpeg
!pip install 'gym==0.10.11'
!pip install imageio
!pip install PILLOW
!pip install 'pyglet==1.3.2'
!pip install pyvirtualdisplay
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-acme[envs]
from IPython.display import clear_output
clear_output()
Explanation: Acme: Tutorial
<a href="https://colab.research.google.com/github/deepmind/acme/blob/master/examples/tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This colab provides an overview of how Acme's modules can be stacked together to create reinforcement learning agents. It shows how to fit networks to environment specs, create actors, learners, replay buffers, datasets, adders, and full agents. It also highlights where you can swap out certain modules to create your own Acme based agents.
Installation
In the first few cells we'll start by installing all of the necessary dependencies (and a few optional ones).
End of explanation
#@title Add your License
#@test {"skip": true}
mjkey =
.strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL dependencies
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Get MuJoCo binaries
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Install dm_control
!pip install dm_control
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Check that the installation succeeded
try:
from dm_control import suite
env = suite.load('cartpole', 'swingup')
pixels = env.physics.render()
except Exception as e:
raise e from RuntimeError(
'Something went wrong during installation. Check the shell output above '
'for more information.')
else:
from IPython.display import clear_output
clear_output()
del suite, env, pixels
Explanation: Install dm_control
The next cell will install environments provided by dm_control if you have an institutional MuJoCo license. This is not necessary, but without this you won't be able to use the dm_cartpole environment below and can instead follow this colab using gym environments. To do so simply expand the following cell, paste in your license file, and run the cell.
Alternatively, Colab supports using a Jupyter kernel on your local machine which can be accomplished by following the guidelines here: https://research.google.com/colaboratory/local-runtimes.html. This will allow you to install dm_control by following instructions in https://github.com/deepmind/dm_control and using a personal MuJoCo license.
End of explanation
#@title Import modules.
#python3
%%capture
import copy
import pyvirtualdisplay
import imageio
import base64
import IPython
from acme import environment_loop
from acme.tf import networks
from acme.adders import reverb as adders
from acme.agents.tf import actors as actors
from acme.datasets import reverb as datasets
from acme.wrappers import gym_wrapper
from acme import specs
from acme import wrappers
from acme.agents.tf import d4pg
from acme.agents import agent
from acme.tf import utils as tf2_utils
from acme.utils import loggers
import gym
import dm_env
import matplotlib.pyplot as plt
import numpy as np
import reverb
import sonnet as snt
import tensorflow as tf
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
Explanation: Import Modules
Now we can import all the relevant modules.
End of explanation
environment_name = 'gym_mountaincar' # @param ['dm_cartpole', 'gym_mountaincar']
# task_name = 'balance' # @param ['swingup', 'balance']
def make_environment(domain_name='cartpole', task='balance'):
from dm_control import suite
env = suite.load(domain_name, task)
env = wrappers.SinglePrecisionWrapper(env)
return env
if 'dm_cartpole' in environment_name:
environment = make_environment('cartpole')
def render(env):
return env._physics.render(camera_id=0) #pylint: disable=protected-access
elif 'gym_mountaincar' in environment_name:
environment = gym_wrapper.GymWrapper(gym.make('MountainCarContinuous-v0'))
environment = wrappers.SinglePrecisionWrapper(environment)
def render(env):
return env.environment.render(mode='rgb_array')
else:
raise ValueError('Unknown environment: {}.'.format(environment_name))
# Show the frame.
frame = render(environment)
plt.imshow(frame)
plt.axis('off')
Explanation: Load an environment
We can now load an environment. In what follows we'll create an environment in order to generate and visualize a single state from that environment. Just select the environment you want to use and run the cell.
End of explanation
environment_spec = specs.make_environment_spec(environment)
print('actions:\n', environment_spec.actions, '\n')
print('observations:\n', environment_spec.observations, '\n')
print('rewards:\n', environment_spec.rewards, '\n')
print('discounts:\n', environment_spec.discounts, '\n')
Explanation: Environment Spec
We will later interact with the environment in a loop corresponding to the following diagram:
<img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/environment_loop.png" width="500" />
But before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. observations) or consumes (e.g. actions). The environment_spec will show you the form of the observations, rewards and discounts that the environment exposes and the form of the actions that can be taken.
End of explanation
# Calculate how big the last layer should be based on total # of actions.
action_spec = environment_spec.actions
action_size = np.prod(action_spec.shape, dtype=int)
exploration_sigma = 0.3
# In order the following modules:
# 1. Flatten the observations to be [B, ...] where B is the batch dimension.
# 2. Define a simple MLP which is the guts of this policy.
# 3. Make sure the output action matches the spec of the actions.
policy_modules = [
tf2_utils.batch_concat,
networks.LayerNormMLP(layer_sizes=(300, 200, action_size)),
networks.TanhToSpec(spec=environment_spec.actions)]
policy_network = snt.Sequential(policy_modules)
# We will also create a version of this policy that uses exploratory noise.
behavior_network = snt.Sequential(
policy_modules + [networks.ClippedGaussian(exploration_sigma),
networks.ClipToSpec(action_spec)])
Explanation: Build a policy network that maps observations to actions.
The most important part of a reinforcement learning algorithm is potentially the policy that maps environment observations to actions. We can use a simple neural network to create a policy, in this case a simple feedforward MLP with layer norm. For our TensorFlow agents we make use of the sonnet library to specify networks or modules; all of the networks we will work with also have an initial batch dimension which allows for batched inference/learning.
It is possible that the observations returned by the environment are nested in some way: e.g. environments from the dm_control suite are frequently returned as dictionaries containing position and velocity entries. Our network is allowed to arbitrarily map this dictionary to produce an action, but in this case we will simply concatenate these observations before feeding it through the MLP. We can do so using Acme's batch_concat utility to flatten the nested observation into a single dimension for each batch. If the observation is already flat this will be a no-op.
Similarly, the output of the MLP may have a different range of values than the action spec dictates. For this, we can use Acme's TanhToSpec module to rescale our actions to meet the spec.
End of explanation
actor = actors.FeedForwardActor(policy_network)
Explanation: Create an actor
An Actor is the part of our framework that directly interacts with an environment by generating actions. In more detail the earlier diagram can be expanded to show exactly how this interaction occurs:
<img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/actor_loop.png" width="500" />
While you can always write your own actor, in Acme we also provide a number of useful premade versions. For the network we specified above we will make use of a FeedForwardActor that wraps a single feed forward network and knows how to do things like handle any batch dimensions or record observed transitions.
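For reference, rolling your own actor mostly means implementing a small interface; the stub below is only a sketch (the method names follow the acme.core.Actor base class, the bodies are placeholders):
from acme import core

class ScriptedActor(core.Actor):
  """Illustrative stub of the Actor interface; replace the bodies with real logic."""

  def select_action(self, observation):
    # Must return an action matching environment_spec.actions; zeros are within bounds here.
    return np.zeros(environment_spec.actions.shape, dtype=environment_spec.actions.dtype)

  def observe_first(self, timestep):
    pass  # called with the first timestep of each episode

  def observe(self, action, next_timestep):
    pass  # called after every environment step

  def update(self):
    pass  # pull new weights from a learner, if any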
End of explanation
[method_or_attr for method_or_attr in dir(actor) # pylint: disable=expression-not-assigned
if not method_or_attr.startswith('_')]
Explanation: All actors have the following public methods and attributes:
End of explanation
def display_video(frames, filename='temp.mp4'):
Save and display video.
# Write video
with imageio.get_writer(filename, fps=60) as video:
for frame in frames:
video.append_data(frame)
# Read video and display the video
video = open(filename, 'rb').read()
b64_video = base64.b64encode(video)
video_tag = ('<video width="320" height="240" controls alt="test" '
'src="data:video/mp4;base64,{0}">').format(b64_video.decode())
return IPython.display.HTML(video_tag)
# Run the actor in the environment for desired number of steps.
frames = []
num_steps = 500
timestep = environment.reset()
for _ in range(num_steps):
frames.append(render(environment))
action = actor.select_action(timestep.observation)
timestep = environment.step(action)
# Save video of the behaviour.
display_video(np.array(frames))
Explanation: Evaluate the random actor's policy.
Although we have instantiated an actor with a policy, the policy has not yet learned to achieve any task reward, and is essentially just acting randomly. However this is a perfect opportunity to see how the actor and environment interact. Below we define a simple helper function to display a video given frames from this interaction, and we show 500 steps of the actor taking actions in the world.
End of explanation
# Create a table with the following attributes:
# 1. when replay is full we remove the oldest entries first.
# 2. to sample from replay we will do so uniformly at random.
# 3. before allowing sampling to proceed we make sure there is at least
# one sample in the replay table.
# 4. we use a default table name so we don't have to repeat it many times below;
# if we left this off we'd need to feed it into adders/actors/etc. below.
replay_buffer = reverb.Table(
name=adders.DEFAULT_PRIORITY_TABLE,
max_size=1000000,
remover=reverb.selectors.Fifo(),
sampler=reverb.selectors.Uniform(),
rate_limiter=reverb.rate_limiters.MinSize(min_size_to_sample=1))
# Get the server and address so we can give it to the modules such as our actor
# that will interact with the replay buffer.
replay_server = reverb.Server([replay_buffer], port=None)
replay_server_address = 'localhost:%d' % replay_server.port
Explanation: Storing actor experiences in a replay buffer
Many RL agents utilize a data structure such as a replay buffer to store data from the environment (e.g. observations) along with actions taken by the actor. This data will later be fed into a learning process in order to update the policy. Again we can expand our earlier diagram to include this step:
<img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/batch_loop.png" width="500" />
In order to make this possible, Acme leverages Reverb which is an efficient and easy-to-use data storage and transport system designed for Machine Learning research. Below we will create the replay buffer before interacting with it.
End of explanation
# Create a 5-step transition adder where in between those steps a discount of
# 0.99 is used (which should be the same discount used for learning).
adder = adders.NStepTransitionAdder(
client=reverb.Client(replay_server_address),
n_step=5,
discount=0.99)
Explanation: We could interact directly with Reverb in order to add data to replay. However in Acme we have an additional layer on top of this data-storage that allows us to use the same interface no matter what kind of data we are inserting.
This layer in Acme corresponds to an Adder which adds experience to a data table. We provide several adders that differ based on the type of information that is desired to be stored in the table, however in this case we will make use of an NStepTransitionAdder which stores simple transitions (if N=1) or accumulates N-steps to form an aggregated transition.
End of explanation
num_episodes = 2 #@param
for episode in range(num_episodes):
timestep = environment.reset()
adder.add_first(timestep)
while not timestep.last():
action = actor.select_action(timestep.observation)
timestep = environment.step(action)
adder.add(action=action, next_timestep=timestep)
Explanation: We can either directly use the adder to add transitions to replay directly using the add() and add_first() methods as follows:
End of explanation
actor = actors.FeedForwardActor(policy_network=behavior_network, adder=adder)
Explanation: Since this is a common enough way to observe data, Actors in Acme generally take an Adder instance that they use to define their observation methods. We saw earlier that the FeedForwardActor like all Actors defines observe and observe_first methods. If we give the actor an Adder instance at init then it will use this adder to make observations.
End of explanation
num_episodes = 2 #@param
for episode in range(num_episodes):
timestep = environment.reset()
actor.observe_first(timestep) # Note: observe_first.
while not timestep.last():
action = actor.select_action(timestep.observation)
timestep = environment.step(action)
actor.observe(action=action, next_timestep=timestep) # Note: observe.
Explanation: Below we repeat the same process, but using actor and its observe methods. We note these subtle changes below.
End of explanation
# This connects to the created reverb server; also note that we use a transition
# adder above so we'll tell the dataset function that so that it knows the type
# of data that's coming out.
dataset = datasets.make_reverb_dataset(
server_address=replay_server_address,
batch_size=256,
environment_spec=environment_spec,
transition_adder=True)
Explanation: Learning from experiences in replay
Acme provides multiple learning algorithms/agents. Here, we will use the Acme's D4PG learning algorithm to learn from the data collected by the actor. To do so, we first create a TensorFlow dataset from the Reverb table using the make_dataset function.
End of explanation
critic_network = snt.Sequential([
networks.CriticMultiplexer(
observation_network=tf2_utils.batch_concat,
action_network=tf.identity,
critic_network=networks.LayerNormMLP(
layer_sizes=(400, 300),
activate_final=True)),
# Value-head gives a 51-atomed delta distribution over state-action values.
networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51)])
# Create the target networks
target_policy_network = copy.deepcopy(policy_network)
target_critic_network = copy.deepcopy(critic_network)
# We must create the variables in the networks before passing them to learner.
tf2_utils.create_variables(network=policy_network,
input_spec=[environment_spec.observations])
tf2_utils.create_variables(network=critic_network,
input_spec=[environment_spec.observations,
environment_spec.actions])
tf2_utils.create_variables(network=target_policy_network,
input_spec=[environment_spec.observations])
tf2_utils.create_variables(network=target_critic_network,
input_spec=[environment_spec.observations,
environment_spec.actions])
Explanation: In what follows we'll make use of D4PG, an actor-critic learning algorithm. D4PG is a somewhat complicated algorithm, so we'll leave a full explanation of this method to the accompanying paper (see the documentation).
However, since D4PG is an actor-critic algorithm we will have to specify a critic for it (a value function). In this case D4PG uses a distributional critic as well. D4PG also makes use of online and target networks so we need to create copies of both the policy_network (from earlier) and the new critic network we are about to create.
To build our critic networks, we use a multiplexer, which is simply a neural network module that takes multiple inputs and processes them in different ways before combining them and processing further. In the case of Acme's CriticMultiplexer, the inputs are observations and actions, each with their own network torso. There is then a critic network module that processes the outputs of the observation network and the action network and outputs a tensor.
Finally, in order to optimize these networks the learner must receive networks with the variables created. We have utilities in Acme to handle exactly this, and we do so in the final lines of the following code block.
End of explanation
learner = d4pg.D4PGLearner(policy_network=policy_network,
critic_network=critic_network,
target_policy_network=target_policy_network,
target_critic_network=target_critic_network,
dataset=dataset,
discount=0.99,
target_update_period=100,
policy_optimizer=snt.optimizers.Adam(1e-4),
critic_optimizer=snt.optimizers.Adam(1e-4),
# Log learner updates to console every 10 seconds.
logger=loggers.TerminalLogger(time_delta=10.),
checkpoint=False)
Explanation: We can now create a learner that uses these networks. Note that here we're using the same discount factor as was used in the transition adder. The rest of the parameters are reasonable defaults.
Note however that we will log output to the terminal at regular intervals. We have also turned off checkpointing of the network weights (i.e. saving them). This is usually used by default but can cause issues with interactive colab sessions.
End of explanation
[method_or_attr for method_or_attr in dir(learner) # pylint: disable=expression-not-assigned
if not method_or_attr.startswith('_')]
Explanation: Inspecting the learner's public methods we see that it primarily exists to expose its variables and update them, i.e. this looks remarkably similar to supervised learning.
End of explanation
learner.step()
Explanation: The learner's step() method samples a batch of data from the replay dataset given to it, and performs optimization using the optimizer, logging loss metrics along the way. Note: in order to sample from the replay dataset, there must be at least 1000 elements in the replay buffer (which it should already have thanks to the actor's added experiences).
End of explanation
num_training_episodes = 10 # @param {type: "integer"}
min_actor_steps_before_learning = 1000 # @param {type: "integer"}
num_actor_steps_per_iteration = 100 # @param {type: "integer"}
num_learner_steps_per_iteration = 1 # @param {type: "integer"}
learner_steps_taken = 0
actor_steps_taken = 0
for episode in range(num_training_episodes):
timestep = environment.reset()
actor.observe_first(timestep)
episode_return = 0
while not timestep.last():
# Get an action from the agent and step in the environment.
action = actor.select_action(timestep.observation)
next_timestep = environment.step(action)
# Record the transition.
actor.observe(action=action, next_timestep=next_timestep)
# Book-keeping.
episode_return += next_timestep.reward
actor_steps_taken += 1
timestep = next_timestep
# See if we have some learning to do.
if (actor_steps_taken >= min_actor_steps_before_learning and
actor_steps_taken % num_actor_steps_per_iteration == 0):
# Learn.
for learner_step in range(num_learner_steps_per_iteration):
learner.step()
learner_steps_taken += num_learner_steps_per_iteration
# Log quantities.
print('Episode: %d | Return: %f | Learner steps: %d | Actor steps: %d'%(
episode, episode_return, learner_steps_taken, actor_steps_taken))
Explanation: Training loop
Finally, we can put all of the pieces together and run some training steps in the environment, alternating the actor's experience gathering with learner's learning.
This is a simple training loop that runs for num_training_episodes episodes where the actor and learner take turns generating and learning from experience respectively:
Actor acts in environment & adds experience to replay for num_actor_steps_per_iteration steps<br>
Learner samples from replay data and learns from it for num_learner_steps_per_iteration steps<br>
Note: Since the learner and actor share a policy network, any learning done by the learner is automatically transferred to the actor's policy.
End of explanation
d4pg_agent = agent.Agent(actor=actor,
learner=learner,
min_observations=1000,
observations_per_step=8.)
Explanation: Putting it all together: an Acme agent
<img src="https://github.com/deepmind/acme/raw/master/docs/diagrams/agent_loop.png" width="500" />
Now that we've used all of the pieces and seen how they can interact, there's one more way we can put it all together. In the Acme design scheme, an agent is an entity with both a learner and an actor component that will piece together their interactions internally. An agent handles the interchange between actor adding experiences to the replay buffer and learner sampling from it and learning and in turn, sharing its weights back with the actor.
Similar to how we used num_actor_steps_per_iteration and num_learner_steps_per_iteration parameters in our custom training loop above, the agent parameters min_observations and observations_per_step specify the structure of the agent's training loop.
* min_observations specifies how many actor steps need to have happened to start learning.
* observations_per_step specifies how many actor steps should occur in between each learner step.
End of explanation
# This may be necessary if any of the episodes were cancelled above.
adder.reset()
# We also want to make sure the logger doesn't write to disk because that can
# cause issues in colab on occasion.
logger = loggers.TerminalLogger(time_delta=10.)
loop = environment_loop.EnvironmentLoop(environment, d4pg_agent, logger=logger)
loop.run(num_episodes=50)
Explanation: Of course we could have just used the agents.D4PG agent directly which sets
all of this up for us. We'll stick with this agent we've just created, but most of the steps outlined in this tutorial can be skipped by just making use of a
prebuilt agent and the environment loop.
Training the full agent
To simplify collecting and storing experiences, you can also directly use Acme's EnvironmentLoop which runs the environment loop for a specified number of episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely.
End of explanation
# Run the actor in the environment for desired number of steps.
frames = []
num_steps = 500
timestep = environment.reset()
for _ in range(num_steps):
frames.append(render(environment))
action = d4pg_agent.select_action(timestep.observation)
timestep = environment.step(action)
# Save video of the behaviour.
display_video(np.array(frames))
Explanation: Evaluate the D4PG agent
We can now evaluate the agent. Note that this will use the noisy behavior policy, and so won't quite be optimal. If we wanted to be absolutely precise we could easily replace this with the noise-free policy. Note that the optimal policy can get about 1000 reward in this environment. D4PG should generally get to that within 50-100 learner steps. We've cut it off at 50 and not dropped the behavior noise just to simplify this tutorial.
End of explanation |
15,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I've seen a couple of nice kernels here, but no one explained the importance of a morphological pre-processing of the data. So I decided to compare two approaches of a morphological normalization
Step1: A visible example of how do they work
Step2: So, what approach will be better for the given task? Let's see.
First of all, we need to load modules for linear algebra and data analysis as well as gensim (for training a Word2Vec, a classic algorithm for obtaining word embeddings). We also need some stuff from scikit-learn to teach and evaluate the classifier and pyplot to draw plots. seaborn will make the plots more beautiful.
Step3: And a little bit more of the linguistic tools! We will use a tokenization( breaking a stream of text up into meaningful elements called tokens, for instance, words) and a stop-word dictionary for English.
Step4: And check if the .csv-files with the data are okay.
Step5: So let's write some code. First of all, let's train a Word2Vec model. We will use the training set as a training corpus (Previously I used the test set, but it uses much more memory while the model trained on it has the same efficiency; thanks to @Gian12 for the notion). This set contains some NaN values, but we can just drop them since in our task their lack is not meaningful.
Step6: Let's make a list of sentences by merging the questions.
Step7: Okay, now we are up to the key method of preprocessing comparation. It provides lemmatization or stemming depending on the given flag.
Step8: And then we can make two different corpora to train the model
Step9: Now let's train the models. I've pre-defined these hyperparameters since models on them have the best performance. You can also try to play with them yourself.
Step10: Let's check the result of one of the models.
Step11: Great! The most similar words seem to be pretty meaningful. So, we have three trained models, we can encode the text data with the vectors - let's make some experiments! Let's make data sets from the loaded data frame. I take a chunk of the traning data because the run of the script on the full data takes too much time.
Step12: A little bit modified preprocess. Now it returns only words which model's vocabulary contains.
Step13: This method will help to obtaining a bag of means by vectorising the messages.
Step14: And now we can obtain the features matrices.
Step15: That's almost all! Now we can train the classifier and evaluate it's performance. It's better to use a metric classifier because we are performing operations in the vector space, so I choose a Logistic Regression. But of course you can try a something different and see what can change.
I also use cross-validation to train and to evaluate on the same data set.
Step16: So, the lemmatized model outperformed the "clear" model! And the stemmed model showed the worst result. Why does it happen?
Well, any morphological pre-processing of the training data for the model reduces the amount of information that model can obtain from the corpus. Some of the information, like the difference in morphological roots of the same words, seems to be not necessary, so it is better to remove it. This removal is a mush-have in synthetic languages (languages with high morpheme-per-word ratio, like Russian), and, as we can see, it is also pretty helpful in our task.
The same thing about stemming. Stemming further reduces the amount of information, making one stem for the different word forms. Sometimes this is helpful, but sometimes this can bring noise to the model since some stems of the different words can be ambiguous, and the model can't be able to separate "playstation" and, say, "play".
In other words, there is no silver bullet, and you should always check various option of pre-processing if you want to reach the best performance. However, lemmatisation nine times out of ten will increase the performance of your model.
However, the logarithmic loss of my approach is not very high, but you can use this notebook as a baseline and try to beat it's score yourself! Just download it and uncomment the commented strings (because Kaggle doesn't allow to use so much memory) | Python Code:
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem import LancasterStemmer
stemmer = LancasterStemmer()
lemmer = WordNetLemmatizer()
Explanation: I've seen a couple of nice kernels here, but no one explained the importance of morphological pre-processing of the data. So I decided to compare two approaches to morphological normalization: stemming and lemmatization. Both of them reduce a word to a regularized form, but stemming reduces the word to its stem, while lemmatization reduces the word to its morphological root with the help of a dictionary lookup.
I evaluate the efficiency of these approaches by comparing their performance with the naive Bag of Means method: every word is encoded with a word embedding vector, and then the common vector of two messages is computed as the mean of these vectors. Several studies have shown that such an approach can be a very strong baseline (Faruqui et al., 2014; Yu et al., 2014; Gershman and Tenenbaum, 2015; Kenter and de Rijke, 2015). Then I use the obtained vectors as feature vectors to train the classifiers.
I will also make a comparison with a default approach (no morphological pre-processing).
Okay, let's load NLTK and try to implement these two approaches with a Lancaster Stemmer (one of the most popular stemming algorithms) and a WordNet Lemmatizer (based on WordNet’s built-in morphy function):
End of explanation
print(stemmer.stem('dictionaries'))
print(lemmer.lemmatize('dictionaries'))
Explanation: A visible example of how they work:
End of explanation
from gensim import models
import numpy as np
from pandas import DataFrame, Series
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from gensim import models
import matplotlib.pyplot as plt
import seaborn
Explanation: So, what approach will be better for the given task? Let's see.
First of all, we need to load modules for linear algebra and data analysis as well as gensim (for training a Word2Vec model, a classic algorithm for obtaining word embeddings). We also need some utilities from scikit-learn to train and evaluate the classifier, and pyplot to draw plots. seaborn will make the plots more beautiful.
End of explanation
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize, RegexpTokenizer
stop = stopwords.words('english')
alpha_tokenizer = RegexpTokenizer('[A-Za-z]\w+')
Explanation: And a few more linguistic tools! We will use tokenization (breaking a stream of text up into meaningful elements called tokens, for instance, words) and a stop-word dictionary for English.
End of explanation
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
Explanation: And check if the .csv-files with the data are okay.
End of explanation
df_train = DataFrame.from_csv('../input/train.csv').dropna()
Explanation: So let's write some code. First of all, let's train a Word2Vec model. We will use the training set as a training corpus (previously I used the test set, but it uses much more memory while the model trained on it performs just as well; thanks to @Gian12 for pointing this out). This set contains some NaN values, but we can simply drop them, since their absence is not meaningful for our task.
End of explanation
texts = np.concatenate([df_train.question1.values, df_train.question2.values])
Explanation: Let's make a list of sentences by merging the questions.
End of explanation
def process_sent(words, lemmatize=False, stem=False):
words = words.lower()
tokens = alpha_tokenizer.tokenize(words)
for index, word in enumerate(tokens):
if lemmatize:
tokens[index] = lemmer.lemmatize(word)
elif stem:
tokens[index] = stemmer.stem(word)
else:
tokens[index] = word
return tokens
Explanation: Okay, now we come to the key method of the preprocessing comparison. It applies lemmatization or stemming depending on the given flag.
End of explanation
corpus_lemmatized = [process_sent(sent, lemmatize=True, stem=False) for sent in texts]
corpus_stemmed = [process_sent(sent, lemmatize=False, stem=True) for sent in texts]
corpus = [process_sent(sent) for sent in texts]
Explanation: And then we can make two different corpora to train the models: a stemmed corpus and a lemmatized corpus. We will also keep a "clean" (unprocessed) corpus for comparison.
End of explanation
VECTOR_SIZE = 100
min_count = 10
size = VECTOR_SIZE
window = 10
model_lemmatized = models.Word2Vec(corpus_lemmatized, min_count=min_count,
size=size, window=window)
model_stemmed = models.Word2Vec(corpus_stemmed, min_count=min_count,
size=size, window=window)
model = models.Word2Vec(corpus, min_count=min_count,
size=size, window=window)
Explanation: Now let's train the models. I've pre-defined these hyperparameters since models trained with them perform best. You can also try playing with them yourself.
End of explanation
model_lemmatized.most_similar('playstation')
Explanation: Let's check the result of one of the models.
End of explanation
q1 = df_train.question1.values[200000:]
q2 = df_train.question2.values[200000:]
Y = np.array(df_train.is_duplicate.values)[200000:]
Explanation: Great! The most similar words look quite meaningful. Now that we have three trained models, we can encode the text data as vectors and run some experiments. Let's build data sets from the loaded data frame. I take only a chunk of the training data because running the script on the full data takes too much time.
End of explanation
def preprocess_check(words, lemmatize=False, stem=False):
words = words.lower()
tokens = alpha_tokenizer.tokenize(words)
model_tokens = []
for index, word in enumerate(tokens):
if lemmatize:
lem_word = lemmer.lemmatize(word)
if lem_word in model_lemmatized.wv.vocab:
model_tokens.append(lem_word)
elif stem:
stem_word = stemmer.stem(word)
if stem_word in model_stemmed.wv.vocab:
model_tokens.append(stem_word)
else:
if word in model.wv.vocab:
model_tokens.append(word)
return model_tokens
Explanation: A slightly modified preprocessing step. It now returns only the words contained in the model's vocabulary.
End of explanation
old_err_state = np.seterr(all='raise')
def vectorize(words, words_2, model, num_features, lemmatize=False, stem=False):
features = np.zeros((num_features), dtype='float32')
words_amount = 0
words = preprocess_check(words, lemmatize, stem)
words_2 = preprocess_check(words_2, lemmatize, stem)
for word in words:
words_amount = words_amount + 1
features = np.add(features, model[word])
for word in words_2:
words_amount = words_amount + 1
features = np.add(features, model[word])
try:
features = np.divide(features, words_amount)
except FloatingPointError:
features = np.zeros(num_features, dtype='float32')
return features
Explanation: This method helps to obtain a bag-of-means representation by vectorizing each pair of questions.
End of explanation
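# Illustration only: embedding a single (assumed) question pair with the plain model;
# the result is one averaged word vector of length VECTOR_SIZE.
demo_vec = vectorize("How do I learn Python?", "What is the best way to learn Python?", model, VECTOR_SIZE)
print(demo_vec.shape)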
X_lem = []
for index, sentence in enumerate(q1):
X_lem.append(vectorize(sentence, q2[index], model_lemmatized, VECTOR_SIZE, True, False))
X_lem = np.array(X_lem)
X_stem = []
for index, sentence in enumerate(q1):
X_stem.append(vectorize(sentence, q2[index], model_stemmed, VECTOR_SIZE, False, True))
X_stem = np.array(X_stem)
X = []
for index, sentence in enumerate(q1):
X.append(vectorize(sentence, q2[index], model, VECTOR_SIZE))
X = np.array(X)
Explanation: And now we can build the feature matrices.
End of explanation
results = []
title_font = {'size':'10', 'color':'black', 'weight':'normal',
'verticalalignment':'bottom'}
axis_font = {'size':'10'}
plt.figure(figsize=(10, 5))
plt.xlabel('Training examples', **axis_font)
plt.ylabel('Accuracy', **axis_font)
plt.tick_params(labelsize=10)
for X_set, name, lstyle in [(X_lem, 'Lemmatizaton', 'dotted'),
(X_stem, 'Stemming', 'dashed'),
(X, 'Default', 'dashdot'),
]:
estimator = LogisticRegression(C = 1)
cv = ShuffleSplit(n_splits=6, test_size=0.01, random_state=0)
train_sizes=np.linspace(0.01, 0.99, 6)
train_sizes, train_scores, test_scores = learning_curve(estimator, X_set, Y, cv=cv, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
results.append({'preprocessing' : name, 'score' : train_scores_mean[-1]})
plt.plot(train_sizes, train_scores_mean, label=name, linewidth=5, linestyle=lstyle)
plt.legend(loc='best')
Explanation: That's almost all! Now we can train the classifier and evaluate its performance. It's better to use a metric-based classifier because we are working in a vector space, so I choose Logistic Regression. Of course, you can try something different and see what changes.
I also use cross-validation to train and evaluate on the same data set.
End of explanation
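# Optional illustrative sketch (not part of the original run): the discussion at the end
# of the notebook talks about log loss, so a quick cross-validated estimate on one of the
# feature sets built above could be obtained like this.
from sklearn.model_selection import cross_val_score
print(-cross_val_score(LogisticRegression(C=1), X_lem, Y, cv=3, scoring='neg_log_loss').mean())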
clf = LogisticRegression(C = 1)
clf.fit(X, Y)
#df_test = DataFrame.from_csv('../input/test.csv').fillna('None')
q1 = df_train.question1.values[:100]
q2 = df_train.question2.values[:100]
#q1 = df_test.question1.values
#q2 = df_test.question2.values
X_test = []
for index, sentence in enumerate(q1):
X_test.append(vectorize(sentence, q2[index], model, VECTOR_SIZE))
X_test = np.array(X_test)
result = clf.predict(X_test)
sub = DataFrame()
sub['is_duplicate'] = result
sub.to_csv('submission.csv', index=False)
Explanation: So, the lemmatized model outperformed the "clean" model! And the stemmed model showed the worst result. Why does this happen?
Well, any morphological pre-processing of the training data reduces the amount of information the model can extract from the corpus. Some of that information, such as differences between inflected forms of the same word, is not really necessary, so it is better to remove it. This removal is a must-have for synthetic languages (languages with a high morpheme-per-word ratio, like Russian), and, as we can see, it is also quite helpful in our task.
The same goes for stemming. Stemming reduces the amount of information even further, mapping different word forms to a single stem. Sometimes this is helpful, but sometimes it adds noise, since stems of different words can collide and the model may no longer be able to separate "playstation" from, say, "play".
In other words, there is no silver bullet, and you should always check various pre-processing options if you want to reach the best performance. However, lemmatization will improve your model's performance nine times out of ten.
The score of my approach is not particularly strong, but you can use this notebook as a baseline and try to beat its score yourself! Just download it and uncomment the commented lines (they are commented out because Kaggle doesn't allow using that much memory).
End of explanation |
15,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collapsed Gibbs sampler for supervised latent Dirichlet allocation
Step1: Generate topics
We assume a vocabulary of 25 terms, and create ten "topics", where each topic assigns exactly 5 consecutive terms equal probability.
Step2: Generate documents from topics
We generate 1,000 documents from these 10 topics by sampling 1,000 topic distributions, one for each document, from a Dirichlet distribution with parameter $\alpha = (1, \ldots, 1)$.
Step3: Generate responses
Step4: Estimate parameters
Step5: Predict response of test documents
Create 1,000 test documents using the same generative process as our training documents, and compute their actual responses.
Step6: Estimate their topic distributions using the trained model, then calculate the predicted responses using the mean of our samples of $\eta$ after burn-in as an estimate for $\eta$.
Step7: Measure the goodness of our prediction using root mean square error.
Step8: Two-step learning
Step9: Unregularized linear regression
train linear regression on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
Step10: L2-regularized linear regression
train ridge regression on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
Step11: Gradient boosted regression trees
train gradient boosted regressor on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
Step12: Conclusion
SLDA is slightly better than unregularized linear regression, and better than ridge regression or gradient boosted regression trees. The similar performance to unregularized linear regression is likely due to the fact that this test was set up as an exact problem - all parameters used in training, except $\beta$ (because we hand-picked the topics) and $\eta$ (because that was the one parameter we wanted to learn), were those used to generate documents, and in SLDA the likelihood of $y$ introduces a regularization on $\eta$ in the full-conditional for $z$, while unregularized linear regression does not enforce such a penalty.
Test with fewer topics
We now redo the previous test, but this time with fewer topics, in order to determine whether
the supervised portion of SLDA will produce topics different from those produced by LDA, and
prediction with SLDA is improved over LDA-and-a-regression.
Because we will no longer be solving an exact problem (the number of topics, and hence $\alpha$, will both be different from the document generation process), we expect SLDA to do better than LDA-and-a-regression, including unregularized linear regression.
Step13: We plot the SLDA topics again and note that they are indeed different!
Step14: Unregularized linear regression
Step15: L2-regularized linear regression
Step16: Gradient boosted regression trees
Step17: Unregularized linear regression with SLDA topics
Step18: L2-regularized linear regression with SLDA topics
Step19: Gradient boosted regression trees with SLDA topics | Python Code:
%matplotlib inline
from modules.helpers import plot_images
from functools import partial
from sklearn.metrics import (mean_squared_error)
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
imshow = partial(plt.imshow, cmap='gray', interpolation='nearest', aspect='auto')
rmse = lambda y_true, y_pred: np.sqrt(mean_squared_error(y_true, y_pred))
sns.set(style='white')
Explanation: Collapsed Gibbs sampler for supervised latent Dirichlet allocation
<div style="display:none">
$
\newcommand{\dir}{\mathop{\rm Dirichlet}\nolimits}
\newcommand{\dis}{\mathop{\rm Discrete}\nolimits}
\newcommand{\normal}{\mathop{\rm Normal}\nolimits}
\newcommand{\ber}{\mathop{\rm Bernoulli}\nolimits}
\newcommand{\btheta}{\mathbf{\theta}}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\cp}[2]{p \left( #1 \middle| #2 \right)}
\newcommand{\cN}[2]{\mathscr{N} \left( #1 \middle| #2 \right)}
\newcommand{\Betaf}{\mathop{\rm B}\nolimits}
\newcommand{\Gammaf}{\mathop{\Gamma}\nolimits}
\newcommand{\etd}[1]{\mathbf{z}^{(#1)}}
\newcommand{\sumetd}{\mathbf{z}}
\newcommand{\one}{\mathbf{1}}
$
</div>
Here is the collapsed Gibbs sampler for Blei and McAuliffe's supervised topic models. I am building on the collapsed Gibbs sampler I wrote for latent Dirichlet allocation.
The generative model for is as follows:
$$\begin{align}
\theta^{(d)} &\sim \dir(\alpha) &\text{(topic distribution for document $d \in {1, \ldots, D}$)} \
\phi^{(k)} &\sim \dir(\beta) &\text{(term distribution for topic $k \in {1, \ldots, K}$)} \
z_n^{(d)} \mid \theta^{(d)} &\sim \dis \left( \theta^{(d)} \right) &\text{(topic of $n$th token of document $d$, $n \in {1, \ldots, N^{(d)}}$)} \
w_n^{(d)} \mid \phi^{(z_n^{(d)})} &\sim \dis \left( \phi^{(z_n^{(d)})} \right) &\text{(term of $n$th token of document $d$, $n \in {1, \ldots, N^{(d)}}$)} \
\eta_k &\sim \normal \left( \mu, \nu^2 \right) &\text{(regression coefficient for topic $k \in {1, \ldots, K}$)} \
y^{(d)} \mid \eta, \etd{d} &\sim \normal \left( \eta \cdot \etd{d}, \sigma^2 \right) &\text{(response value of document $d \in {1, \ldots, D}$)}
\end{align}$$
where each token can be any one of $V$ terms in our vocabulary, and $\etd{d}$ is the empirical topic distribution of document $d$.
<img src="http://www.mdpi.com/remotesensing/remotesensing-05-02275/article_deploy/html/images/remotesensing-05-02275f2-1024.png" width="400">
<p style='text-align: center; font-style: italic;'>
Plate notation for supervised latent Dirichlet allocation.
<br/>
This diagram should replace $\beta_k$ with $\phi^{(k)}$, and each $\phi^{(k)}$ should be dependent on a single $\beta$.
</p>
The joint probability distribution can be factored as follows:
$$\begin{align}
\cp{\theta, \phi, z, w, \eta, y}{\alpha, \beta, \mu, \nu^2, \sigma^2}
&=
\prod_{k=1}^{K} \cp{\phi^{(k)}}{\beta}
\prod_{d=1}^{D} \cp{\theta^{(d)}}{\alpha}
\prod_{n=1}^{N^{(d)}} \cp{z_n^{(d)}}{\theta^{(d)}} \cp{w_n^{(d)}}{\phi^{(z_n^{(d)})}}
\ & \quad \times \prod_{k'=1}^{K} \cp{\eta_{k'}}{\mu, \nu^2}
\prod_{d'=1}^D \cp{y^{(d')}}{\eta, \etd{d'}, \sigma^2}
\ &=
\prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)} \cp{\phi^{(k)}}{b^{(k)} + \beta}
\prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)} \cp{\theta^{(d)}}{a^{(d)} + \alpha}
\ &\quad \times
\prod_{k'=1}^{K} \cN{\eta_{k'}}{\mu, \nu^2}
\prod_{d'=1}^{D} \cN{y^{(d')}}{\eta \cdot \etd{d'}, \sigma^2}
\end{align}$$
where $a_k^{(d)}$ is the number of tokens in document $d$ assigned to topic $k$, $b_v^{(k)}$ is the number of tokens equal to term $v$ and assigned to topic $k$, and $\Betaf$ is the multivariate Beta function. Marginalizing out $\theta$ and $\phi$ by integrating with respect to each $\theta^{(d)}$ and $\phi^{(k)}$ over their respective sample spaces yields
$$\begin{align}
\cp{z, w, \eta, y}{\alpha, \beta, \mu, \nu^2, \sigma^2} &=
\prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)}
\prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)}
\prod_{k'=1}^{K} \cN{\eta_{k'}}{\mu, \nu^2}
\prod_{d'=1}^{D} \cN{y^{(d')}}{\eta \cdot \etd{d'}, \sigma^2} \ &=
\cp{w}{z, \beta} \cp{z}{\alpha} \cp{\eta}{\mu, \nu^2} \cp{y}{\eta, z, \sigma^2}.
\end{align}$$
See my LDA notebook for step-by-step details of the previous two calculations.
Our goal is to calculate the posterior distribution
$$\cp{z, \eta}{w, y, \alpha, \beta, \mu, \nu^2, \sigma^2} =
\frac{\cp{z, w, \eta, y}{\alpha, \beta, \mu, \nu^2, \sigma^2}}
{\sum_{z'} \int \cp{z', w, \eta', y}{\alpha, \beta, \mu, \nu^2, \sigma^2} d\eta'}$$
in order to infer the topic assignments $z$ and regression coefficients $\eta$ from the given term assignments $w$ and response data $y$. Since calculating this directly is infeasible, we resort to collapsed Gibbs sampling. The sampler is "collapsed" because we marginalized out $\theta$ and $\phi$, and will estimate them from the topic assignments $z$:
$$\hat\theta_k^{(d)} = \frac{a_k^{(d)} + \alpha_k}{\sum_{k'=1}^K \left(a_{k'}^{(d)} + \alpha_{k'} \right)},\quad
\hat\phi_v^{(k)} = \frac{b_v^{(k)} + \beta_v}{\sum_{v'=1}^V \left(b_{v'}^{(k)} + \beta_{v'} \right)}.$$
Gibbs sampling requires us to compute the full conditionals for each $z_n^{(d)}$ and $\eta_k$, i.e. we need to calculate, for all $n$, $d$ and $k$,
$$\begin{align}
\cp{z_n^{(d)} = k}{z \setminus z_n^{(d)}, w, \eta, y, \alpha, \beta, \mu, \nu^2, \sigma^2}
&\propto
\cp{z_n^{(d)} = k, z \setminus z_n^{(d)}, w, \eta, y}{\alpha, \beta, \mu, \nu^2, \sigma^2}
\ &\propto
\frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)}
\left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right)
\prod_{d'=1}^{D} \cN{y^{(d')}}{\eta \cdot \etd{d'}, \sigma^2}
\ &\propto
\frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)}
\left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right)
\exp \left( \frac{1}{2 \sigma^2} \frac{\eta_k}{N^{(d)}} \left( 2 \left[ y^{(d)} - \eta \cdot \left( \etd{d} \setminus z_n^{(d)} \right) \right] - \frac{\eta_k}{N^{(d)}} \right) \right)
\end{align}$$
where the "set-minus" notation $\cdot \setminus z_n^{(d)}$ denotes the variable the notation is applied to with the entry $z_n^{(d)}$ removed (again, see my LDA notebook for details). This final proportionality is true since
$$\begin{align}
\prod_{d'=1}^{D} \cN{y^{(d')}}{\eta \cdot \etd{d'}, \sigma^2}
&\propto
\prod_{d'=1}^{D} \exp \left( -\frac{ \left( y^{(d')} - \eta \cdot \etd{d'} \right)^2 }{2 \sigma^2} \right)
\ &\propto
\prod_{d'=1}^{D} \exp \left( \frac{ 2 y^{(d')} \eta \cdot \etd{d'} - \left( \eta \cdot \etd{d'} \right)^2 }{2 \sigma^2} \right)
\ &=
\prod_{d'=1}^{D} \exp \left( \frac{ 2 y^{(d')} \left( \eta \cdot \left( \etd{d'} \setminus z_n^{(d)} \right) + \delta_{d, d'} \frac{\eta_k}{N^{(d)}} \right) - \left( \eta \cdot \left( \etd{d'} \setminus z_n^{(d)} \right) + \delta_{d, d'} \frac{\eta_k}{N^{(d)}} \right)^2 }{2 \sigma^2} \right)
\ &=
\prod_{d'=1}^{D} \exp \left( \frac{ 2 y^{(d')} \eta \cdot \left( \etd{d'} \setminus z_n^{(d)} \right) - \left( \eta \cdot \left( \etd{d'} \setminus z_n^{(d)} \right) \right)^2 }{2 \sigma^2} \right)
\exp \left( \frac{1}{2 \sigma^2} \frac{\eta_k}{N^{(d)}} \left( 2 \left[ y^{(d)} - \eta \cdot \left( \etd{d} \setminus z_n^{(d)} \right) \right] - \frac{\eta_k}{N^{(d)}} \right) \right)
\ &\propto
\exp \left( \frac{1}{2 \sigma^2} \frac{\eta_k}{N^{(d)}} \left( 2 \left[ y^{(d)} - \eta \cdot \left( \etd{d} \setminus z_n^{(d)} \right) \right] - \frac{\eta_k}{N^{(d)}} \right) \right)
\end{align}$$
where $\delta_{d, d'}$ is the Kronecker delta.
We also need to calculate the full conditional for $\eta$. In order to do this, let $Z = (\etd{1} \cdots \etd{D})$ be the matrix whose columns are the empirical topic distributions $\etd{d}$, let $I$ be the identity matrix and $\one$ be the vector of ones, and note that
$$\prod_{k=1}^{K} \cN{\eta_{k}}{\mu, \nu^2} = \cN{\eta}{\mu \one, \nu^2 I}$$
$$\prod_{d=1}^{D} \cN{y^{(d)}}{\eta \cdot \etd{d'}, \sigma^2} = \cN{y}{Z^T \eta, \sigma^2 I}.$$
Therefore
$$\begin{align}
\cp{\eta}{z, w, y, \alpha, \beta, \mu, \nu^2, \sigma^2}
&\propto
\cp{z, w, \eta, y}{\alpha, \beta, \mu, \nu^2, \sigma^2}
\ &\propto
\cN{\eta}{\mu \one, \nu^2 I} \cN{y}{Z^T \eta, \sigma^2 I}
\ &\propto
\cN{\eta}{\Sigma \left( \nu^{-2} \mu \one + \sigma^{-2} Zy \right), \Sigma}
\end{align}$$
where $\Sigma^{-1} = \nu^{-2} I + \sigma^{-2} ZZ^T$ (see Section 9.3 of Kevin Murphy's notes for a derivation of Bayes rule for linear Gaussian systems). It is interesting to consider the mean and variance of $\eta$ in the two variance regimes $\sigma \gg \nu$ and $\sigma \ll \nu$. If $\sigma \gg \nu$, then
$$\Sigma^{-1} = \nu^{-2} \left( I + \left( \frac{\nu}{\sigma} \right)^2 ZZ^T \right)
\approx \nu^{-2} I$$
which implies that the covariance structure of $\eta$ is $\Sigma \approx \nu^2 I$ and the mean of $\eta$ is
$$\Sigma \left( \nu^{-2} \mu \one + \sigma^{-2} Zy \right)
\approx \mu \one + \left( \frac{\nu}{\sigma} \right)^2 Zy
\approx \mu \one,$$
thus $\eta$ is approximately distributed according to its prior distribution. On the other hand, if $\sigma \ll \nu$, then
$$\Sigma^{-1} = \sigma^{-2} \left( \left( \frac{\sigma}{\nu} \right)^2 I + ZZ^T \right)
\approx \sigma^{-2} ZZ^T.$$
Notice that $ZZ^T$ is almost surely positive definite, and hence almost surely invertible. Therefore $\Sigma \approx \sigma^2 (ZZ^T)^{-1}$ and
$$\Sigma \left( \nu^{-2} \mu \one + \sigma^{-2} Zy \right)
\approx \left( \frac{\sigma}{\nu} \right)^2 \mu (ZZ^T)^{-1} \one + (ZZ^T)^{-1} Zy
\approx (ZZ^T)^{-1} Zy,$$
thus $\eta$ is approximately distributed as the least-squares solution of $y = Z^T \eta$.
Graphical test
End of explanation
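# Illustrative numpy sketch of the eta full-conditional derived above. The real update is
# implemented inside the external slda package; Z here denotes the K x D matrix whose columns
# are the empirical topic distributions, and all variable names below are ours, not the package's.
def sample_eta_conditional(Z, y, mu, nu2, sigma2, rng=np.random):
    K = Z.shape[0]
    Sigma_inv = np.eye(K) / nu2 + Z @ Z.T / sigma2
    Sigma = np.linalg.inv(Sigma_inv)
    mean = Sigma @ (mu * np.ones(K) / nu2 + Z @ y / sigma2)
    return rng.multivariate_normal(mean, Sigma)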
V = 25
K = 10
N = 100
D = 1000
topics = []
topic_base = np.concatenate((np.ones((1, 5)) * 0.2, np.zeros((4, 5))), axis=0).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i * 5))
topic_base = np.concatenate((np.ones((5, 1)) * 0.2, np.zeros((5, 4))), axis=1).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i))
topics = np.array(topics)
plt.figure(figsize=(10, 5))
plot_images(plt, topics, (5, 5), layout=(2, 5), figsize=(10, 5))
Explanation: Generate topics
We assume a vocabulary of 25 terms, and create ten "topics", where each topic assigns exactly 5 consecutive terms equal probability.
End of explanation
alpha = np.ones(K)
np.random.seed(42)
thetas = np.random.dirichlet(alpha, size=D)
topic_assignments = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas])
word_assignments = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix = np.array([np.histogram(word_assignments[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
imshow(doc_term_matrix)
Explanation: Generate documents from topics
We generate 1,000 documents from these 10 topics by sampling 1,000 topic distributions, one for each document, from a Dirichlet distribution with parameter $\alpha = (1, \ldots, 1)$.
End of explanation
# choose parameter values
nu2 = 10
sigma2 = 1
np.random.seed(42)
eta = np.random.normal(scale=nu2, size=K)
y = [np.dot(eta, thetas[i]) for i in range(D)] + np.random.normal(scale=sigma2, size=D)
# plot histogram of responses
print(eta)
_ = plt.hist(y, bins=20)
Explanation: Generate responses
End of explanation
from slda.topic_models import SLDA
_K = 10
_alpha = alpha
_beta = np.repeat(0.01, V)
_mu = 0
_nu2 = nu2
_sigma2 = sigma2
n_iter = 500
slda = SLDA(_K, _alpha, _beta, _mu, _nu2, _sigma2, n_iter, seed=42)
%%time
slda.fit(doc_term_matrix, y)
plot_images(plt, slda.phi, (5, 5), (2, 5), figsize=(10, 5))
print(slda.phi)
print(np.sum(slda.phi, axis=0))
print(np.sum(slda.phi, axis=1))
topic_order = [1, 2, 4, 3, 9, 0, 7, 5, 6, 8]
plot_images(plt, slda.phi[topic_order], (5, 5), (2, 5), figsize=(10, 5))
imshow(slda.theta)
plt.plot(slda.loglikelihoods)
plt.plot(np.diff(slda.loglikelihoods)[-100:])
burn_in = max(n_iter - 100, int(n_iter / 2))
slda.loglikelihoods[burn_in:].mean()
eta_pred = slda.eta[burn_in:].mean(axis=0)
print(eta)
print(eta_pred[topic_order])
np.linalg.norm(eta - eta_pred[topic_order])
Explanation: Estimate parameters
End of explanation
np.random.seed(42^2)
thetas_test = np.random.dirichlet(np.ones(K), size=D)
topic_assignments_test = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas_test])
word_assignments_test = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments_test[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix_test = np.array([np.histogram(word_assignments_test[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
y_test = [np.dot(eta, thetas_test[i]) for i in range(D)]
imshow(doc_term_matrix_test)
Explanation: Predict response of test documents
Create 1,000 test documents using the same generative process as our training documents, and compute their actual responses.
End of explanation
thetas_test_slda = slda.transform(doc_term_matrix_test)
y_slda = [np.dot(eta_pred, thetas_test_slda[i]) for i in range(D)]
Explanation: Estimate their topic distributions using the trained model, then calculate the predicted responses using the mean of our samples of $\eta$ after burn-in as an estimate for $\eta$.
End of explanation
rmse(y_test, y_slda)
Explanation: Measure the goodness of our prediction using root mean square error.
End of explanation
from slda.topic_models import LDA
lda = LDA(_K, _alpha, _beta, n_iter, seed=42)
%%time
lda.fit(doc_term_matrix)
plot_images(plt, lda.phi, (5, 5), (2, 5), figsize=(10, 5))
topic_order_lda = [1, 2, 0, 3, 9, 4, 7, 5, 6, 8]
plot_images(plt, lda.phi[topic_order_lda], (5, 5), (2, 5), figsize=(10, 5))
imshow(lda.theta)
plt.plot(lda.loglikelihoods)
thetas_test_lda = lda.transform(doc_term_matrix_test)
Explanation: Two-step learning: learn topics, then learn regression
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression(fit_intercept=False)
lr.fit(lda.theta, y)
y_lr = lr.predict(thetas_test_lda)
rmse(y_test, y_lr)
print(eta)
print(lr.coef_[topic_order_lda])
np.linalg.norm(eta - lr.coef_[topic_order_lda])
Explanation: Unregularized linear regression
train linear regression on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
End of explanation
from sklearn.linear_model import Ridge
lrl2 = Ridge(alpha=1., fit_intercept=False)
lrl2.fit(lda.theta, y)
y_lrl2 = lrl2.predict(thetas_test_lda)
rmse(y_test, y_lrl2)
print(eta)
print(lrl2.coef_[topic_order_lda])
np.linalg.norm(eta - lrl2.coef_[topic_order_lda])
Explanation: L2-regularized linear regression
train ridge regression on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
End of explanation
from sklearn.ensemble import GradientBoostingRegressor
gbr = GradientBoostingRegressor()
gbr.fit(lda.theta, y)
y_gbr = gbr.predict(thetas_test_lda)
rmse(y_test, y_gbr)
Explanation: Gradient boosted regression trees
train gradient boosted regressor on training data
calculate response on test data
measure the goodness of prediction using root mean square error.
End of explanation
_K = 5
_alpha = np.repeat(1. / _K, _K)
_beta = np.repeat(0.01, V)
_mu = 0
_nu2 = nu2
_sigma2 = sigma2
n_iter = 500
slda1 = SLDA(_K, _alpha, _beta, _mu, _nu2, _sigma2, n_iter, seed=42)
%%time
slda1.fit(doc_term_matrix, y)
plot_images(plt, slda1.phi, (5, 5), (1, 5), figsize=(10, 5))
imshow(slda1.theta)
plt.plot(slda1.loglikelihoods)
burn_in1 = max(n_iter - 100, int(n_iter / 2))
slda1.loglikelihoods[burn_in1:].mean()
eta_pred1 = slda1.eta[burn_in1:].mean(axis=0)
eta_pred1
thetas_test_slda1 = slda1.transform(doc_term_matrix_test)
y_slda1 = [np.dot(eta_pred1, thetas_test_slda1[i]) for i in range(D)]
rmse(y_test, y_slda1)
lda1 = LDA(_K, _alpha, _beta, n_iter, seed=42)
%%time
lda1.fit(doc_term_matrix)
plot_images(plt, lda1.phi, (5, 5), (1, 5), figsize=(10, 5))
Explanation: Conclusion
SLDA is slightly better than unregularized linear regression, and better than ridge regression or gradient boosted regression trees. The similar performance to unregularized linear regression is likely due to the fact that this test was set up as an exact problem - all parameters used in training, except $\beta$ (because we hand-picked the topics) and $\eta$ (because that was the one parameter we wanted to learn), were those used to generate documents, and in SLDA the likelihood of $y$ introduces a regularization on $\eta$ in the full-conditional for $z$, while unregularized linear regression does not enforce such a penalty.
Test with fewer topics
We now redo the previous test, but this time with fewer topics, in order to determine whether
the supervised portion of SLDA will produce topics different from those produced by LDA, and
prediction with SLDA is improved over LDA-and-a-regression.
Because we will no longer be solving an exact problem (the number of topics, and hence $\alpha$, will both be different from the document generation process), we expect SLDA to do better than LDA-and-a-regression, including unregularized linear regression.
End of explanation
plot_images(plt, slda1.phi, (5, 5), (1, 5), figsize=(10, 5))
imshow(lda1.theta)
plt.plot(lda1.loglikelihoods)
thetas_test_lda1 = lda1.transform(doc_term_matrix_test)
Explanation: We plot the SLDA topics again and note that they are indeed different!
End of explanation
lr1 = LinearRegression(fit_intercept=False)
lr1.fit(lda1.theta, y)
y_lr1 = lr1.predict(thetas_test_lda1)
rmse(y_test, y_lr1)
Explanation: Unregularized linear regression
End of explanation
lrl21 = Ridge(alpha=1., fit_intercept=False)
lrl21.fit(lda1.theta, y)
y_lrl21 = lrl21.predict(thetas_test_lda1)
rmse(y_test, y_lrl21)
Explanation: L2-regularized linear regression
End of explanation
gbr1 = GradientBoostingRegressor()
gbr1.fit(lda1.theta, y)
y_gbr1 = gbr1.predict(thetas_test_lda1)
rmse(y_test, y_gbr1)
Explanation: Gradient boosted regression trees
End of explanation
lr1_0 = LinearRegression(fit_intercept=False)
lr1_0.fit(slda1.theta, y)
y_lr1_0 = lr1_0.predict(thetas_test_slda1)
rmse(y_test, y_lr1_0)
Explanation: Unregularized linear regression with SLDA topics
End of explanation
lrl21_0 = Ridge(alpha=1., fit_intercept=False)
lrl21_0.fit(slda1.theta, y)
y_lrl21_0 = lrl21_0.predict(thetas_test_slda1)
rmse(y_test, y_lrl21_0)
Explanation: L2-regularized linear regression with SLDA topics
End of explanation
gbr1_0 = GradientBoostingRegressor()
gbr1_0.fit(slda1.theta, y)
y_gbr1_0 = gbr1_0.predict(thetas_test_slda1)
rmse(y_test, y_gbr1_0)
Explanation: Gradient boosted regression trees with SLDA topics
End of explanation |
15,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$A(t,T) = \Sigma_i A_i e^{-t/\tau_i} / (1 + e^{-T/2\tau_i})$
Step1: Simple exponential basis
$$ \mathbf{A}\mathbf{\alpha} = \mathbf{d}$$ | Python Code:
def AofT(time,T, ai, taui):
return ai*np.exp(-time/taui)/(1.+np.exp(-T/(2*taui)))
from SimPEG import *
import sys
sys.path.append("./DoubleLog/")
from plotting import mapDat
class LinearSurvey(Survey.BaseSurvey):
nD = None
def __init__(self, time, **kwargs):
self.time = time
self.nD = time.size
def projectFields(self, u):
return u
class LinearProblem(Problem.BaseProblem):
surveyPair = LinearSurvey
def __init__(self, mesh, G, **kwargs):
Problem.BaseProblem.__init__(self, mesh, **kwargs)
self.G = G
def fields(self, m, u=None):
return self.G.dot(m)
def Jvec(self, m, v, u=None):
return self.G.dot(v)
def Jtvec(self, m, v, u=None):
return self.G.T.dot(v)
Explanation: $A(t,T) = \Sigma_i A_i e^{-t/\tau_i} / (1 + e^{-T/2\tau_i})$
End of explanation
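# Quick illustrative evaluation of AofT defined above. The parameter values are arbitrary and
# chosen only to show the call signature: amplitude 1, time constant 1e-3 s, period T = 1e-2 s.
t_demo = np.logspace(-5, -2, 50)
a_demo = AofT(t_demo, 1e-2, 1., 1e-3)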
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
# time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10),1e-4*np.ones(10), 5e-4*np.ones(10)])
M = 41
tau = np.logspace(-4.5, -1, M)
N = time.size
A = np.zeros((N, M))
for j in range(M):
    A[:,j] = np.exp(-time/tau[j])/tau[j]
mtrue = np.zeros(M)
np.random.seed(1)
inds = np.random.random_integers(0, 41, size=5)
mtrue[inds] = np.r_[-10, 2, 1, 4, 5]
out = np.dot(A,mtrue)
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
for i, ind in enumerate(inds):
    temp, dum, dum = mapDat(mtrue[inds][i]*np.exp(-time/tau[ind])/tau[ind], 1e-1, stretch=2)
plt.semilogx(time, temp, 'k', alpha = 0.5)
outmap, ticks, tickLabels = mapDat(out, 1e-1, stretch=2)
ax.semilogx(time, outmap, 'k', lw=2)
ax.set_yticks(ticks)
ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
# from pymatsolver import MumpsSolver
abs(survey.dobs).min()
mesh = Mesh.TensorMesh([M])
prob = LinearProblem(mesh, A)
survey = LinearSurvey(time)
survey.pair(prob)
survey.makeSyntheticData(mtrue, std=0.01)
# survey.dobs = out
reg = Regularization.BaseRegularization(mesh)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.Wd = 1./(0.05*abs(survey.dobs)+0.05*300.)
opt = Optimization.ProjectedGNCG(maxIter=20)
# opt = Optimization.InexactGaussNewton(maxIter=20)
opt.lower = -1e-10
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
invProb.beta = 1e-4
beta = Directives.BetaSchedule()
beta.coolingFactor = 2
target = Directives.TargetMisfit()
inv = Inversion.BaseInversion(invProb, directiveList=[beta, target])
m0 = np.zeros_like(survey.mtrue)
mrec = inv.run(m0)
plt.semilogx(tau, mtrue, '.')
plt.semilogx(tau, mrec, '.')
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
obsmap, ticks, tickLabels = mapDat(survey.dobs, 1e0, stretch=2)
predmap, dum, dum = mapDat(invProb.dpred, 1e0, stretch=2)
ax.loglog(time, survey.dobs, 'k', lw=2)
ax.loglog(time, invProb.dpred, 'k.', lw=2)
# ax.set_yticks(ticks)
# ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
N = time.size
A = np.zeros((N, M))
for j in range(M):
A[:,j] = np.exp(-time/tau[j]) /tau[j]
mfund = mtrue.copy()
mfund[mfund<0.] = 0.
obs = np.dot(A, mtrue)
fund = np.dot(A, mfund)
pred = np.dot(A, mrec)
ip = obs-fund
ipobs = obs-pred
plt.loglog(time, obs, 'k.-', lw=2)
plt.loglog(time, -obs, 'k--', lw=2)
plt.loglog(time, fund, 'b.', lw=2)
plt.loglog(time, pred, 'b-', lw=2)
plt.loglog(time, -ip, 'r--', lw=2)
plt.loglog(time, abs(ipobs), 'r.', lw=2)
plt.ylim(abs(obs).min(), abs(obs).max())
plt.xlim(time.min(), time.max())
Explanation: Simple exponential basis
$$ \mathbf{A}\mathbf{\alpha} = \mathbf{d}$$
End of explanation |
15,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Permutation t-test on source data with spatio-temporal clustering
This example tests if the evoked response is significantly different between
two conditions across subjects. Here just for demonstration purposes
we simulate data from multiple subjects using one subject's data.
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g. fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only p = 1/(2 ** 6) = 0.015,
which is large.</p></div>
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere.
Step6: Finally, we want to compare the overall activity levels in each condition,
the diff is taken along the last axis (condition). The negative sign makes
it so condition1 > condition2 shows up as "red blobs" (instead of blue).
Step7: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal)
Step8: Visualize the clusters | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Permutation t-test on source data with spatio-temporal clustering
This example tests if the evoked response is significantly different between
two conditions across subjects. Here just for demonstration purposes
we simulate data from multiple subjects using one subject's data.
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep * 1000 # convert to milliseconds
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g. fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only p = 1/(2 ** 6) = 0.015,
which is large.</p></div>
End of explanation
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir).morph_mat
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
End of explanation
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
Explanation: Finally, we want to compare the overall activity levels in each condition,
the diff is taken along the last axis (condition). The negative sign makes
it so condition1 > condition2 shows up as "red blobs" (instead of blue).
End of explanation
print('Computing adjacency.')
adjacency = mne.spatial_src_adjacency(src)
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, adjacency=adjacency, n_jobs=1,
threshold=t_threshold, buffer_size=None,
verbose=True)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal)
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(
hemi='both', views='lateral', subjects_dir=subjects_dir,
time_label='temporal extent (ms)', size=(800, 800),
smoothing_steps=5, clim=dict(kind='value', pos_lims=[0, 1, 40]))
# brain.save_image('clusters.png')
Explanation: Visualize the clusters
End of explanation |
15,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Another Phi Identity
Developed from an email from D. B. Koski.
David writes, referring to his stumbling across the Fibonaccis left of zero
Step1: Let's evaluate the individual terms on either side of the + symbol. They're not both equal to 1, are they? That would mean we're just writing 1 + 1 == 2 every time. But we already know either term may be 0, so clearly we can't expect the same terms in each row.
Step2: Let's generalize by rewinding further back in the Fibonacci Numbers, starting with coefficients 610 and -144 respectively, and going forward from there. The printout below overlaps the one above, while showing that, indeed, we're free to press this identity in both directions.
Two terms with Fibonacci coefficients, three apart (the first ahead), and exponents of phi three apart (the first three higher), always add to give the number 2, starting with anchoring relationships such as | Python Code:
import math
import gmpy2
gmpy2.get_context().precision=200
def fibo(a=0, b=1):
while True:
yield a
a, b = b, a + b
fib_gen = fibo()
print("SEQ1:",[next(fib_gen) for _ in range(10)])
fib_gen = fibo(2, -1)
print("SEQ2:",[next(fib_gen) for _ in range(10)])
coeff0 = fibo()
coeff1 = fibo(2, -1)
Ø = (gmpy2.sqrt(5) + 1)/2
template = " 2 == {coeff0:>4} * Ø**({a:>3}) + {coeff1:>4} * Ø**({b:>3}) "
expr = "{coeff0:>3} * Ø**({a:>3}) + {coeff1:>3} * Ø**({b:>3})"
args = {}
for i in range(10):
args["a"] = 3 - i
args["b"] = -i
args["coeff0"] = next(coeff0)
args["coeff1"] = next(coeff1)
identity = template.format(**args)
print(identity, end=" --> ")
print("{:>20.18f}".format(eval(expr.format(**args))))
Explanation: Another Phi Identity
Developed from an email from D. B. Koski.
David writes, referring to his stumbling across the Fibonaccis left of zero:
Not sure if I ever explained how I found this out on my own. The Fibonnaci before zero. It had to do with UVW tetrahedra.
Setting the U at $2\phi^{0} + 0\phi^{-3}$, the next step up for the V was $3\phi^{0} + 1\phi^{-3}$ and the larger W at $5\phi^{0} + 1\phi^{3}$. This made the prior, lessor phi scaled W at $1\phi^{0} + 1\phi^{-3}$.
This made no sense since the U at $2\phi^{0} + 0\phi^{-3}$ had coefficients of 2 and 0; F[3] and F[0].
How could the lessor W have coefficients of F[2] and F[?] unless of course the Fibonnaci series had to be going on, "left of zero". The lessor W had coefficients of F[2] and F[-1]
[ $\LaTeX$ added]
Here's one way to express the identity I'm studying:
$$
2\left( n-i+1 \right) = \
\sum_{i}^{n} \left( F(i)\phi^{3-i} + F(i-3)\phi^{-i} \right)
$$
Where ...F(-3) = 2, F(-2) = -1, F(-1) = 1, F(0) = 0, F(1) = 1, F(2) = 1... F(i) i.e. the Fibonacci numbers F(i) where i can be a negative integer.
Index i ranges up to n inclusive, i.e. i and n set lower and upper bounds for consecutive enumeration. The two-term expression after the sigma ($\sum$) always yields 2. The difference between i and n therefore determines what multiple of 2 we have reached.
For example, if i = -5, n = -5
$$F(-5)\phi^{8} + F(-8)\phi^{5} == $$
$$5\phi^{8} + -21\phi^{5} == 2 $$
The video, embedded at the end, is incorrect in neglecting to make the 2nd exponent -i instead of i. The video is part 4 of 4. I provide links to the other three as well, which take us step by step through the development of this Notebook, starting with an email from David Koski.
David works with phi-scaled volumes, meaning edges will stretch or shrink by the golden mean and volumes will change accordingly, as a plus or minus 3rd power of phi.
What we aim to accomplish, in this exercise, is showing how the above equality might be treated as a claim, the sense of which needs to be established, followed by its truth or untruth. Instead of looking for a formal proof, we go with extended precision verification, which does not constitute a proof so much as the sense of what is claimed.
That sense would come before truth is part of our grammar, in the same way that what is meaningless cannot be proved or disproved.
Anyway, our equality is far from meaningless and involves two terms always summing to make two, no matter the value of i. As i travels from its starting value to n, inclusive, it contributes that many 2s, i.e. (n - i + 1) of them. If i runs from 0 to 1, that's 2 + 2 = 4 and so on.
Winding the Fibonacci Numbers back to start them with 2, -1 reminds us the Fibonacci Numbers extend in both directions.
End of explanation
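# Spot-check of the i = -5 example quoted above, reusing the high-precision Ø defined earlier:
# F(-5) = 5 and F(-8) = -21, so 5*Ø**8 - 21*Ø**5 should evaluate to 2 (up to precision).
print(5*Ø**8 - 21*Ø**5)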
term1 = "{coeff0:>3} * Ø**({a:>3})"
term2 = "{coeff1:>3} * Ø**({b:>3})"
terms = "{t1:>12.9f} + {t2:>12.9f}"
args = {}
coeff0 = fibo()
coeff1 = fibo(2, -1)
for i in range(10):
args["a"] = 3 - i
args["b"] = -i
args["coeff0"] = next(coeff0)
args["coeff1"] = next(coeff1)
t1 = eval(term1.format(**args))
t2 = eval(term2.format(**args))
print(terms.format(t1=t1, t2=t2), end=" --> ")
print("{:>12.9f}".format(eval(expr.format(**args))))
def sequence(n):
coeff0 = fibo()
coeff1 = fibo(2, -1)
total = 0
for i in range(n+1):
args["a"] = 3 - i
args["b"] = -i
args["coeff0"] = next(coeff0)
args["coeff1"] = next(coeff1)
total += eval(expr.format(**args))
return total
sequence(0)
[sequence(n) for n in range(10)]
Explanation: Let's evaluate the individual terms on either side of the + symbol. They're not both equal to 1, are they? That would mean we're just writing 1 + 1 == 2 every time. But we already know either term may be 0, so clearly we can't expect the same terms in each row.
End of explanation
fib_gen = fibo(-144, 89)
print("SEQ1:",[next(fib_gen) for _ in range(10)])
fib_gen = fibo(610, -377)
print("SEQ2:",[next(fib_gen) for _ in range(10)])
coeff0 = fibo(-144, 89)
coeff1 = fibo(610, -377)
for i in range(-12,8):
args["a"] = 3 - i
args["b"] = -i
args["coeff0"] = next(coeff0)
args["coeff1"] = next(coeff1)
identity = template.format(**args)
print(identity, end=" --> ")
print("{:>20.18f}".format(eval(expr.format(**args))))
terms = "{t1:>18.9f} + {t2:>18.9f}"
coeff0 = fibo(-144, 89)
coeff1 = fibo(610, -377)
for i in range(-12,8):
args["a"] = 3 - i
args["b"] = -i
args["coeff0"] = next(coeff0)
args["coeff1"] = next(coeff1)
t1 = eval(term1.format(**args))
t2 = eval(term2.format(**args))
print(terms.format(t1=t1, t2=t2), end=" --> ")
print("{:>12.9f}".format(eval(expr.format(**args))))
from IPython.display import YouTubeVideo
YouTubeVideo("kXn-cvynsoE")
Explanation: Let's generalize by rewinding further back in the Fibonacci Numbers, starting with coefficients 610 and -144 respectively, and going forward from there. The printout below overlaps the one above, while showing that, indeed, we're free to press this identity in both directions.
Two terms with Fibonacci coefficients, three apart (the first ahead), and exponents of phi three apart (the first three higher), always add to give the number 2, starting with anchoring relationships such as:
$$
2 = fibo(i)\phi^{3-i} + fibo(i-3)\phi^{-i}
$$
We could rewrite the Sigma expression accordingly.
End of explanation |
15,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Huge Monty Hall Bayesian Network
authors
Step1: We'll create the discrete distribution for our friend first.
Step2: The emissions for our guest are completely random.
Step3: Then the distribution for the remaining cars.
Step4: The probability of whether the prize is randomized is dependent on the number of remaining cars.
Step5: Now the conditional probability table for the prize. This is dependent on the guest's friend and whether or not it is randomized.
Step6: Finally we can create the conditional probability table for our Monty. This is dependent on the guest and the prize.
Step7: Now we can create the states for our bayesian network.
Step8: Now we'll create our bayesian network with an instance of BayesianNetwork, then add the possible states.
Step9: Then the possible transitions.
Step10: With a "bake" to finalize the structure of our network.
Step11: Now let's create our network from the following data.
Step12: We can see the results below. Let's look at the distribution for our Friend first.
Step13: Then our Guest.
Step14: Now the remaining cars.
Step15: And the probability the prize is randomized.
Step16: Now the distribution of the Prize.
Step17: And finally our Monty. | Python Code:
import math
from pomegranate import *
Explanation: Huge Monty Hall Bayesian Network
authors:<br>
Jacob Schreiber [<a href="mailto:jmschreiber91@gmail.com">jmschreiber91@gmail.com</a>]<br>
Nicholas Farn [<a href="mailto:nicholasfarn@gmail.com">nicholasfarn@gmail.com</a>]
Let's expand the Bayesian network for the Monty Hall problem in order to make sure that training with all types of wild types works properly.
End of explanation
friend = DiscreteDistribution( { True: 0.5, False: 0.5 } )
Explanation: We'll create the discrete distribution for our friend first.
End of explanation
guest = ConditionalProbabilityTable(
[[ True, 'A', 0.50 ],
[ True, 'B', 0.25 ],
[ True, 'C', 0.25 ],
[ False, 'A', 0.0 ],
[ False, 'B', 0.7 ],
[ False, 'C', 0.3 ]], [friend] )
Explanation: The emissions for our guest are completely random.
End of explanation
remaining = DiscreteDistribution( { 0: 0.1, 1: 0.7, 2: 0.2, } )
Explanation: Then the distribution for the remaining cars.
End of explanation
randomize = ConditionalProbabilityTable(
[[ 0, True , 0.05 ],
[ 0, False, 0.95 ],
[ 1, True , 0.8 ],
[ 1, False, 0.2 ],
[ 2, True , 0.5 ],
[ 2, False, 0.5 ]], [remaining] )
Explanation: The probability of whether the prize is randomized is dependent on the number of remaining cars.
End of explanation
prize = ConditionalProbabilityTable(
[[ True, True, 'A', 0.3 ],
[ True, True, 'B', 0.4 ],
[ True, True, 'C', 0.3 ],
[ True, False, 'A', 0.2 ],
[ True, False, 'B', 0.4 ],
[ True, False, 'C', 0.4 ],
[ False, True, 'A', 0.1 ],
[ False, True, 'B', 0.9 ],
[ False, True, 'C', 0.0 ],
[ False, False, 'A', 0.0 ],
[ False, False, 'B', 0.4 ],
[ False, False, 'C', 0.6]], [randomize, friend] )
Explanation: Now the conditional probability table for the prize. This is dependent on the guest's friend and whether or not it is randomized.
End of explanation
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize] )
Explanation: Finally we can create the conditional probability table for our Monty. This is dependent on the guest and the prize.
End of explanation
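# Optional sanity check in plain Python (no pomegranate API): the three Monty rows that share
# a (guest, prize) context should sum to 1. The values below restate the first context
# from the table above, P(monty = A/B/C | guest = A, prize = A), purely for illustration.
demo_context = [0.0, 0.5, 0.5]
assert abs(sum(demo_context) - 1.0) < 1e-9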
s0 = State( friend, name="friend")
s1 = State( guest, name="guest" )
s2 = State( prize, name="prize" )
s3 = State( monty, name="monty" )
s4 = State( remaining, name="remaining" )
s5 = State( randomize, name="randomize" )
Explanation: Now we can create the states for our bayesian network.
End of explanation
network = BayesianNetwork( "test" )
network.add_states(s0, s1, s2, s3, s4, s5)
Explanation: Now we'll create our bayesian network with an instance of BayesianNetwork, then add the possible states.
End of explanation
network.add_transition( s0, s1 )
network.add_transition( s1, s3 )
network.add_transition( s2, s3 )
network.add_transition( s4, s5 )
network.add_transition( s5, s2 )
network.add_transition( s0, s2 )
Explanation: Then the possible transitions.
End of explanation
network.bake()
Explanation: With a "bake" to finalize the structure of our network.
End of explanation
data = [[ True, 'A', 'A', 'C', 1, True ],
[ True, 'A', 'A', 'C', 0, True ],
[ False, 'A', 'A', 'B', 1, False ],
[ False, 'A', 'A', 'A', 2, False ],
[ False, 'A', 'A', 'C', 1, False ],
[ False, 'B', 'B', 'B', 2, False ],
[ False, 'B', 'B', 'C', 0, False ],
[ True, 'C', 'C', 'A', 2, True ],
[ True, 'C', 'C', 'C', 1, False ],
[ True, 'C', 'C', 'C', 0, False ],
[ True, 'C', 'C', 'C', 2, True ],
[ True, 'C', 'B', 'A', 1, False ]]
network.fit( data )
Explanation: Now let's create our network from the following data.
End of explanation
print(friend)
Explanation: We can see the results below. Let's look at the distribution for our Friend first.
End of explanation
print(guest)
Explanation: Then our Guest.
End of explanation
print(remaining)
Explanation: Now the remaining cars.
End of explanation
print(randomize)
Explanation: And the probability the prize is randomized.
End of explanation
print(prize)
Explanation: Now the distribution of the Prize.
End of explanation
print(monty)
Explanation: And finally our Monty.
End of explanation |
15,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
When we studied lists, we stored fruit in a list named fruits.
Now let's print each fruit one by one.
Step1: They are printed in the order apple, banana, cherry.
After for, we left one space and wrote the name x. You can use any name here; writing abc works too.
Then, after in, we wrote the list fruits from above. Shall we try it again?
Step2: Hmm, that's odd. The result looks strange, doesn't it? That's because we didn't change the x in print(x). Let's fix it.
Step3: Same result. A for statement repeats the work as many times as there are items in fruits.
Now, shall we print 1 through 10?
Step4: This works, but it is too tedious. So | Python Code:
fruits = ["apple", "banana", "cherry"]
for x in fruits:
print(x)
Explanation: When we studied lists, we stored fruit in a list named fruits.
Now let's print each fruit one by one.
End of explanation
fruits = ["apple", "banana", "cherry"]
for abc in fruits:
print(x)
Explanation: They are printed in the order apple, banana, cherry.
After for, we left one space and wrote the name x. You can use any name here; writing abc works too.
Then, after in, we wrote the list fruits from above. Shall we try it again?
End of explanation
fruits = ["apple", "banana", "cherry"]
for abc in fruits:
print(abc)
Explanation: Hmm, that's odd. The result looks strange, doesn't it? That's because we didn't change the x in print(x). Let's fix it.
End of explanation
print(1)
print(2)
print(3)
print(4)
print(5)
print(6)
print(7)
print(8)
print(9)
print(10)
Explanation: Same result. A for statement repeats the work as many times as there are items in fruits.
Now, shall we print 1 through 10?
End of explanation
for x in range(1, 11):
print(x)
Explanation: This works, but it is too tedious. So...
End of explanation |
15,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find torch implementation of this notebook here
Step3: Data
We use the Penn Tree Bank (PTB), which is a small but commonly-used corpus derived from the Wall Street Journal.
Step6: We make a vocabulary, replacing any word that occurs less than 10 times with unk.
Step8: Mikolov suggested keeping word $w$ with probability
$$
\sqrt{\theta / f(w)}
$$
where $\theta=10^{-4}$ is a threshold, and $f(w)=N(w)/N$ is the empirical frequency of word $w$.
Step9: We compare the frequency of certain common and rare words in the original and subsampled data below.
Step10: Let's tokenize the subsampled data.
Step11: Extracting central target words and their contexts
We randomly sample a context length for each central word, up to some maximum length, and then extract all the context words as a list of lists.
Step12: Example. Suppose we have a corpus with 2 sentences of length 7 and 3, and we use a max context of size 2. Here are the centers and contexts.
Step13: Extract context for the full dataset.
Step15: Negative sampling
For speed, we define a sampling class that pre-computes 10,000 random indices from the weighted distribution, using a single call to random.choices, and then sequentially returns elements of this list. If we reach the end of the cache, we refill it.
Step16: Example.
Step17: Now we generate $K$ negatives for each context. These are drawn from $p(w) \propto \text{freq}(w)^{0.75}$.
Step18: Minibatching
Suppose the $i$'th central word has $n_i$ contexts and $m_i$ noise words.
Since $n_i+m_i$ might be different for each $i$ (due to edge effects), the minibatch will be ragged. To fix this, we pad to a maximum length $L$, and then create a validity mask of length $L$, where 0 means invalid location (to be ignored when computing the loss) and 1 means valid location. We assign the label vector to have $n_i$ 1's and $L-n_i$ 0's. (Some of these labels will be masked out.)
Step19: Example. We make a ragged minibatch with 2 examples, and then pad them to a standard size.
Step20: Dataloader
Now we put it altogether.
Step21: Let's print the first minibatch.
Step22: Model
The model just has 2 embedding matrices, $U$ and $V$. The core computation is computing the logits, as shown below.
The center variable has the shape (batch size, 1), while the contexts_and_negatives variable has the shape (batch size, max_len). These get embedded into size $(B,1,E)$ and $(B,L,E)$. We permute the latter to $(B,E,L)$ and use matrix multiplication to get $(B,1,L)$ matrix of inner products between each center's embedding and each context's embedding.
Step23: Example. Assume the vocab size is 20 and we use $E=4$ embedding dimensions.
We compute the logits for a minibatch of $B=2$ sequences, with max length $L=4$.
Step25: Loss
We use masked binary cross entropy loss.
Step26: Different masks can lead to different results.
If we normalize by the number of valid masked entries, then predictions with the same per-token accuracy will score the same.
Step32: Training
Step33: Test
We find the $k$ nearest words to the query, where we measure similarity using cosine similarity
$$\text{sim} = \frac{x^T y}{||x|| \; ||y||}$$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
import os
import random
random.seed(0)
import jax
import jax.numpy as jnp
try:
from flax import linen as nn
except ModuleNotFoundError:
%pip install -qq flax
from flax import linen as nn
from flax.training import train_state
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
import requests
import zipfile
import tarfile
import hashlib
import collections
from IPython import display
import time
!mkdir figures # for saving plots
Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/20/skipgram_torch.ipynb
Learning word embeddings using skipgram with negative sampling.
Based on D2L 14.3 http://d2l.ai/chapter_natural-language-processing-pretraining/word-embedding-dataset.html and 14.4 of http://d2l.ai/chapter_natural-language-processing-pretraining/word2vec-pretraining.html.
End of explanation
# Required functions for downloading data
def download(name, cache_dir=os.path.join("..", "data")):
    """Download a file inserted into DATA_HUB, return the local filename."""
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split("/")[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, "rb") as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f"Downloading {fname} from {url}...")
r = requests.get(url, stream=True, verify=True)
with open(fname, "wb") as f:
f.write(r.content)
return fname
def download_extract(name, folder=None):
    """Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == ".zip":
fp = zipfile.ZipFile(fname, "r")
elif ext in (".tar", ".gz"):
fp = tarfile.open(fname, "r")
else:
assert False, "Only zip/tar files can be extracted."
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
DATA_HUB = dict()
DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/"
DATA_HUB["ptb"] = (DATA_URL + "ptb.zip", "319d85e578af0cdc590547f26231e4e31cdf1e42")
def read_ptb():
data_dir = download_extract("ptb")
with open(os.path.join(data_dir, "ptb.train.txt")) as f:
raw_text = f.read()
return [line.split() for line in raw_text.split("\n")]
sentences = read_ptb()
f"# sentences: {len(sentences)}"
Explanation: Data
We use the Penn Tree Bank (PTB), which is a small but commonly-used corpus derived from the Wall Street Journal.
End of explanation
class Vocab:
    """Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
def count_corpus(tokens):
    """Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
vocab = Vocab(sentences, min_freq=10)
f"vocab size: {len(vocab)}"
Explanation: We make a vocabulary, replacing any word that occurs less than 10 times with unk.
End of explanation
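As a quick sanity check (a sketch using the Vocab class defined above), we can map a few tokens to indices and back; unseen words fall back to the <unk> index 0.
print(vocab[["the", "of", "some-unseen-word"]])  # token -> index
print(vocab.to_tokens([0, 1, 2]))                # index -> token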
def count_corpus(tokens):
    """Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
def subsampling(sentences, vocab):
# Map low frequency words into <unk>
sentences = [[vocab.idx_to_token[vocab[tk]] for tk in line] for line in sentences]
# Count the frequency for each word
counter = count_corpus(sentences)
num_tokens = sum(counter.values())
# Return True if to keep this token during subsampling
def keep(token):
return random.uniform(0, 1) < math.sqrt(1e-4 / counter[token] * num_tokens)
# Now do the subsampling
return [[tk for tk in line if keep(tk)] for line in sentences]
subsampled = subsampling(sentences, vocab)
Explanation: Mikolov suggested keeping word $w$ with probability
$$
\sqrt{\theta / f(w)}
$$
where $\theta=10^{-4}$ is a threshold, and $f(w)=N(w)/N$ is the empirical frequency of word $w$.
End of explanation
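For intuition, here is a small numeric check of the keep probability (a sketch; the counts below are hypothetical, not taken from PTB). Very frequent words are kept only a few percent of the time, while rare words are always kept.
theta = 1e-4
num_tokens = 1_000_000                 # hypothetical corpus size
for count in [50_000, 1_000, 50]:      # hypothetical word counts
    freq = count / num_tokens
    keep_prob = min(1.0, math.sqrt(theta / freq))
    print(f"count={count:>6}  keep probability ~ {keep_prob:.3f}")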
def compare_counts(token):
return (
f'# of "{token}": '
f"before={sum([line.count(token) for line in sentences])}, "
f"after={sum([line.count(token) for line in subsampled])}"
)
print(compare_counts("the"))
print(compare_counts("join"))
Explanation: We compare the frequency of certain common and rare words in the original and subsampled data below.
End of explanation
corpus = [vocab[line] for line in subsampled]
print(corpus[0:3])
Explanation: Let's tokenize the subsampled data.
End of explanation
def get_centers_and_contexts(corpus, max_window_size):
centers, contexts = [], []
for line in corpus:
# Each sentence needs at least 2 words to form a "central target word
# - context word" pair
if len(line) < 2:
continue
centers += line
for i in range(len(line)): # Context window centered at i
window_size = random.randint(1, max_window_size)
indices = list(range(max(0, i - window_size), min(len(line), i + 1 + window_size)))
# Exclude the central target word from the context words
indices.remove(i)
contexts.append([line[idx] for idx in indices])
return centers, contexts
Explanation: Extracting central target words and their contexts
We randomly sample a context length for each central word, up to some maximum length, and then extract all the context words as a list of lists.
End of explanation
tiny_dataset = [list(range(7)), list(range(7, 10))]
print("dataset", tiny_dataset)
for center, context in zip(*get_centers_and_contexts(tiny_dataset, 2)):
print("center", center, "has contexts", context)
Explanation: Example. Suppose we have a corpus with 2 sentences of length 7 and 3, and we use a max context of size 2. Here are the centers and contexts.
End of explanation
all_centers, all_contexts = get_centers_and_contexts(corpus, 5)
f"# center-context pairs: {len(all_centers)}"
Explanation: Extract context for the full dataset.
End of explanation
class RandomGenerator:
    """Draw a random int in [0, n] according to n sampling weights."""
def __init__(self, sampling_weights):
self.population = list(range(len(sampling_weights)))
self.sampling_weights = sampling_weights
self.candidates = []
self.i = 0
def draw(self):
if self.i == len(self.candidates):
self.candidates = random.choices(self.population, self.sampling_weights, k=10000)
self.i = 0
self.i += 1
return self.candidates[self.i - 1]
Explanation: Negative sampling
For speed, we define a sampling class that pre-computes 10,000 random indices from the weighted distribution, using a single call to random.choices, and then sequentially returns elements of this list. If we reach the end of the cache, we refill it.
End of explanation
generator = RandomGenerator([2, 3, 4])
[generator.draw() for _ in range(10)]
Explanation: Example.
End of explanation
def get_negatives(all_contexts, corpus, K):
counter = count_corpus(corpus)
sampling_weights = [counter[i] ** 0.75 for i in range(len(counter))]
all_negatives, generator = [], RandomGenerator(sampling_weights)
for contexts in all_contexts:
negatives = []
while len(negatives) < len(contexts) * K:
neg = generator.draw()
# Noise words cannot be context words
if neg not in contexts:
negatives.append(neg)
all_negatives.append(negatives)
return all_negatives
all_negatives = get_negatives(all_contexts, corpus, 5)
Explanation: Now we generate $K$ negatives for each context. These are drawn from $p(w) \propto \text{freq}(w)^{0.75}$.
End of explanation
def batchify(data):
max_len = max(len(c) + len(n) for _, c, n in data)
centers, contexts_negatives, masks, labels = [], [], [], []
for center, context, negative in data:
cur_len = len(context) + len(negative)
centers += [center]
contexts_negatives += [context + negative + [0] * (max_len - cur_len)]
masks += [[1] * cur_len + [0] * (max_len - cur_len)]
labels += [[1] * len(context) + [0] * (max_len - len(context))]
return (np.array(centers).reshape((-1, 1)), np.array(contexts_negatives), np.array(masks), np.array(labels))
Explanation: Minibatching
Suppose the $i$'th central word has $n_i$ contexts and $m_i$ noise words.
Since $n_i+m_i$ might be different for each $i$ (due to edge effects), the minibatch will be ragged. To fix this, we pad to a maximum length $L$, and then create a validity mask of length $L$, where 0 means invalid location (to be ignored when computing the loss) and 1 means valid location. We assign the label vector to have $n_i$ 1's and $L-n_i$ 0's. (Some of these labels will be masked out.)
End of explanation
x_1 = (1, [2, 2], [3, 3, 3, 3])
x_2 = (1, [2, 2, 2], [3, 3])
batch = batchify((x_1, x_2))
names = ["centers", "contexts_negatives", "masks", "labels"]
for name, data in zip(names, batch):
print(name, "=", data)
Explanation: Example. We make a ragged minibatch with 2 examples, and then pad them to a standard size.
End of explanation
def load_data_ptb(batch_size, max_window_size, num_noise_words):
num_workers = 2
sentences = read_ptb()
vocab = Vocab(sentences, min_freq=10)
subsampled = subsampling(sentences, vocab)
corpus = [vocab[line] for line in subsampled]
all_centers, all_contexts = get_centers_and_contexts(corpus, max_window_size)
all_negatives = get_negatives(all_contexts, corpus, num_noise_words)
class PTBDataset(torch.utils.data.Dataset):
def __init__(self, centers, contexts, negatives):
assert len(centers) == len(contexts) == len(negatives)
self.centers = centers
self.contexts = contexts
self.negatives = negatives
def __getitem__(self, index):
return (self.centers[index], self.contexts[index], self.negatives[index])
def __len__(self):
return len(self.centers)
dataset = PTBDataset(all_centers, all_contexts, all_negatives)
data_iter = torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, collate_fn=batchify, num_workers=num_workers
)
return data_iter, vocab
Explanation: Dataloader
Now we put it altogether.
End of explanation
data_iter, vocab = load_data_ptb(512, 5, 5)
for batch in data_iter:
for name, data in zip(names, batch):
print(name, "shape:", data.shape)
break
batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = load_data_ptb(batch_size, max_window_size, num_noise_words)
Explanation: Let's print the first minibatch.
End of explanation
class SkipGram(nn.Module):
vocab_size: int
embed_size: int
@nn.compact
def __call__(self, center, contexts_and_negatives):
v = nn.Embed(self.vocab_size, self.embed_size)(center)
u = nn.Embed(self.vocab_size, self.embed_size)(contexts_and_negatives)
pred = jax.vmap(jnp.matmul)(v, u.transpose(0, 2, 1))
return pred
Explanation: Model
The model just has 2 embedding matrices, $U$ and $V$. The core computation is computing the logits, as shown below.
The center variable has the shape (batch size, 1), while the contexts_and_negatives variable has the shape (batch size, max_len). These get embedded into size $(B,1,E)$ and $(B,L,E)$. We permute the latter to $(B,E,L)$ and use matrix multiplication to get $(B,1,L)$ matrix of inner products between each center's embedding and each context's embedding.
End of explanation
center = jnp.ones((2, 1), dtype=jnp.int32)
contexts_and_negatives = jnp.ones((2, 4), dtype=jnp.int32)
skip_gram = SkipGram(20, 4)
variables = skip_gram.init(jax.random.PRNGKey(0), center, contexts_and_negatives)
print(
f"Parameter embedding_weight ({variables['params']['Embed_0']['embedding'].shape}, "
f"dtype={variables['params']['Embed_0']['embedding'].dtype})"
)
skip_gram.apply(variables, center, contexts_and_negatives).shape
Explanation: Example. Assume the vocab size is 20 and we use $E=4$ embedding dimensions.
We compute the logits for a minibatch of $B=2$ sequences, with max length $L=4$.
End of explanation
def sigmoid_bce_loss(inputs, target, mask=None):
    """BCE with logit loss, based on https://github.com/pytorch/pytorch/blob/1522912602bc4cc5f7adbce66cad00ebb436f195/aten/src/ATen/native/Loss.cpp#L317"""
max_val = jnp.clip(-inputs, 0, None)
loss = (1 - target) * inputs + max_val + jnp.log(jnp.exp(-max_val) + jnp.exp((-inputs - max_val)))
if mask is not None:
loss = loss * mask
return loss.mean(axis=1)
pred = jnp.array([[0.5] * 4] * 2)
label = jnp.array([[1.0, 0.0, 1.0, 0.0]] * 2)
mask = jnp.array([[1, 1, 1, 1], [1, 1, 0, 0]])
sigmoid_bce_loss(pred, label, mask)
Explanation: Loss
We use masked binary cross entropy loss.
End of explanation
sigmoid_bce_loss(pred, label, mask) / mask.sum(axis=1) * mask.shape[1]
Explanation: Different masks can lead to different results.
If we normalize by the number of valid masked entries, then predictions with the same per-token accuracy will score the same.
End of explanation
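For reference (assuming optax's sigmoid_binary_cross_entropy helper, which computes the same per-element loss from logits), the masked loss above could also be written as a short sketch:
def sigmoid_bce_loss_optax(inputs, target, mask=None):
    loss = optax.sigmoid_binary_cross_entropy(inputs, target)  # element-wise BCE from logits
    if mask is not None:
        loss = loss * mask
    return loss.mean(axis=1)
# should closely match sigmoid_bce_loss(pred, label, mask)
print(sigmoid_bce_loss_optax(pred, label, mask))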
# Functions for plotting and accumulating sum
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    """Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
class Animator:
    """For plotting data in animation."""
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Accumulator:
    """For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def create_train_state(rng, learning_rate, embed_size):
    """Creates initial `TrainState`."""
net = SkipGram(len(vocab), embed_size)
params = net.init(
rng,
jnp.ones([1, 1], dtype=jnp.int32),
jnp.ones([1, max_window_size * 2 * (num_noise_words + 1)], dtype=jnp.int32),
)["params"]
tx = optax.adam(learning_rate)
return train_state.TrainState.create(apply_fn=net.apply, params=params, tx=tx)
@jax.jit
def train_step(state, batch):
    """Train for a single step."""
center, context_negative, mask, label = batch
def loss_fn(params):
pred = state.apply_fn({"params": params}, center, context_negative)
loss = sigmoid_bce_loss(pred.reshape(label.shape), label, mask) / mask.sum(axis=1) * mask.shape[1]
return jnp.sum(loss)
grad_fn = jax.value_and_grad(loss_fn)
loss, grads = grad_fn(state.params)
state = state.apply_gradients(grads=grads)
return state, loss
def train(data_iter, lr, num_epochs, embed_size):
rng = jax.random.PRNGKey(1)
state = create_train_state(rng, lr, embed_size)
animator = Animator(xlabel="epoch", ylabel="loss", xlim=[1, num_epochs])
metric = Accumulator(2) # Sum of losses, no. of tokens
for epoch in range(num_epochs):
seconds, num_batches = time.time(), len(data_iter)
for i, batch in enumerate(data_iter):
state, loss = train_step(state, list(map(jnp.array, batch)))
metric.add(loss, batch[0].shape[0])
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (metric[0] / metric[1],))
device = jax.default_backend()
print(
f"loss {metric[0] / metric[1]:.3f}, " f"{metric[1] / (time.time() - seconds):.1f} tokens/sec on {str(device)}"
)
return state
embed_size = 100
lr, num_epochs = 0.01, 5
state = train(data_iter, lr, num_epochs, embed_size)
Explanation: Training
End of explanation
def get_similar_tokens(query_token, k, state):
W = state.params["Embed_0"]["embedding"]
x = W[vocab[query_token]]
# Compute the cosine similarity. Add 1e-9 for numerical stability
cos = jnp.dot(W, x) / jnp.sqrt(jnp.sum(W * W, axis=1) * jnp.sum(x * x) + 1e-9)
topk = jax.lax.top_k(cos, k + 1)[1]
for i in topk[1:]: # Remove the input words
print(f"cosine sim={float(cos[i]):.3f}: {vocab.idx_to_token[i]}")
get_similar_tokens("chip", 3, state)
get_similar_tokens("president", 3, state)
get_similar_tokens("dog", 3, state)
Explanation: Test
We find the $k$ nearest words to the query, where we measure similarity using cosine similarity
$$\text{sim} = \frac{x^T y}{||x|| \; ||y||}$$
End of explanation |
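As a tiny illustration of the similarity formula (a sketch with made-up vectors):
a = jnp.array([1.0, 0.0, 1.0])
b = jnp.array([2.0, 0.0, 2.0])
cos_sim = jnp.dot(a, b) / (jnp.linalg.norm(a) * jnp.linalg.norm(b))
print(cos_sim)  # 1.0, since the vectors point in the same direction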
15,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to python
Basic commands
Hello and welcome to the wonderful world of Python. Each of these cells can be copy and pasted into your own notebook. There are code cells and text cells. The code cells execute Python commands. The text cells give some helpful advice along the way. Copy and paste, or retype, the code cells into your own notebook and run them.
This notebook is meant to be a quick introduction to python. We have included some helpful links to other resources along the way. Here are some other tutorials that you can use in your own time
Step1: You can do math - any output on the last line of the cell will print to the screen
Step2: You can print anything by passing it to the print function.
Functions
Step3: Save results to a variable. Variables work like a math equation. The variable name is on the left and whatever it is equal to is on the right.
Step4: We can see the type of the output by passing it to the type function.
Step5: There are many types in python, but the main ones for values are
Step6: Exercise
Step7: Loops
Step8: We can also loop through two lists by using the zip command. More about zip here
Step9: Exercise
Step10: Functions
If we want to do something more complicated we can define a function.
Step11: Exercise
Step12: Dictionaries
In addition to lists, python also has a collection type called a 'dictionary'. These hold what are called key-value pairs. Each key gives a certain value (although note that a value here could be a single number or a list or even another dictionary). These are very useful when you have a bunch of data you want to store.
Step13: We can add to dictionaries by using the square brackets.
Step14: We can get values out of a dictionary (accessing) by using the square brackets again.
Step15: We are not limited to strings. Dictionaries can have many types as their keys, and have many types as their values.
Step16: Exercise
What happens when you try and access an entry in a dictionary that doesn't exist?
Numpy and packages
Sometimes we want to go beyond what python can do by default. We can do this by importing 'packages'
More resources on numpy
Step17: We can add arrays together - this will add each element of each array to the corresponding element in the other array. This type of operation is called an 'element-wise' operation and can save you from having to write loops.
Step18: Numpy arrays can be multi-dimensional. Let's focus on 2D arrays, which are used in Illustris. Note
Step19: We can use indexing to get a certain row or column
Step20: We can compute statistics for the whole array or along different dimensions.
Step21: Exercise | Python Code:
# This line is a comment -- it does nothing
# you can add comments using the '#' symbol
Explanation: Intro to python
Basic commands
Hello and welcome to the wonderful world of Python. Each of these cells can be copy and pasted into your own notebook. There are code cells and text cells. The code cells execute Python commands. The text cells give some helpful advice along the way. Copy and paste, or retype, the code cells into your own notebook and run them.
This notebook is meant to be a quick introduction to python. We have included some helpful links to other resources along the way. Here are some other tutorials that you can use in your own time:
http://introtopython.org/hello_world.html
https://www.datacamp.com/courses/intro-to-python-for-data-science
Below is the first code cell.
End of explanation
1+1
3*5 # this will not print
14 % 3 # modulo (remainder) operator - this will print
Explanation: You can do math - any output on the last line of the cell will print to the screen
End of explanation
print(3*5)
print(2**4) # powers use the double star symbol
Explanation: You can print anything by passing it to the print function.
Functions: a function takes some arguments and returns one or more values. They can let you write a complicated test and use it over and over again. Call a function by typing its name with parentheses. Arguments for the function go inside the parentheses.
End of explanation
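A couple of built-in functions in action (a small extra sketch): the function name, parentheses, and the arguments inside them.
print(len("hello"))       # number of characters in the string -> 5
print(abs(-7))            # absolute value -> 7
print(round(3.14159, 2))  # round to 2 decimal places -> 3.14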
output = 1+1
Explanation: Save results to a variable. Variables work like a math equation. The variable name is on the left and whatever it is equal to is on the right.
End of explanation
type(output)
Explanation: We can see the type of the output by passing it to the type function.
End of explanation
type(1.+1.2)
1.0+1.2
# we can compare numbers using comparison operators - these return a value of either True or False (boolean type)
1 > 2 # is one greater than two?
Explanation: There are many types in python, but the main ones for values are: 'int' for integers, 'float' for any number with a decimal place, and 'str' for a collection of characters or words. Groups of values can also take on different types, but more on that later.
End of explanation
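A few more quick type checks (extra sketch, not in the original lesson):
print(type(42))        # int
print(type(3.0))       # float
print(type("hello"))   # str
print(type(1 > 2))     # bool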
# and we can use 'strings' - text
poem = 'Spiral galaxy; Plane splashed across the night sky; Gives us perspective'
#We can collect together a bunch of numbers or strings into a list
favorite_primes = [1, 3, 5, 7, 11]
type(favorite_primes)
#and access them using square brackets
favorite_primes[0] # <-- '[0]' will select the first number
# [-1] will select the last number
favorite_primes[-1]
# we can also select a range of numbers
favorite_primes[::2] # <-- select every other element
favorite_primes[:2] # select the first two elements
favorite_primes[2:] # select from the 3rd element to the last element
Explanation: Exercise:
What type do you get when you add together a float and an int?
Strings and lists
End of explanation
# we can do things multiple times in a loop:
for prime in favorite_primes: # loop through and get each element of the list
print(prime, prime**2) # print each element and the square of that element
# for loops are one way to loop - while loops are another
# careful! while loops can sometimes loop forever - check that they have a stopping criteria
i = 0 # start at some value
while i < 10: # will loop until this condition evaluates to True
print(i)
i = i + 1
Explanation: Loops
End of explanation
# lets first make a second list
favorite_largenumbers = [10, 300, 5e+5, 7000, 2**32] # note here that python ints can be very large with no problem
for large_number, prime in zip(favorite_largenumbers, favorite_primes):
print(large_number, prime)
Explanation: We can also loop through two lists by using the zip command. More about zip here: https://www.programiz.com/python-programming/methods/built-in/zip (somewhat technical).
End of explanation
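A handy trick (sketch): zip can also pair two lists into a dictionary, a structure we will meet properly in the dictionaries section below.
paired = dict(zip(favorite_primes, favorite_largenumbers))
print(paired)  # {1: 10, 3: 300, 5: 500000.0, 7: 7000, 11: 4294967296}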
# make a new list that has only four numbers
least_favorite_numbers = [-1, 0, 1, 2]
for bad_number, prime in zip(<MODIFY THIS PART>):
print(bad_number, prime)
Explanation: Exercise:
Modify the above code to loop through a list of my least favorite numbers AND the primes. What is the largest number that will print below?
End of explanation
def square(number):
# this function will take a number and return its square
return number**2
print(square(3))
# to make the function more general we will include a keyword argument - this argument has a default value and can be changed by the user
def raise_to_power(number, power=2):
return number**power
print(raise_to_power(3)) # with default arguments this will square it
print(raise_to_power(3, power=3)) # with a new argument this will return cubic
Explanation: Functions
If we want to do something more complicated we can define a function.
End of explanation
print(raise_to_power(<MODIFY THIS SOMEHOW>))
Explanation: Exercise:
What happens when you use the above function, but with a string as the arguments?
End of explanation
definitions = {} # here we are using the squiggly brackets to make an empty dictionary
Explanation: Dictionaries
In addition to lists, python also has a collection type called a 'dictionary'. These hold what are called key-value pairs. Each key gives a certain value (although note that a value here could be a single number or a list or even another dictionary). These are very useful when you have a bunch of data you want to store.
End of explanation
# add an entry for cosmology
definitions['cosmology'] = 'the branch of astronomy that deals with the general structure and evolution of the universe.'
# and for universe
definitions['universe'] = 'the totality of known or supposed objects and phenomena throughout space; the cosmos; macrocosm.'
Explanation: We can add to dictionaries by using the square brackets.
End of explanation
definitions['cosmology']
Explanation: We can get values out of a dictionary (accessing) by using the square brackets again.
End of explanation
# here we are using the curly braces to make a dictionary of constants. The 'e' syntax is shorthand for 'x10^', so 1e-1 is 0.1
constants_cgs = {'G': 6.67259e-8, 'h': 6.6260756e-27, 'k': 1.380658e-16}
Explanation: We are not limited to strings. Dictionaries can have many types as their keys, and have many types as their values.
End of explanation
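To see everything stored in a dictionary we can loop over its key-value pairs (a sketch); and as mentioned above, values can themselves be lists or even other dictionaries.
for name, value in constants_cgs.items():
    print(name, "=", value)
# values don't have to be single numbers
solar_system = {"inner planets": ["Mercury", "Venus", "Earth", "Mars"],
                "sun": {"mass_g": 1.989e33, "radius_cm": 6.957e10}}
print(solar_system["sun"]["mass_g"])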
import numpy as np # now we have access to a range of new functions that work with numerical data
# we can make an 'array' -- this is similar to a list but has some advantages
array_of_primes = np.array([1, 3, 5, 7, 11])
# you can do math on the entire array
array_of_primes + 1
# CAREFUL: this only works with numpy arrays. This will not work with lists!! Pay attention to the type that you are working with.
# Illustris data uses numpy arrays mostly, but it is always good to check.
print(type(array_of_primes), type(favorite_primes))
# We can see some info on the size and shape of the array:
print(array_of_primes.shape, array_of_primes.ndim)
# and generate arrays with values
array_of_evens = np.arange(2, 12, 2) # array starting at 2, ending at 12 (exclusive) in steps of 2
Explanation: Exercise
What happens when you try and access an entry in a dictionary that doesn't exist?
Numpy and packages
Sometimes we want to go beyond what python can do by default. We can do this by importing 'packages'
More resources on numpy: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
End of explanation
array_of_primes + array_of_evens
# we can also compare element-wise:
array_of_evens > array_of_primes
# we can use these arrays of boolean values to select values of interest from an array
array_of_evens[array_of_evens > array_of_primes] # select only the even numbers that are greater than corresponding prime numbers
#Or we can use a 'where' function to get the corresponding indices.
np.where(array_of_evens > array_of_primes)
indices = np.where(array_of_evens > array_of_primes)
array_of_evens[indices]
Explanation: We can add arrays together - this will add each element of each array to the corresponding element in the other array. This type of operation is called an 'element-wise' operation and can save you from having to write loops.
End of explanation
velocities = np.random.rand(15).reshape(5, 3) # make an array of 15 random values and reshape them into a 2D array with five rows and three columns
#lets examine the results! This will be different for each person
velocities
Explanation: Numpy arrays can be multi-dimensional. Let's focus on 2D arrays, which are used in Illustris. Note: Python is row major, meaning that the vertical dimension is accessed first.
End of explanation
velocities[:, 0] # get all values (the ':' character) from the first column
velocities[1, :] # get all values from the second row
Explanation: We can use indexing to get a certain row or column
End of explanation
# print the mean, minimum, and maximum values of the array
velocities.mean(), velocities.min(), velocities.max()
# print the mean in each of the columns - should be a 1D array with three values
velocities.mean(axis=0)
Explanation: We can compute statistics for the whole array or along different dimensions.
End of explanation
import matplotlib.pyplot as plt # lets import some plotting tools and give it a helpful name
# this next fancy 'magic line' lets us plot right here in the notebook
%matplotlib inline
#making a simple plot is easy - just tell the 'plot' function what your x and y values are - by default it makes a line
x = np.arange(10)
y = x**2
plt.plot(x, y)
# or we can make them points by setting some options
plt.plot(x, y, marker='.', linestyle='none') # turning the line to none and the marker to a period
Explanation: Exercise:
How would you modify the above line to see the average for each row?
Plotting
There are many ways to plot in python. I will show the basics of matplotlib - more information can be found here: https://matplotlib.org/users/pyplot_tutorial.html
End of explanation |
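A few finishing touches that you will almost always want on a plot (sketch): axis labels, a title, and a legend.
plt.plot(x, y, marker='o', linestyle='-', label='x squared')
plt.xlabel('x')
plt.ylabel('y')
plt.title('A simple labelled plot')
plt.legend()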
15,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
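As a hint for the first task (a sketch only; the actual implementation belongs in your my_answers.py), the sigmoid activation and its derivative look like this:
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
    # derivative of the sigmoid, written in terms of its output
    s = sigmoid(x)
    return s * (1 - s)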
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
####################
### Set the hyperparameters in you myanswers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
End of explanation
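Purely as an illustration (these values are an assumption for a first attempt, not the tuned answer), the hyperparameters you define in my_answers.py might start out something like this before you experiment:
iterations = 3000
learning_rate = 0.5
hidden_nodes = 10
output_nodes = 1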
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
15,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combining Filters
Like factors, filters can be combined. Combining filters is done using the & (and) and | (or) operators. For example, let's say we want to screen for securities that are in the top 10% of average dollar volume and have a latest close price above $20. To start, let's make a high dollar volume filter using an AverageDollarVolume factor and percentile_between
Step1: Note
Step2: Now we can combine our high_dollar_volume filter with our above_20 filter using the & operator
Step3: This filter will evaluate to True for securities where both high_dollar_volume and above_20 are True. Otherwise, it will evaluate to False. A similar computation can be made with the | (or) operator.
If we want to use this filter as a screen in our pipeline, we can simply pass tradeable_filter as the screen argument.
Step4: When we run this, our pipeline output now only includes ~700 securities. | Python Code:
dollar_volume = AverageDollarVolume(window_length=30)
high_dollar_volume = dollar_volume.percentile_between(90, 100)
Explanation: Combining Filters
Like factors, filters can be combined. Combining filters is done using the & (and) and | (or) operators. For example, let's say we want to screen for securities that are in the top 10% of average dollar volume and have a latest close price above $20. To start, let's make a high dollar volume filter using an AverageDollarVolume factor and percentile_between:
End of explanation
latest_close = USEquityPricing.close.latest
above_20 = latest_close > 20
Explanation: Note: percentile_between is a Factor method returning a Filter.
Next, let's create a latest_close factor and define a filter for securities that closed above $20:
End of explanation
tradeable_filter = high_dollar_volume & above_20
Explanation: Now we can combine our high_dollar_volume filter with our above_20 filter using the & operator:
End of explanation
def make_pipeline():
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
percent_difference = (mean_close_10 - mean_close_30) / mean_close_30
dollar_volume = AverageDollarVolume(window_length=30)
high_dollar_volume = dollar_volume.percentile_between(90, 100)
latest_close = USEquityPricing.close.latest
above_20 = latest_close > 20
tradeable_filter = high_dollar_volume & above_20
return Pipeline(
columns={
'percent_difference': percent_difference
},
screen=tradeable_filter
)
Explanation: This filter will evaluate to True for securities where both high_dollar_volume and above_20 are True. Otherwise, it will evaluate to False. A similar computation can be made with the | (or) operator.
If we want to use this filter as a screen in our pipeline, we can simply pass tradeable_filter as the screen argument.
End of explanation
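For completeness, a sketch of the other combinators mentioned above (| gives the logical or of two filters, and ~ gives the negation of a filter):
either_filter = high_dollar_volume | above_20   # True if at least one condition holds
not_high_volume = ~high_dollar_volume           # True for securities outside the top decile of dollar volume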
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print('Number of securities that passed the filter: %d' % len(result))
result
Explanation: When we run this, our pipeline output now only includes ~700 securities.
End of explanation |
15,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Crossentropy method
This notebook will teach you to solve reinforcement learning problems with crossentropy method. We'll follow-up by scaling everything up and using neural network policy.
Step1: Create stochastic policy
This time our policy should be a probability distribution.
policy[s,a] = P(take action a | in state s)
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize policy uniformly, that is, probabilities of all actions should be equal.
Step3: Play the game
Just like before, but we also record all states and actions we took.
Step6: Crossentropy method steps
Step8: Training loop
Generate sessions, select N best and fit to those.
Step9: Reflecting on results
You may have noticed that the taxi problem quickly converges from <-1000 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
In case CEM failed to learn how to win from one distinct starting point, it will simply discard it because no sessions from that starting point will make it into the "elites".
To mitigate that problem, you can either reduce the threshold for elite sessions (duct tape way) or change the way you evaluate strategy (theoretically correct way). You can first sample an action for every possible state and then evaluate this choice of actions by running several games and averaging rewards.
Submit to coursera | Python Code:
# In Google Colab, uncomment this:
# !wget https://bit.ly/2FMJP5K -O setup.py && bash setup.py
# XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
env = gym.make("Taxi-v2")
env.reset()
env.render()
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
Explanation: Crossentropy method
This notebook will teach you to solve reinforcement learning problems with crossentropy method. We'll follow-up by scaling everything up and using neural network policy.
End of explanation
policy = np.ones(shape=(n_states, n_actions)) * 1 / n_actions
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
Explanation: Create stochastic policy
This time our policy should be a probability distribution.
policy[s,a] = P(take action a | in state s)
Since we still use integer state and action representations, you can use a 2-dimensional array to represent the policy.
Please initialize the policy uniformly, that is, the probabilities of all actions should be equal.
End of explanation
def generate_session(policy, t_max=10**4):
"""
Play game until end or for t_max ticks.
:param policy: an array of shape [n_states,n_actions] with action probabilities
:returns: list of states, list of actions and sum of rewards
"""
states, actions = [], []
total_reward = 0.
s = env.reset()
def sample_action(policy, s):
action_p = policy[s, :].reshape(-1,)
#highest_p_actions = np.argwhere(action_p == np.amax(action_p)).reshape(-1,)
#non_zero_p_actions = np.argwhere(action_p > 0).reshape(-1,)
#random_choice = np.random.choice(highest_p_actions)
#random_choice = np.random.choice(non_zero_p_actions)
random_choice = np.random.choice(np.arange(len(action_p)), p=action_p)
return random_choice
for t in range(t_max):
a = sample_action(policy, s) #<sample action from policy(hint: use np.random.choice) >
new_s, r, done, info = env.step(a)
# Record state, action and add up reward to states,actions and total_reward accordingly.
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert type(r) in [float, np.float]
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label="90'th percentile", color='red')
plt.legend()
Explanation: Play the game
Just like before, but we also record all states and actions we took.
End of explanation
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states, elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
#<Compute minimum reward for elite sessions. Hint: use np.percentile >
reward_threshold = np.percentile(rewards_batch, percentile)
#elite_states = <your code here >
#elite_actions = <your code here >
elite_states = []
elite_actions = []
for i, reward in enumerate(rewards_batch):
if reward >= reward_threshold:
elite_states = elite_states + states_batch[i]
elite_actions = elite_actions + actions_batch[i]
return elite_states, elite_actions
states_batch = [
[1, 2, 3], # game1
[4, 2, 0, 2], # game2
[3, 1], # game3
]
actions_batch = [
[0, 2, 4], # game1
[3, 2, 0, 1], # game2
[3, 3], # game3
]
rewards_batch = [
3, # game1
4, # game2
5, # game3
]
test_result_0 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=0)
test_result_40 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=30)
test_result_90 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=90)
test_result_100 = select_elites(
states_batch, actions_batch, rewards_batch, percentile=100)
assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \
and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\
"For percentile 0 you should return all states and actions in chronological order"
assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \
np.all(test_result_40[1] == [3, 2, 0, 1, 3, 3]),\
"For percentile 30 you should only select states/actions from two first"
assert np.all(test_result_90[0] == [3, 1]) and \
np.all(test_result_90[1] == [3, 3]),\
"For percentile 90 you should only select states/actions from one game"
assert np.all(test_result_100[0] == [3, 1]) and\
np.all(test_result_100[1] == [3, 3]),\
"Please make sure you use >=, not >. Also double-check how you compute percentile."
print("Ok!")
def update_policy(elite_states, elite_actions):
"""
Given old policy and a list of elite states/actions from select_elites,
return new updated policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurrences of s_i and a_i in elite states/actions]
Don't forget to normalize policy to get valid probabilities and handle 0/0 case.
In case you never visited a state, set probabilities for all actions to 1./n_actions
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
#<Your code here: update probabilities for actions given elite states & actions >
# Don't forget to set 1/n_actions for all actions in unvisited states.
for state, action in zip(elite_states, elite_actions):
new_policy[state, action] = new_policy[state, action] + 1
for state in range(n_states):
s = np.sum(new_policy[state, :])
if s == 0:
new_policy[state, :] = 1. / n_actions
else:
new_policy[state, :] = new_policy[state, :] / s
return new_policy
elite_states = [1, 2, 3, 4, 2, 0, 2, 3, 1]
elite_actions = [0, 2, 4, 3, 2, 0, 1, 3, 3]
new_policy = update_policy(elite_states, elite_actions)
assert np.isfinite(new_policy).all(
), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero."
assert np.all(
new_policy >= 0), "Your new policy can't have negative action probabilities"
assert np.allclose(new_policy.sum(
axis=-1), 1), "Your new policy should be a valid probability distribution over actions"
reference_answer = np.array([
[1., 0., 0., 0., 0.],
[0.5, 0., 0., 0.5, 0.],
[0., 0.33333333, 0.66666667, 0., 0.],
[0., 0., 0., 0.5, 0.5]])
assert np.allclose(new_policy[:4, :5], reference_answer)
print("Ok!")
Explanation: Crossentropy method steps
End of explanation
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# reset policy just in case
policy = np.ones([n_states, n_actions]) / n_actions
n_sessions = 250 # sample this many sessions
percentile = 30 # take this percent of session with highest rewards
learning_rate = 0.5  # interpolation weight: the updated policy is a blend of the new and old policy (for stability)
log = []
for i in range(100):
%time sessions = [generate_session(policy) for x in range(n_sessions)] #[ < generate a list of n_sessions new sessions > ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile=percentile) #<select elite states/actions >
new_policy = update_policy(elite_states, elite_actions) #<compute new policy >
policy = learning_rate * new_policy + (1 - learning_rate) * policy
# display results on chart
show_progress(rewards_batch, log, percentile)
Explanation: Training loop
Generate sessions, select N best and fit to those.
End of explanation
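The "Reflecting on results" note below suggests evaluating a fixed choice of actions by averaging rewards over several games. A minimal sketch of that idea (not part of the graded assignment; the function name and defaults are made up for illustration) could look like this:
# Sketch: sample one action per state from the learned policy, freeze that choice,
# then estimate its quality by averaging total rewards over several games.
def evaluate_fixed_actions(policy, n_games=100, t_max=10**4):
    fixed_actions = np.array([np.random.choice(n_actions, p=policy[s]) for s in range(n_states)])
    total_rewards = []
    for _ in range(n_games):
        s = env.reset()
        game_reward = 0.
        for _ in range(t_max):
            s, r, done, _ = env.step(fixed_actions[s])
            game_reward += r
            if done:
                break
        total_rewards.append(game_reward)
    return np.mean(total_rewards)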
from submit import submit_taxi
submit_taxi(generate_session, policy, 'tonatiuh_rangel@hotmail.com', '7uvgN7bBzpJzVw9f')
Explanation: Reflecting on results
You may have noticed that the taxi problem quickly converges from <-1000 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
In case CEM failed to learn how to win from one distinct starting point, it will simply discard it because no sessions from that starting point will make it into the "elites".
To mitigate that problem, you can either reduce the threshold for elite sessions (duct tape way) or change the way you evaluate strategy (theoretically correct way). You can first sample an action for every possible state and then evaluate this choice of actions by running several games and averaging rewards.
Submit to coursera
End of explanation |
15,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Functions used to plot
Step2: Create the dataset class
Step3: <!--Empty Space for separating topics-->
<h2 id="Model">Neural Network Module and Function for Training</h2>
Create Neural Network Module using <code>ModuleList()</code>
Step4: Create the function for training the model.
Step5: Define a function used to calculate accuracy.
Step6: <!--Empty Space for separating topics-->
<h2 id="Train">Train Different Networks Model different values for the Momentum Parameter</h2>
Crate a dataset object using <code>Data</code>
Step7: Dictionary to contain different cost and accuracy values for each epoch for different values of the momentum parameter.
Step8: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of zero.
Step9: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.1.
Step10: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.2.
Step11: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.4.
Step12: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.5.
Step13: <!--Empty Space for separating topics-->
<h2 id="Result">Compare Results of Different Momentum Terms</h2>
The plot below compares the results for the different momentum terms. In general, the cost decreases faster for larger momentum terms, but larger momentum terms also lead to larger oscillations. Although the cost drops faster with more momentum, a momentum term of 0.2 seems to reach the smallest value for the cost.
Step14: The accuracy seems to be proportional to the momentum term. | Python Code:
# Import the libraries for this lab
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from matplotlib.colors import ListedColormap
from torch.utils.data import Dataset, DataLoader
torch.manual_seed(1)
np.random.seed(1)
Explanation: <a href="http://cocl.us/pytorch_link_top">
<img src="https://cocl.us/Pytorch_top" width="750" alt="IBM 10TB Storage" />
</a>
<img src="https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Neural Networks with Momentum</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will see how different values for the momentum parameters affect the convergence rate of a neural network.</p>
<ul>
<li><a href="#Model">Neural Network Module and Function for Training</a></li>
<li><a href="#Train">Train Different Neural Networks Model different values for the Momentum Parameter</a></li>
<li><a href="#Result">Compare Results of Different Momentum Terms</a></li>
</ul>
<p>Estimated Time Needed: <strong>25 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
End of explanation
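Before comparing runs, it may help to recall what the momentum argument of the optimizer does. Below is a rough sketch of the update rule used by SGD with momentum (written in PyTorch's convention, ignoring dampening, weight decay and Nesterov); it is for illustration only and is not code used later in this lab:
def sgd_momentum_step(param, grad, velocity, lr, momentum):
    # The running "velocity" accumulates past gradients ...
    velocity = momentum * velocity + grad
    # ... and the parameter moves along the velocity instead of the raw gradient.
    return param - lr * velocity, velocity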
# Define a function for plot the decision region
def plot_decision_regions_3class(model,data_set):
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA','#00AAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00','#00AAFF'])
X=data_set.x.numpy()
y=data_set.y.numpy()
h = .02
x_min, x_max = X[:, 0].min()-0.1 , X[:, 0].max()+0.1
y_min, y_max = X[:, 1].min()-0.1 , X[:, 1].max() +0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))
XX=torch.torch.Tensor(np.c_[xx.ravel(), yy.ravel()])
_,yhat=torch.max(model(XX),1)
yhat=yhat.numpy().reshape(xx.shape)
plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)
plt.plot(X[y[:]==0,0],X[y[:]==0,1],'ro',label='y=0')
plt.plot(X[y[:]==1,0],X[y[:]==1,1],'go',label='y=1')
plt.plot(X[y[:]==2,0],X[y[:]==2,1],'o',label='y=2')
plt.title("decision region")
plt.legend()
Explanation: Functions used to plot:
End of explanation
# Create the dataset class
class Data(Dataset):
# modified from: http://cs231n.github.io/neural-networks-case-study/
# Constructor
def __init__(self, K = 3, N = 500):
D = 2
X = np.zeros((N * K, D)) # data matrix (each row = single example)
y = np.zeros(N * K, dtype = 'uint8') # class labels
for j in range(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
self.y = torch.from_numpy(y).type(torch.LongTensor)
self.x = torch.from_numpy(X).type(torch.FloatTensor)
self.len = y.shape[0]
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
# Get Length
def __len__(self):
return self.len
# Plot the diagram
def plot_data(self):
plt.plot(self.x[self.y[:] == 0, 0].numpy(), self.x[self.y[:] == 0, 1].numpy(), 'o', label = "y=0")
plt.plot(self.x[self.y[:] == 1, 0].numpy(), self.x[self.y[:] == 1, 1].numpy(), 'ro', label = "y=1")
plt.plot(self.x[self.y[:] == 2, 0].numpy(),self.x[self.y[:] == 2, 1].numpy(), 'go',label = "y=2")
plt.legend()
Explanation: Create the dataset class
End of explanation
# Create dataset object
class Net(nn.Module):
# Constructor
def __init__(self, Layers):
super(Net, self).__init__()
self.hidden = nn.ModuleList()
for input_size, output_size in zip(Layers, Layers[1:]):
self.hidden.append(nn.Linear(input_size, output_size))
# Prediction
def forward(self, activation):
L = len(self.hidden)
for (l, linear_transform) in zip(range(L), self.hidden):
if l < L - 1:
activation = F.relu(linear_transform(activation))
else:
activation = linear_transform(activation)
return activation
Explanation: <!--Empty Space for separating topics-->
<h2 id="Model">Neural Network Module and Function for Training</h2>
Create Neural Network Module using <code>ModuleList()</code>
End of explanation
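As an optional sanity check (not part of the original lab), we can build a small network with this class and pass a dummy batch through it:
# Illustrative only: 2 inputs, one hidden layer of 50 units, 3 output classes.
example_net = Net([2, 50, 3])
dummy_batch = torch.randn(4, 2)
print(example_net(dummy_batch).shape)  # expected: torch.Size([4, 3])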
# Define the function for training the model
def train(data_set, model, criterion, train_loader, optimizer, epochs = 100):
LOSS = []
ACC = []
for epoch in range(epochs):
for x, y in train_loader:
optimizer.zero_grad()
yhat = model(x)
loss = criterion(yhat, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
LOSS.append(loss.item())
ACC.append(accuracy(model,data_set))
results ={"Loss":LOSS, "Accuracy":ACC}
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(LOSS,color = color)
ax1.set_xlabel('epoch', color = color)
ax1.set_ylabel('total loss', color = color)
ax1.tick_params(axis = 'y', color = color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color = color) # we already handled the x-label with ax1
ax2.plot(ACC, color = color)
ax2.tick_params(axis = 'y', color = color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
return results
Explanation: Create the function for training the model.
End of explanation
# Define a function for calculating accuracy
def accuracy(model, data_set):
_, yhat = torch.max(model(data_set.x), 1)
return (yhat == data_set.y).numpy().mean()
Explanation: Define a function used to calculate accuracy.
End of explanation
# Create the dataset and plot it
data_set = Data()
data_set.plot_data()
data_set.y = data_set.y.view(-1)
Explanation: <!--Empty Space for separating topics-->
<h2 id="Train">Train Different Networks Model different values for the Momentum Parameter</h2>
Crate a dataset object using <code>Data</code>
End of explanation
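Optionally (not in the original lab), we can inspect the dataset object to confirm its size and the form of a single sample:
# 3 classes x 500 points each = 1500 samples; each item is an (x, y) pair.
print(len(data_set))
print(data_set[0])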
# Initialize a dictionary to contain the cost and accuracy
Results = {"momentum 0": {"Loss": 0, "Accuracy:": 0}, "momentum 0.1": {"Loss": 0, "Accuracy:": 0}}
Explanation: Dictionary to contain different cost and accuracy values for each epoch for different values of the momentum parameter.
End of explanation
# Train a model with 1 hidden layer and 50 neurons
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate)
train_loader = DataLoader(dataset = data_set, batch_size = 20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0"] = train(data_set, model, criterion, train_loader, optimizer, epochs = 100)
plot_decision_regions_3class(model, data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of zero.
End of explanation
# Train a model with 1 hidden layer and 50 neurons with 0.1 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate, momentum = 0.1)
train_loader = DataLoader(dataset = data_set, batch_size = 20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.1"] = train(data_set, model, criterion, train_loader, optimizer, epochs = 100)
plot_decision_regions_3class(model, data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.1.
End of explanation
# Train a model with 1 hidden layer and 50 neurons with 0.2 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate, momentum = 0.2)
train_loader = DataLoader(dataset = data_set, batch_size = 20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.2"] = train(data_set, model, criterion, train_loader, optimizer, epochs = 100)
plot_decision_regions_3class(model, data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.2.
End of explanation
# Train a model with 1 hidden layer and 50 neurons with 0.4 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate, momentum = 0.4)
train_loader = DataLoader(dataset = data_set, batch_size = 20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.4"] = train(data_set, model, criterion, train_loader, optimizer, epochs = 100)
plot_decision_regions_3class(model, data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.4.
End of explanation
# Train a model with 1 hidden layer and 50 neurons with 0.5 momentum
Layers = [2, 50, 3]
model = Net(Layers)
learning_rate = 0.10
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate, momentum = 0.5)
train_loader = DataLoader(dataset = data_set, batch_size = 20)
criterion = nn.CrossEntropyLoss()
Results["momentum 0.5"] = train(data_set, model, criterion, train_loader, optimizer, epochs = 100)
plot_decision_regions_3class(model,data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons and a momentum value of 0.5.
End of explanation
# Plot the Loss result for each term
for key, value in Results.items():
plt.plot(value['Loss'],label = key)
plt.legend()
plt.xlabel('epoch')
plt.ylabel('Total Loss or Cost')
Explanation: <!--Empty Space for separating topics-->
<h2 id="Result">Compare Results of Different Momentum Terms</h2>
The plot below compares results of different momentum terms. We see that in general. The Cost decreases proportionally to the momentum term, but larger momentum terms lead to larger oscillations. While the momentum term decreases faster, it seems that a momentum term of 0.2 reaches the smallest value for the cost.
End of explanation
# Plot the Accuracy result for each term
for key, value in Results.items():
plt.plot(value['Accuracy'],label= key)
plt.legend()
plt.xlabel('epoch')
plt.ylabel('Accuracy')
Explanation: The accuracy seems to be proportional to the momentum term.
End of explanation |
15,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
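For illustration only, a filled-in call might look like the line below; the name and email are placeholders, not actual document authors:
# Hypothetical example - replace with the real author details before publishing.
# DOC.set_author("Jane Doe", "jane.doe@example.org")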
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
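As an illustration only (assuming that, for a multi-valued ENUM property such as this one, each applicable choice is recorded with its own DOC.set_value call), a hypothetical selection might look like:
# Hypothetical example - keep only the choices that apply to your model.
# DOC.set_value("water")
# DOC.set_value("energy")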
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
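As a minimal illustration, a BOOLEAN property is completed with a bare Python boolean (True below is a placeholder, not a statement about any particular model):
DOC.set_value(True)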
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
15,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculating correlation functions
This document walks through using Py2PAC to calculate correlation functions with or without error estimates. We'll do this with the AngularCatalog class.
First, import the things that we'll need
Step1: Creating an AngularCatalog with randomly placed data
The first catalog we'll look at will be random data- the correlation functions should be 0 at all scales. We do this with the class method random_catalog in the AngularCatalog class. In this case, we're just making it in a rectangle over RA = 0 to 1 degrees and Dec = -0.5 to 0.5 degrees.
Step2: Calculating a correlation function
Now we want to get the correlation function from this. The things we need are a binning in separation and a random sample. The theta bins are set with AngularCatalog.set_theta_bins(min, max, nbins) and the randoms are generated with AngularCatalog.generate_random_sample(<number of randoms>).
Generating randoms
The number of randoms can be defined in a few ways.
- number_to_make=N
Step3: Setting the theta binning
Pretty simple- you just use AngularCatalog.set_theta_bins and tell it the minimum and maximum separations and the number of bins and it sets it all up. The required parameters are the min and max separation and the number of bins. The keyword parameters are logbins and unit, by default 'a' and True respectively. The logbins parameter sets whether the bins are evenly spaced in log separation (True) or linear separation (False). The unit argument sets the unit of the input minimum and maximum separations and can be arcseconds, degrees, or radians.
Step4: Different methods for calculating correlation functions
The AngularCatalog class has four functions that calculate correlation functions that differ mainly by the error estimation. $N_{gals}$ is the total number of galaxies in the data catalogs.
AngularCatalog.cf
Step5: Performing the calculations
Now we're set to actually calculate the correlation functions. For more information on exactly how the individual functions work, see the documentation.
Step6: Plotting correlation functions
The CorrelationFunction class has a plotting routine. For convenience, the AngularCatalog class can use it to plot multiple correlation functions at once. In general, correlation functions are plotted with a logarithmic x- and y-axis, but in this case we're plotting a correlation function that we expect to be zero. The log_yscale=False keyword argument does this for us.
Step7: This clearly isn't exactly 0, but is generally consistent with zero.
Correlation function management in AngularCatalogs
All the correlation functions stored in an AngularCatalog object have a name that identifies them. Each of the methods has a distinct default name, but you can also specify the name explicitly with the name keyword argument in the call to the correlation function routine. In addition, AngularCatalogs protect the already-calculated correlation functions. If a correlation function with that name already exists in the object, you must set clobber=True in the function call in order to overwrite it.
Correlation functions are stored as CorrelationFunction objects in the AngularCatalog.cfs dictionary. Below, we show that the cfs dictionary is empty, calculate a correlation function with no error bars with all default arguments, and then show that afterwards there is a correlation function in the dictionary with the default name for the cf() function.
Step8: If you try to do the same thing again, it fails unless you use clobber=True | Python Code:
import AngularCatalog_class as ac
import numpy.random as rand
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
Explanation: Calculating correlation functions
This document walks through using Py2PAC to calculate correlation functions with or without error estimates. We'll do this with the AngularCatalog class.
First, import the things that we'll need
End of explanation
#Set the seed
rand.seed(seed=1234)
#Generate the catalog
cat = ac.AngularCatalog.random_catalog(2e3, ra_range=[0, 1], dec_range=[-.5, .5])
#Show it as a scatter plot
cat.scatterplot_points(sample="data")
Explanation: Creating an AngularCatalog with randomly placed data
The first catalog we'll look at will be random data- the correlation functions should be 0 at all scales. We do this with the class method random_catalog in the AngularCatalog class. In this case, we're just making it in a rectangle over RA = 0 to 1 degrees and Dec = -0.5 to 0.5 degrees.
End of explanation
cat.generate_random_sample(number_to_make=1e4)
#Show it as a scatter plot
cat.scatterplot_points(sample="both")
Explanation: Calculating a correlation function
Now we want to get the correlation function from this. The things we need are a binning in separation and a random sample. The theta bins are set with AngularCatalog.set_theta_bins(min, max, nbins) and the randoms are generated with AngularCatalog.generate_random_sample(<number of randoms>).
Generating randoms
The number of randoms can be defined in a few ways.
- number_to_make=N: The set number of points to lay down.
- multiple_of_data=N: Makes N_data * N randoms.
- density_on_sky=N: Number of randoms per square degree.
In this case, we'll just use a set number, so number_to_make=1e4.
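For reference, the two alternatives listed above use the same method with a different keyword argument; the numbers here are placeholders chosen for illustration:
# cat.generate_random_sample(multiple_of_data=5)    # makes 5 * N_data random points
# cat.generate_random_sample(density_on_sky=1e4)    # randoms per square degree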
End of explanation
#Set the theta bins- this has default unit='a' (for arcseconds) and logbins=True
cat.set_theta_bins(20, 1000, 10)
#Examples of other ways you might call this
#cat.set_theta_bins(5.56e-3, 0.278, 10, unit='d') #same as above (modulo rounding) but in deg
#cat.set_theta_bins(20, 1000, 10, logbins=False) #Same as above but with linear bins
Explanation: Setting the theta binning
Pretty simple- you just use AngularCatalog.set_theta_bins and tell it the minimum and maximum separations and the number of bins and it sets it all up. The required parameters are the min and max separation and the number of bins. The keyword parameters are logbins and unit, by default 'a' and True respectively. The logbins parameter sets whether the bins are evenly spaced in log separation (True) or linear separation (False). The unit argument sets the unit of the input minimum and maximum separations and can be arcseconds, degrees, or radians.
End of explanation
cat.subdivide_mask(n_shortside=3, n_longside=3, preview=True)
#Actually do this subdivision
cat.subdivide_mask(n_shortside=3, n_longside=3)
Explanation: Different methods for calculating correlation functions
The AngularCatalog class has four functions that calculate correlation functions that differ mainly by the error estimation. $N_{gals}$ is the total number of galaxies in the data catalogs.
AngularCatalog.cf: calculates the correlation function without errors.
AngularCatalog.cf_bootstrap: calculates the correlation function with errors estimated with single-galaxy bootstrapping. What this means is that it calculates $N_{boots}$ correlation functions, each with $N_{gals}$ galaxies randomly selected from the data catalog with replacement so that an individual galaxy might appear several times or not at all in an iteration. The mean and standard deviation in each bin of separation are the value of and error on the correlation function at that separation.
AngularCatalog.cf_block_bootstrap: calculates the correlation function with errors from block bootstrapping. Block bootstrapping is like single-galaxy bootstrapping, but works with large areas of the image rather than single galaxies. Before you run a block bootstrapped correlation function, you divide the image into $N_{blocks}$ blocks that should be as close to square and as close to equal areas as possible. Requires mask subdivision.
AngularCatalog.cf_jackknife: calculates the correlation function with errors from jackknifing. Jackknifing is another error estimator that requires the image area to be subdivided into $N_{blocks}$ blocks. Then, $N_{blocks}$ correlation functions are calculated, each one omitting a different spatial block. The mean and standard deviation again estimate the correlation function value and error. Requires mask subdivision.
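To make the error bars concrete: as described above, each resampling method reduces to taking the per-bin mean and scatter over the iterations. A minimal NumPy sketch of that statistic (an illustration of the idea only, not Py2PAC's internal code; cf_samples is a hypothetical array of shape (n_iterations, n_theta_bins)):
import numpy as np
cf_samples = np.random.normal(0.0, 0.05, size=(20, 10))  # stand-in for the per-iteration correlation functions
cf_estimate = cf_samples.mean(axis=0)  # estimated w(theta) in each separation bin
cf_error = cf_samples.std(axis=0)      # error bar in each separation bin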
Subdividing the mask
Before we do block bootstrap or jackknife, we have to subdivide the catalog. We do this with AngularCatalog.subdivide_mask. If you want to try a subdivision method, you can use preview=True, which will show you a plot of that subdivision but will not store it. If you preview a subdivision and you decide you want to keep it, you must run the routine again with preview=False (the default value).
End of explanation
%%capture
# ^ To hide the long output that we don't really care about
#Without error bars
cat.cf(n_iter=20, clobber=True, name='noerr_cf')
#Single-galaxy bootstrapping
cat.cf_bootstrap(n_boots=20, clobber=True, name="single_gal_cf")
#Block bootstrapping
cat.cf_block_bootstrap(n_boots=20, clobber=True, name="block_bs_cf")
#Jackknife
cat.cf_jackknife(clobber=True, name="jackknife")
Explanation: Performing the calculations
Now we're set to actually calculate the correlation functions. For more information on exactly how the individual functions work, see the documentation.
End of explanation
cat.plot_cfs(which_cfs=['noerr_cf', 'single_gal_cf', 'block_bs_cf', 'jackknife'],
labels=["No errors", "Single gal bootstrap", "Block bootstrap", "Jackknife"],
fmt='o-', log_yscale=False)
Explanation: Plotting correlation functions
The CorrelationFunction class has a plotting routine. For convenience, the AngularCatalog class can use it to plot multiple correlation functions at once. In general, correlation functions are plotted with a logarithmic x- and y-axis, but in this case we're plotting a correlation function that we expect to be zero. The log_yscale=False keyword argument does this for us.
End of explanation
#Simple correlation function calculation
cat.cfs={} #Clear the correlation functions in case you're re-running this block.
print "Dictionary keys before: ", cat.cfs.keys()
print ""
cat.cf()
print ""
print "Dictionary keys after: ", cat.cfs.keys()
Explanation: This clearly isn't exactly 0, but is generally consistent with zero.
Correlation function management in AngularCatalogs
All the correlation functions stored in an AngularCatalog object have a name that identifies them. Each of the methods has a distinct default name, but you can also specify the name explicitly with the name keyword argument in the call to the correlation function routine. In addition, AngularCatalogs protect the already-calculated correlation functions. If a correlation function with that name already exists in the object, you must set clobber=True in the function call in order to overwrite it.
Correlation functions are stored as CorrelationFunction objects in the AngularCatalog.cfs dictionary. Below, we show that the cfs dictionary is empty, calculate a correlation function with no error bars with all default arguments, and then show that afterwards there is a correlation function in the dictionary with the default name for the cf() function.
End of explanation
print "Without clobber or a different name: "
cat.cf()
print "With clobber:"
print ""
cat.cf(clobber=True)
print "Without clobber but with a different name: "
print ""
cat.cf(name='cf2')
print ""
print "Correlation function names: ", cat.cfs.keys()
Explanation: If you try to do the same thing again, it fails unless you use clobber=True
End of explanation |
15,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot some left, center and right images
Step1: Plot the same images but crop to remove the sky and car bonnet
Step2: Same images but resized
Step3: Converted to HSV colour space and showing only the S channel
Step4: Converted to YUV colour space and showing only the V channel
Step5: Show some examples from Track 2 with cropping, HSV (only S channel)
Step6: The S channel in the HSV colour-space looks promising as the result is very similar for both track 1 and track 2 which has very bad shadowing...
Remove the data frame header row and train/val split
Step10: Routines for reading and processing images
Step12: Generator function (not yielding here as we want to just show the images) - displays 3 images from the batch and then the same images augmented | Python Code:
from keras.preprocessing.image import img_to_array, load_img
plt.rcParams['figure.figsize'] = (12, 6)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[1090][camera].strip())
image = img_to_array(image).astype(np.uint8)
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
Explanation: Plot some left, center and right images
End of explanation
# With cropping
plt.rcParams['figure.figsize'] = (12, 6)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[1090][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
Explanation: Plot the same images but crop to remove the sky and car bonnet
End of explanation
# With cropping then resizing
#plt.figure()
plt.rcParams['figure.figsize'] = (6, 3)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
Explanation: Same images but resized
End of explanation
# With cropping then resizing then HSV
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
hsv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2HSV)
hsv[:, :, 0] = hsv[:, :, 0] * 0
hsv[:, :, 2] = hsv[:, :, 2] * 0
plt.subplot(1, 3, i+1)
plt.imshow(hsv)
plt.axis('off')
plt.title(camera)
i += 1
Explanation: Converted to HSV colour space and showing only the S channel
End of explanation
# With cropping then resizing then YUV
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
yuv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2YUV)
yuv[:, :, 0] = yuv[:, :, 0] * 0
yuv[:, :, 1] = yuv[:, :, 1] * 0
plt.subplot(1, 3, i+1)
plt.imshow(yuv)
plt.axis('off')
plt.title(camera)
i += 1
Explanation: Converted to YUV colour space and showing only the V channel
End of explanation
#plt.figure()
plt.rcParams['figure.figsize'] = (6, 3)
i = 0
for track2_image_file in ["data/track_2_1.jpg", "data/track_2_2.jpg", "data/track_2_3.jpg"]:
track2_image = load_img(track2_image_file)
track2_image = img_to_array(track2_image).astype(np.uint8)
track2_image = track2_image[55:135, :, :]
track2_image = imresize(track2_image, (32, 16, 3))
hsv = cv2.cvtColor(track2_image.astype("uint8"), cv2.COLOR_RGB2HSV)
hsv[:, :, 0] = hsv[:, :, 0] * 0
hsv[:, :, 2] = hsv[:, :, 2] * 0
plt.subplot(1, 3, i+1)
plt.imshow(hsv)
plt.axis('off')
i += 1
Explanation: Show some examples from Track 2 with cropping, HSV (only S channel)
End of explanation
# Remove header
data_frame = data_frame.ix[1:]
# shuffle the data (frac=1 means 100% of the data)
data_frame = data_frame.sample(frac=1).reset_index(drop=True)
# 80-20 training validation split
training_split = 0.8
num_rows_training = int(data_frame.shape[0]*training_split)
print(num_rows_training)
training_data = data_frame.loc[0:num_rows_training-1]
validation_data = data_frame.loc[num_rows_training:]
# release the main data_frame from memory
data_frame = None
Explanation: The S channel in the HSV colour-space looks promising as the result is very similar for both track 1 and track 2 which has very bad shadowing...
Remove the data frame header row and train/val split
End of explanation
def read_images(img_dataframe):
#from IPython.core.debugger import Tracer
#Tracer()() #this one triggers the debugger
imgs = np.empty([len(img_dataframe), 160, 320, 3])
angles = np.empty([len(img_dataframe)])
j = 0
for i, row in img_dataframe.iterrows():
# Randomly pick left, center, right camera image and adjust steering angle
# as necessary
camera = np.random.choice(["center", "left", "right"])
imgs[j] = imread("data/" + row[camera].strip())
steering = row["steering"]
if camera == "left":
steering += 0.25
elif camera == "right":
steering -= 0.25
angles[j] = steering
j += 1
#for i, path in enumerate(img_paths):
# print("data/" + path)
# imgs[i] = imread("data/" + path)
return imgs, angles
def resize(imgs, shape=(32, 16, 3)):
Resize images to shape.
height, width, channels = shape
imgs_resized = np.empty([len(imgs), height, width, channels])
for i, img in enumerate(imgs):
imgs_resized[i] = imresize(img, shape)
#imgs_resized[i] = cv2.resize(img, (16, 32))
return imgs_resized
def normalize(imgs):
Normalize images to the range [-0.5, 0.5].
#return imgs / (255.0 / 2) - 1
return imgs / 255.0 - 0.5
def augment_brightness(images):
:param images: Input images
:return: images with randomly adjusted brightness
new_imgs = np.empty_like(images)
for i, image in enumerate(images):
#rgb = toimage(image)
# convert to HSV so that its easy to adjust brightness
hsv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2HSV)
# randomly generate the brightness reduction factor
# Add a constant so that it prevents the image from being completely dark
random_bright = .25+np.random.uniform()
# Apply the brightness reduction to the V channel
hsv[:,:,2] = hsv[:,:,2]*random_bright
# Clip the image so that no pixel has value greater than 255
hsv[:, :, 2] = np.clip(hsv[:, :, 2], a_min=0, a_max=255)
# convert to RBG again
new_imgs[i] = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
return new_imgs
def preprocess(imgs):
#imgs_processed = resize(imgs)
#imgs_processed = rgb2gray(imgs_processed)
imgs_processed = normalize(imgs)
return imgs_processed
Explanation: Routines for reading and processing images
End of explanation
def gen_batches(data_frame, batch_size):
Generates random batches of the input data.
:param imgs: The input images.
:param angles: The steering angles associated with each image.
:param batch_size: The size of each minibatch.
:yield: A tuple (images, angles), where both images and angles have batch_size elements.
#while True:
df_batch = data_frame.sample(n=batch_size)
images_raw, angles_raw = read_images(df_batch)
plt.figure()
# Show a sample of 3 images
for i in range(3):
plt.subplot(2, 3, i+1)
plt.imshow(images_raw[i].astype("uint8"))
plt.axis("off")
plt.title("%.8f" % angles_raw[i])
# Augment data by altering brightness of images
#plt.figure()
augmented_imgs = augment_brightness(images_raw)
for i in range(3):
plt.subplot(2, 3, i+4)
plt.imshow(augmented_imgs[i].astype("uint8"))
plt.axis('off')
plt.title("%.8f" % angles_raw[i])
#batch_imgs, batch_angles = augment(preprocess(batch_imgs_raw), angles_raw)
# batch_imgs, batch_angles = augment(batch_imgs_raw, angles_raw)
# batch_imgs = preprocess(batch_imgs)
# yield batch_imgs, batch_angles
gen_batches(training_data, 3)
Explanation: Generator function (not yielding here as we want to just show the images) - displays 3 images from the batch and then the same images augmented
End of explanation |
15,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-hr5', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-HR5
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
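For instance, a purely illustrative entry (the name and email below are placeholders, not the real document authors) would look like:
DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical values for illustration only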
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
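As an illustration only, a provision entry is recorded by passing one of the valid choices listed in the cell above as a string, for example:
DOC.set_value("C")  # any entry from the enumeration above; "C" is used here purely as an example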
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
15,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
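As a hedged illustration (the function name build_inputs and the exact dtypes are assumptions, not given above), the placeholders might be created like this:
```python
import tensorflow as tf  # TensorFlow 1.x, as used throughout this notebook

def build_inputs(batch_size, num_steps):
    # Batches of encoded characters and their targets, shifted by one step
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    # Scalar (0-D) placeholder: no shape is given, as described above
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    return inputs, targets, keep_prob
```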
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
    ...
```
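For completeness, here is one hedged sketch of how the per-layer cell construction and stacking could look under TensorFlow 1.1+ (build_lstm, lstm_size, num_layers and batch_size are assumed names; tf is the TensorFlow import used elsewhere in this notebook):
```python
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    def build_cell(num_units, keep_prob):
        # A fresh cell object per layer, so each layer gets its own weights
        lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    # Stack the layers and expose a zeroed initial state for the batch
    cell = tf.contrib.rnn.MultiRNNCell(
        [build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    return cell, initial_state
```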
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
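A hedged sketch of this output layer (build_output and the variable names are assumptions; the scope name 'softmax' is illustrative):
```python
def build_output(lstm_output, in_size, out_size):
    # Concatenate the per-step outputs and flatten to shape (M*N) x L
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])

    # Separate variable scope so these weights don't clash with the LSTM's
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))

    logits = tf.matmul(x, softmax_w) + softmax_b
    out = tf.nn.softmax(logits, name='predictions')
    return out, logits
```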
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
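A hedged sketch of the loss computation (build_loss is an assumed name):
```python
def build_loss(logits, targets, lstm_size, num_classes):
    # One-hot encode the targets and reshape them to match the logits
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

    # Softmax cross-entropy per character, averaged over the batch
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    return tf.reduce_mean(loss)
```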
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
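One way this could look (a sketch; the grad_clip threshold and the use of tf.clip_by_global_norm are assumptions, since the text only says the gradients are clipped at a threshold):
```python
def build_optimizer(loss, learning_rate, grad_clip):
    # Clip the gradients, then apply them with Adam
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    return train_op.apply_gradients(zip(grads, tvars))
```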
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
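Building on the sketches above, the class might be wired together roughly like this (CharRNN and every default value below are assumptions, not taken from the text):
```python
class CharRNN:
    def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128,
                 num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False):
        # When sampling we feed one character at a time
        batch_size, num_steps = (1, 1) if sampling else (batch_size, num_steps)
        tf.reset_default_graph()

        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        # One-hot encode the inputs, then run them through the stacked LSTM cells
        x_one_hot = tf.one_hot(self.inputs, num_classes)
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state

        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```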
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this. (An illustrative set of starting values is sketched right after this list.)
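One plausible starting configuration (these numbers are illustrative guesses, not values prescribed by the text):
```python
batch_size = 100      # sequences per batch
num_steps = 100       # characters per sequence window
lstm_size = 512       # units per hidden LSTM layer
num_layers = 2        # stacked LSTM layers
learning_rate = 0.001
keep_prob = 0.5       # dropout keep probability while training
```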
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
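A hedged sketch of such a training loop, reusing the assumed names from the earlier sketches plus the vocab, encoded and get_batches objects defined in this notebook (epochs, save_every_n and the checkpoint directory are illustrative):
```python
epochs = 20
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers,
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    counter = 0
    for e in range(epochs):
        # Reset the LSTM state at the start of each epoch
        new_state = sess.run(model.initial_state)
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            # Carry the final LSTM state over to the next batch
            batch_loss, new_state, _ = sess.run(
                [model.loss, model.final_state, model.optimizer], feed_dict=feed)
            if counter % save_every_n == 0:
                print('Step: {}, loss: {:.4f}'.format(counter, batch_loss))
                # Assumes a checkpoints/ directory already exists
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
```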
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
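For instance (assuming checkpoints were written to a checkpoints/ directory), the saved checkpoints can be inspected with:
```python
tf.train.get_checkpoint_state('checkpoints')
```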
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
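A hedged sketch of that top-N sampling step (pick_top_n is an assumed name; preds is the softmax output for the most recent character):
```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Keep only the top_n probabilities, renormalize, then sample a character id
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    return np.random.choice(vocab_size, 1, p=p)[0]
```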
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
# // to get value without decimal
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches*characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:,n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:,:-1], y[:,-1] = x[:,1:], x[:,0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here; type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
    # Note: with TensorFlow >= 1.1 you should create a new cell object per layer, as described in the explanation below
    cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
    initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
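If you want to check your work, here is a minimal sketch of one way to fill in the blanks above (it assumes the same TensorFlow 1.x tf.contrib API used elsewhere in this notebook; the initializer choice is arbitrary, and the lines should be indented to sit inside build_output):
```python
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])

with tf.variable_scope('softmax'):
    softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(out_size))

logits = tf.matmul(x, softmax_w) + softmax_b
out = tf.nn.softmax(logits, name='predictions')
```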
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
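If you get stuck, a possible implementation of the function body (again assuming the TensorFlow 1.x API used throughout this notebook) looks like:
```python
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
```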
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
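As a reference, one way the blanks can be wired together (a sketch that assumes the helper functions defined earlier in this notebook) is:
```python
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

# one-hot encode the input tokens, then let dynamic_rnn run the sequence steps
x_one_hot = tf.one_hot(self.inputs, num_classes)
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state

self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```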
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
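The training cell below does not actually print the parameter count mentioned above, so here is a small optional helper you could run after building the model to get that number (a hypothetical utility, not part of the original notebook; it simply sums the sizes of all trainable variables):
```python
def count_trainable_parameters():
    # total number of scalar weights across every trainable variable in the current graph
    return int(sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()))
```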
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
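    # keep only the top_n most likely characters: zero out the rest, renormalize, then sample one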
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
15,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Again, I'll load the NSFG pregnancy file and select live births
Step2: Here's the histogram of birth weights
Step3: To normalize the disrtibution, we could divide through by the total count
Step4: The result is a Probability Mass Function (PMF).
Step5: More directly, we can create a Pmf object.
Step6: Pmf provides Prob, which looks up a value and returns its probability
Step7: The bracket operator does the same thing.
Step8: The Incr method adds to the probability associated with a given values.
Step9: The Mult method multiplies the probability associated with a value.
Step10: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
Step11: Normalize divides through by the total probability, making it 1 again.
Step12: Here's the PMF of pregnancy length for live births.
Step13: Here's what it looks like plotted with Hist, which makes a bar graph.
Step14: Here's what it looks like plotted with Pmf, which makes a step function.
Step15: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
Step16: Here are the distributions of pregnancy length.
Step17: And here's the code that replicates one of the figures in the chapter.
Step18: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
Step19: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
Step20: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
Step21: The following graph shows the difference between the actual and observed distributions.
Step22: The observed mean is substantially higher than the actual.
Step23: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
Step24: We can unbias the biased PMF
Step25: And plot the two distributions to confirm they are the same.
Step26: Pandas indexing
Here's an example of a small DataFrame.
Step27: We can specify column names when we create the DataFrame
Step28: We can also specify an index that contains labels for the rows.
Step29: Normal indexing selects columns.
Step30: We can use the loc attribute to select rows.
Step31: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute
Step32: loc can also take a list of labels.
Step33: If you provide a slice of labels, DataFrame uses it to select rows.
Step34: If you provide a slice of integers, DataFrame selects rows by integer index.
Step35: But notice that one method includes the last elements of the slice and one does not.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise
Step36: Exercise
Step37: Exercise
Step38: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
Explanation: Again, I'll load the NSFG pregnancy file and select live births:
End of explanation
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')
Explanation: Here's the histogram of birth weights:
End of explanation
n = hist.Total()
pmf = hist.Copy()
for x, freq in hist.Items():
pmf[x] = freq / n
Explanation: To normalize the distribution, we could divide through by the total count:
End of explanation
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PMF')
Explanation: The result is a Probability Mass Function (PMF).
End of explanation
pmf = thinkstats2.Pmf([1, 2, 2, 3, 5])
pmf
Explanation: More directly, we can create a Pmf object.
End of explanation
pmf.Prob(2)
Explanation: Pmf provides Prob, which looks up a value and returns its probability:
End of explanation
pmf[2]
Explanation: The bracket operator does the same thing.
End of explanation
pmf.Incr(2, 0.2)
pmf[2]
Explanation: The Incr method adds to the probability associated with a given values.
End of explanation
pmf.Mult(2, 0.5)
pmf[2]
Explanation: The Mult method multiplies the probability associated with a value.
End of explanation
pmf.Total()
Explanation: Total returns the total probability (which is no longer 1, because we changed one of the probabilities).
End of explanation
pmf.Normalize()
pmf.Total()
Explanation: Normalize divides through by the total probability, making it 1 again.
End of explanation
pmf = thinkstats2.Pmf(live.prglngth, label='prglngth')
Explanation: Here's the PMF of pregnancy length for live births.
End of explanation
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
Explanation: Here's what it looks like plotted with Hist, which makes a bar graph.
End of explanation
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='Pmf')
Explanation: Here's what it looks like plotted with Pmf, which makes a step function.
End of explanation
live, firsts, others = first.MakeFrames()
Explanation: We can use MakeFrames to return DataFrames for all live births, first babies, and others.
End of explanation
first_pmf = thinkstats2.Pmf(firsts.prglngth, label='firsts')
other_pmf = thinkstats2.Pmf(others.prglngth, label='others')
Explanation: Here are the distributions of pregnancy length.
End of explanation
width=0.45
axis = [27, 46, 0, 0.6]
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='PMF', axis=axis)
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel='Pregnancy length(weeks)', axis=axis)
Explanation: And here's the code that replicates one of the figures in the chapter.
End of explanation
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(xlabel='Pregnancy length(weeks)', ylabel='Difference (percentage points)')
Explanation: Here's the code that generates a plot of the difference in probability (in percentage points) between first babies and others, for each week of pregnancy (showing only pregnancies considered "full term").
End of explanation
d = { 7: 8, 12: 8, 17: 14, 22: 4,
27: 6, 32: 12, 37: 8, 42: 3, 47: 2 }
pmf = thinkstats2.Pmf(d, label='actual')
Explanation: Biasing and unbiasing PMFs
Here's the example in the book showing operations we can perform with Pmf objects.
Suppose we have the following distribution of class sizes.
End of explanation
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
Explanation: This function computes the biased PMF we would get if we surveyed students and asked about the size of the classes they are in.
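In formula form, biasing reweights each probability by its value and then renormalizes, $\mathrm{PMF}_{\mathrm{biased}}(x) \propto x \, \mathrm{PMF}(x)$; the UnbiasPmf function defined a few cells below applies the inverse operation, dividing by $x$ instead of multiplying.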
End of explanation
biased_pmf = BiasPmf(pmf, label='observed')
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
Explanation: The following graph shows the difference between the actual and observed distributions.
End of explanation
print('Actual mean', pmf.Mean())
print('Observed mean', biased_pmf.Mean())
Explanation: The observed mean is substantially higher than the actual.
End of explanation
def UnbiasPmf(pmf, label=None):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf[x] *= 1/x
new_pmf.Normalize()
return new_pmf
Explanation: If we were only able to collect the biased sample, we could "unbias" it by applying the inverse operation.
End of explanation
unbiased = UnbiasPmf(biased_pmf, label='unbiased')
print('Unbiased mean', unbiased.Mean())
Explanation: We can unbias the biased PMF:
End of explanation
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, unbiased])
thinkplot.Config(xlabel='Class size', ylabel='PMF')
Explanation: And plot the two distributions to confirm they are the same.
End of explanation
import numpy as np
import pandas
array = np.random.randn(4, 2)
df = pandas.DataFrame(array)
df
Explanation: Pandas indexing
Here's an example of a small DataFrame.
End of explanation
columns = ['A', 'B']
df = pandas.DataFrame(array, columns=columns)
df
Explanation: We can specify column names when we create the DataFrame:
End of explanation
index = ['a', 'b', 'c', 'd']
df = pandas.DataFrame(array, columns=columns, index=index)
df
Explanation: We can also specify an index that contains labels for the rows.
End of explanation
df['A']
Explanation: Normal indexing selects columns.
End of explanation
df.loc['a']
Explanation: We can use the loc attribute to select rows.
End of explanation
df.iloc[0]
Explanation: If you don't want to use the row labels and prefer to access the rows using integer indices, you can use the iloc attribute:
End of explanation
indices = ['a', 'c']
df.loc[indices]
Explanation: loc can also take a list of labels.
End of explanation
df['a':'c']
Explanation: If you provide a slice of labels, DataFrame uses it to select rows.
End of explanation
df[0:2]
Explanation: If you provide a slice of integers, DataFrame selects rows by integer index.
End of explanation
# Create original PMF
resp = nsfg.ReadFemResp()
numkdhh_pmf = thinkstats2.Pmf(resp['numkdhh'], label='actual')
numkdhh_pmf
# Create copy and confirm values
pmf = numkdhh_pmf.Copy()
print(pmf)
print(pmf.Total())
print('mean', pmf.Mean())
# Weight PMF by number of children that would respond with each value
def BiasPmf(pmf, label):
child_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
child_pmf.Mult(x, x)
child_pmf.Normalize()
return child_pmf
child_pmf = BiasPmf(pmf, 'childs_view')
print(child_pmf)
print(child_pmf.Total())
print('mean', child_pmf.Mean())
# Plot
thinkplot.PrePlot(2)
thinkplot.Pmfs([pmf, child_pmf])
thinkplot.Show(xlabel='Children In Family', ylabel='PMF')
# True mean
print('True mean', pmf.Mean())
# Mean based on the children's responses
print('Child view mean', child_pmf.Mean())
Explanation: But notice that one method includes the last elements of the slice and one does not.
In general, I recommend giving labels to the rows and names to the columns, and using them consistently.
Exercises
Exercise: Something like the class size paradox appears if you survey children and ask how many children are in their family. Families with many children are more likely to appear in your sample, and families with no children have no chance to be in the sample.
Use the NSFG respondent variable numkdhh to construct the actual distribution for the number of children under 18 in the respondents' households.
Now compute the biased distribution we would see if we surveyed the children and asked them how many children under 18 (including themselves) are in their household.
Plot the actual and biased distributions, and compute their means.
End of explanation
def PmfMean(pmf):
mean=0
for x, p in pmf.Items():
mean += x*p
return mean
PmfMean(child_pmf)
def PmfVar(pmf):
variance=0
pmf_mean=PmfMean(pmf)
for x, p in pmf.Items():
variance += p * np.power(x-pmf_mean, 2)
return variance
PmfVar(child_pmf)
print('Check Mean =', PmfMean(child_pmf) == thinkstats2.Pmf.Mean(child_pmf))
print('Check Variance = ', PmfVar(child_pmf) == thinkstats2.Pmf.Var(child_pmf))
Explanation: Exercise: Write functions called PmfMean and PmfVar that take a Pmf object and compute the mean and variance. To test these methods, check that they are consistent with the methods Mean and Var provided by Pmf.
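For reference, the functions above implement the standard PMF moment formulas, $\bar{x} = \sum_i p_i\, x_i$ for the mean and $S^2 = \sum_i p_i\, (x_i - \bar{x})^2$ for the variance, where $p_i$ is the probability of value $x_i$.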
End of explanation
live, firsts, others = first.MakeFrames()
preg.iloc[0:2].prglngth
preg_map = nsfg.MakePregMap(live)
hist = thinkstats2.Hist()
for case, births in preg_map.items():
if len(births) >= 2:
pair = preg.loc[births[0:2]].prglngth
diff = pair.iloc[1] - pair.iloc[0]
hist[diff] += 1
thinkplot.Hist(hist)
pmf = thinkstats2.Pmf(hist)
PmfMean(pmf)
Explanation: Exercise: I started this book with the question, "Are first babies more likely to be late?" To address it, I computed the difference in means between groups of babies, but I ignored the possibility that there might be a difference between first babies and others for the same woman.
To address this version of the question, select respondents who have at least two live births and compute pairwise differences. Does this formulation of the question yield a different result?
Hint: use nsfg.MakePregMap:
End of explanation
import relay
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, 'actual speeds')
thinkplot.Pmf(pmf)
thinkplot.Config(xlabel='Speed (mph)', ylabel='PMF')
def ObservedPmf(pmf, speed, label):
observed_pmf = pmf.Copy(label=label)
for value in observed_pmf.Values():
diff = abs(speed - value)
observed_pmf[value] *= diff
observed_pmf.Normalize()
return observed_pmf
observed = ObservedPmf(pmf, 7, 'observed speeds')
thinkplot.Hist(observed)
Explanation: Exercise: In most foot races, everyone starts at the same time. If you are a fast runner, you usually pass a lot of people at the beginning of the race, but after a few miles everyone around you is going at the same speed.
When I ran a long-distance (209 miles) relay race for the first time, I noticed an odd phenomenon: when I overtook another runner, I was usually much faster, and when another runner overtook me, he was usually much faster.
At first I thought that the distribution of speeds might be bimodal; that is, there were many slow runners and many fast runners, but few at my speed.
Then I realized that I was the victim of a bias similar to the effect of class size. The race was unusual in two ways: it used a staggered start, so teams started at different times; also, many teams included runners at different levels of ability.
As a result, runners were spread out along the course with little relationship between speed and location. When I joined the race, the runners near me were (pretty much) a random sample of the runners in the race.
So where does the bias come from? During my time on the course, the chance of overtaking a runner, or being overtaken, is proportional to the difference in our speeds. I am more likely to catch a slow runner, and more likely to be caught by a fast runner. But runners at the same speed are unlikely to see each other.
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners’ speeds, and the speed of a running observer, and returns a new Pmf representing the distribution of runners’ speeds as seen by the observer.
To test your function, you can use relay.py, which reads the results from the James Joyce Ramble 10K in Dedham MA and converts the pace of each runner to mph.
Compute the distribution of speeds you would observe if you ran a relay race at 7 mph with this group of runners.
End of explanation |
15,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time-average EM Cubes
Calculate the time-averaged emission measure distributions from the exact thermodynamic results and save them to be easily reloaded and used later.
Step1: Iterate over all "true" emission measure distributions and time-average them over the given interval.
Step2: Visualize the results to make sure we've averaged correctly.
Step3: Now save the results to our local temporary data folder. | Python Code:
import os
import io
import copy
import glob
import urllib
import numpy as np
import h5py
import matplotlib.pyplot as plt
import matplotlib.colors
import seaborn as sns
import astropy.units as u
import astropy.constants as const
from scipy.ndimage import gaussian_filter
from sunpy.map import Map,GenericMap
import synthesizAR
from synthesizAR.util import EMCube
from synthesizAR.instruments import InstrumentHinodeEIS
%matplotlib inline
base1 = '/data/datadrive1/ar_forward_modeling/systematic_ar_study/noaa1109_tn{}'
base2 = '/data/datadrive2/ar_viz/systematic_ar_study/noaa1109_tn{}/'
eis = InstrumentHinodeEIS([7.5e3,1.25e4]*u.s)
frequencies = [250,750,'750-ion',2500,5000]
temperature_bin_edges = 10.**(np.arange(5.6, 7.0, 0.05))*u.K
Explanation: Time-average EM Cubes
Calculate the time-averaged emission measure distributions from the exact thermodynamic results and save them to be easily reloaded and used later.
End of explanation
time_averaged_ems = {'{}'.format(freq):None for freq in frequencies}
for freq in frequencies:
print('tn = {} s'.format(freq))
if type(freq) == int:
base = base1
else:
base = base2
# setup field and observer objects
field = synthesizAR.Skeleton.restore(os.path.join(base.format(freq),'field_checkpoint'))
observer = synthesizAR.Observer(field,[eis],ds=field._convert_angle_to_length(0.4*u.arcsec))
observer.build_detector_files(base.format(freq))
# iterate over time
for time in eis.observing_time:
print('t = {}'.format(time))
emcube = observer.make_emission_measure_map(time,eis,temperature_bin_edges=temperature_bin_edges)
if time_averaged_ems['{}'.format(freq)] is None:
time_averaged_ems['{}'.format(freq)] = emcube
for m in time_averaged_ems['{}'.format(freq)]:
m.data /= eis.observing_time.shape[0]
else:
for m1,m2 in zip(time_averaged_ems['{}'.format(freq)],emcube):
m1.data += m2.data/eis.observing_time.shape[0]
Explanation: Iterate over all "true" emission measure distributions and time-average them over the given interval.
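The loop above keeps a running average instead of storing every snapshot: each emission measure cube contributes $\mathrm{EM}(t_i)/N$, so after the loop each map holds $\langle\mathrm{EM}\rangle = \frac{1}{N}\sum_{i=1}^{N}\mathrm{EM}(t_i)$, where $N$ is the number of observing times.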
End of explanation
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(time_averaged_ems['250'].temperature_bin_edges.shape[0]-1):
    # apply a Gaussian filter to the submap data
tmp = time_averaged_ems['250'][i].submap(u.Quantity([250,500],u.arcsec),u.Quantity([150,400],u.arcsec))
tmp.data = gaussian_filter(tmp.data,
eis.channels[0]['gaussian_width']['x'].value
)
# set up axes properly and add plot
ax = fig.add_subplot(6,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,
annotate=False,
cmap=matplotlib.cm.get_cmap('magma'),
norm=matplotlib.colors.SymLogNorm(1, vmin=1e25, vmax=1e29)
)
# set title and labels
ax.set_title(r'${t0:.2f}-{t1:.2f}$ {uni}'.format(t0=np.log10(tmp.meta['temp_a']),
t1=np.log10(tmp.meta['temp_b']),uni='K'))
if i<25:
ax.coords[0].set_ticklabel_visible(False)
else:
ax.set_xlabel(r'$x$ ({})'.format(u.Unit(tmp.meta['cunit1'])))
if i%5==0:
ax.set_ylabel(r'$y$ ({})'.format(u.Unit(tmp.meta['cunit2'])))
else:
ax.coords[1].set_ticklabel_visible(False)
cbar = fig.colorbar(im,cax=cax)
Explanation: Visualize the results to make sure we've averaged correctly.
End of explanation
for key in time_averaged_ems:
time_averaged_ems[key].save('../data/em_cubes_true_tn{}_t7500-12500.h5'.format(key))
foo = EMCube.restore('../data/em_cubes_tn250_t7500-12500.h5')
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(foo.temperature_bin_edges.shape[0]-1):
    # apply a Gaussian filter to the submap data
tmp = foo[i].submap(u.Quantity([250,500],u.arcsec),u.Quantity([150,400],u.arcsec))
tmp.data = gaussian_filter(tmp.data,
eis.channels[0]['gaussian_width']['x'].value
)
# set up axes properly and add plot
ax = fig.add_subplot(6,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,
annotate=False,
cmap=matplotlib.cm.get_cmap('magma'),
norm=matplotlib.colors.SymLogNorm(1, vmin=1e25, vmax=1e29)
)
# set title and labels
ax.set_title(r'${t0:.2f}-{t1:.2f}$ {uni}'.format(t0=np.log10(tmp.meta['temp_a']),
t1=np.log10(tmp.meta['temp_b']),uni='K'))
if i<25:
ax.coords[0].set_ticklabel_visible(False)
else:
ax.set_xlabel(r'$x$ ({})'.format(u.Unit(tmp.meta['cunit1'])))
if i%5==0:
ax.set_ylabel(r'$y$ ({})'.format(u.Unit(tmp.meta['cunit2'])))
else:
ax.coords[1].set_ticklabel_visible(False)
cbar = fig.colorbar(im,cax=cax)
Explanation: Now save the results to our local temporary data folder.
End of explanation |
15,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tracking the Smoke Caused by the fires
In this example we show how to use HRRR Smoke Experimental dataset to analyse smoke in the US and we will also download historical fire data from Cal Fire web page to visualize burned area since 2013.
The High-Resolution Rapid Refresh Smoke (HRRRSmoke) is a three-dimensional model that allows simulation of mesoscale flows and smoke dispersion over complex terrain, in the boundary layer and aloft at high spatial resolution over the CONUS domain. The smoke model comprises a suite of fire and environmental products for forecasters during the fire weather season. Products derived from the HRRRSmoke model include the Fire Radiative Power (FRP), Near-Surface Smoke (PM2.5), and Vertically Integrated Smoke, to complement the 10-meter winds, 1-hour precipitation, 2-meter temperature and surface visibility experimental forecast products. Keep in mind, that this dataset is EXPERIMENTAL. Therefore, they should not be used to make decisions regarding safety of life or property.
HRRR Smoke has many different weather parameters, two of them are directly smoke related - Column-integrated mass density and Mass density (concentration) @ Specified height level above ground. First is vertically-integrated smoke and second smoke on the lowest model level (8 m).
As we would like to have data about the Continental United States, we will download the data by using the Package API. Then we will create a widget where you can choose a timestamp by using a slider. After that, we will also save the same data as a GIF to make sharing the results with friends and colleagues more fun. And finally, we will compare the smoke data with CAMS particulate matter data to find out if there's a correlation between them, as HRRR Smoke is still experimental.
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: At first, we need to define the dataset name and a variable we want to use.
Step3: Then we define the spatial range. We decided to analyze the US, where catastrophic wildfires are unfortunately taking place at the moment and influencing air quality.
Step4: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
Step5: Work with the downloaded files
We start with opening the files with xarray. After that, we will create a map plot with a time slider, then make a GIF using the images, then we will do the same thing for closer area - California; and finally, we will download csv file about fires in California to visualize yearly incidents data as a bar chart.
Step6: Here we are making a Basemap of the US that we will use for showing the data.
Step7: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider.
As the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better.
On the map we can see that the areas near fires have more smoke, but it travels pretty far. Depending on when the notebook is run, we can see very different results.
But first we define minimum, maximum and also colormap.
Step8: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
Step9: With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.
Step10: As we are interested in California fires right now, it would make sense to make animation of only California area as well. So people can be prepared when smoke hits their area. The model has pretty good spatial resolution as well - 3 km, which makes tracking the smoke easier.
Step11: Finally, we will remove the package we downloaded.
Step12: Data about Burned Area from Cal Fire
Now we will download a CSV file from the Cal Fire web page and illustrate how many acres were burned each year since 2013.
Step13: Here we convert incident_dateonly_created column to datetime, so it's easier to group data by year.
Step14: Below you can see the data from acres_burned.csv file. It has information about each incident. This time we only compute total acres burned each year.
Step15: Computing yearly sums. For some reason there are many years without much data, so we will filter those out. We also reset the index, since we don't want dates as the index, and add a year column.
Step16: We can see the computed data below.
Step17: Finally, we will make a bar chart of the data. We are using seaborn this time for plotting, and to visualize it better we added a colormap to the bar chart as well.
Image will be saved into the working directory. | Python Code:
%matplotlib notebook
%matplotlib inline
import numpy as np
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
import ipywidgets as widgets
from mpl_toolkits.basemap import Basemap,shiftgrid
import dh_py_access.package_api as package_api
import matplotlib.colors as colors
import warnings
import datetime
import shutil
import imageio
import seaborn as sns
import pandas as pd
import os
import matplotlib as mpl
import wget
warnings.filterwarnings("ignore")
Explanation: Tracking the Smoke Caused by the fires
In this example we show how to use HRRR Smoke Experimental dataset to analyse smoke in the US and we will also download historical fire data from Cal Fire web page to visualize burned area since 2013.
The High-Resolution Rapid Refresh Smoke (HRRRSmoke) is a three-dimensional model that allows simulation of mesoscale flows and smoke dispersion over complex terrain, in the boundary layer and aloft at high spatial resolution over the CONUS domain. The smoke model comprises a suite of fire and environmental products for forecasters during the fire weather season. Products derived from the HRRRSmoke model include the Fire Radiative Power (FRP), Near-Surface Smoke (PM2.5), and Vertically Integrated Smoke, to complement the 10-meter winds, 1-hour precipitation, 2-meter temperature and surface visibility experimental forecast products. Keep in mind, that this dataset is EXPERIMENTAL. Therefore, they should not be used to make decisions regarding safety of life or property.
HRRR Smoke has many different weather parameters, two of them are directly smoke related - Column-integrated mass density and Mass density (concentration) @ Specified height level above ground. First is vertically-integrated smoke and second smoke on the lowest model level (8 m).
As we would like to have data about the Continental United States, we will download the data by using the Package API. Then we will create a widget where you can choose a timestamp by using a slider. After that, we will also save the same data as a GIF to make sharing the results with friends and colleagues more fun. And finally, we will compare the smoke data with CAMS particulate matter data to find out if there's a correlation between them, as HRRR Smoke is still experimental.
End of explanation
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
dh = datahub.datahub(server,version,API_key)
dataset = 'noaa_hrrr_wrf_smoke'
variable_name1 = 'Mass_density_concentration_height_above_ground'
Explanation: At first, we need to define the dataset name and a variable we want to use.
End of explanation
# reftime = datetime.datetime.strftime(datetime.datetime.today(), '%Y-%m-%d') + 'T00:00:00'
area_name = 'usa'
today_hr = datetime.datetime.strftime(datetime.datetime.today(),'%Y%m%dT%H')
latitude_north = 49; longitude_west = -127
latitude_south = 26; longitude_east = -70.5
Explanation: Then we define the spatial range. We decided to analyze the US, where catastrophic wildfires are unfortunately taking place at the moment and influencing air quality.
End of explanation
package_hrrr = package_api.package_api(dh,dataset,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name+today_hr)
package_hrrr.make_package()
package_hrrr.download_package()
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
End of explanation
dd1 = xr.open_dataset(package_hrrr.local_file_name)
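# wrap longitudes from the 0..360 convention into -180..180 so they line up with the Basemap extent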
dd1['longitude'] = ((dd1.lon+180) % 360) - 180
dd1[variable_name1].data[dd1[variable_name1].data < 0] = 0
dd1[variable_name1].data[np.isnan(dd1[variable_name1].data)] = 0  # '== np.nan' never matches, so use np.isnan to zero out missing values
Explanation: Work with the downloaded files
We start with opening the files with xarray. After that, we will create a map plot with a time slider, then make a GIF using the images, then we will do the same thing for closer area - California; and finally, we will download csv file about fires in California to visualize yearly incidents data as a bar chart.
End of explanation
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'h', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(dd1.longitude.data,dd1.lat.data)
lonmap,latmap = m(lons,lats)
Explanation: Here we are making a Basemap of the US that we will use for showing the data.
End of explanation
vmax = np.nanmax(dd1[variable_name1].data)
vmin = 2
cmap = mpl.cm.twilight.colors[:-100]
tmap = mpl.colors.LinearSegmentedColormap.from_list('twilight_edited', cmap)
def loadimg(k):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1[variable_name1].data[k][0],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = tmap)
ilat,ilon = np.unravel_index(np.nanargmax(dd1[variable_name1].data[k][0]),dd1[variable_name1].data[k][0].shape)
cbar = plt.colorbar(pcm,fraction=0.024, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
ttl = plt.title('Near Surface Smoke ' + str(dd1[variable_name1].time[k].data)[:-10],fontsize=20,fontweight = 'bold')
ttl.set_position([.5, 1.05])
cbar.set_label(dd1[variable_name1].units)
m.drawcountries()
m.drawstates()
m.drawcoastlines()
print("Maximum: ","%.2f" % np.nanmax(dd1[variable_name1].data[k][0]))
plt.show()
widgets.interact(loadimg, k=widgets.IntSlider(min=0,max=len(dd1[variable_name1].data)-1,step=1,value=0, layout=widgets.Layout(width='100%')))
Explanation: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider.
As the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better.
On the map we can see that the areas near fires have more smoke, but it travels pretty far. Depending on when the notebook is run, we can see very different results.
But first we define minimum, maximum and also colormap.
End of explanation
loadimg(9)
Explanation: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
End of explanation
def make_ani(m,lonmap,latmap,aniname,smaller_area=False):
if smaller_area==True:
fraction = 0.035
fontsize = 13
else:
fraction = 0.024
fontsize = 20
folder = './anim/'
for k in range(len(dd1[variable_name1])):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
if not os.path.exists(filename):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1[variable_name1].data[k][0],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = tmap)
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=fraction, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
ttl = plt.title('Near Surface Smoke ' + str(dd1[variable_name1].time[k].data)[:-10],fontsize=fontsize,fontweight = 'bold')
ttl.set_position([.5, 1.05])
cbar.set_label(dd1[variable_name1].units)
ax.set_xlim()
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename,bbox_inches = 'tight',dpi=150)
plt.close()
files = sorted(os.listdir(folder))
images = []
for file in files:
if not file.startswith('.'):
filename = folder + file
images.append(imageio.imread(filename))
kargs = { 'duration': 0.3,'quantizer':2,'fps':5.0}
imageio.mimsave(aniname, images, **kargs)
print ('GIF is saved as {0} under current working directory'.format(aniname))
shutil.rmtree(folder)
make_ani(m,lonmap,latmap,'hrrr_smoke.gif')
Explanation: With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.
End of explanation
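# Optional sketch (not part of the original notebook): preview the saved GIF inline.
# Assumes 'hrrr_smoke.gif' was written to the current working directory by make_ani above.
from IPython.display import Image as GifImage
GifImage(filename='hrrr_smoke.gif')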
latitude_north_cal = 43; longitude_west_cal = -126.
latitude_south_cal = 30.5; longitude_east_cal = -113
m2 = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'h', area_thresh = 0.05,
llcrnrlon=longitude_west_cal, llcrnrlat=latitude_south_cal,
urcrnrlon=longitude_east_cal, urcrnrlat=latitude_north_cal)
lons2,lats2 = np.meshgrid(dd1.longitude.data,dd1.lat.data)
lonmap_cal,latmap_cal = m2(lons2,lats2)
make_ani(m2,lonmap_cal,latmap_cal,'hrrr_smoke_california.gif',smaller_area=True)
Explanation: As we are interested in the California fires right now, it makes sense to create an animation of the California area as well, so people can be prepared when smoke reaches their area. The model's fairly high spatial resolution (3 km) also makes tracking the smoke easier.
End of explanation
os.remove(package_hrrr.local_file_name)
Explanation: Finally, we remove the data file that we downloaded earlier.
End of explanation
if not os.path.exists('acres_burned.csv'):
wget.download('https://www.fire.ca.gov/imapdata/mapdataall.csv',out='acres_burned.csv')
datain = pd.read_csv('acres_burned.csv')
Explanation: Data about Burned Area from Cal Fire
Now we will download a CSV file from the Cal Fire web page and illustrate how many acres were burned each year since 2013.
End of explanation
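# A quick sketch to inspect only the two columns used in the analysis below
# (column names taken from the Cal Fire CSV as it is used later in this notebook).
datain[['incident_dateonly_created', 'incident_acres_burned']].head()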
datain['incident_dateonly_created'] = pd.to_datetime(datain['incident_dateonly_created'])
Explanation: Here we convert the incident_dateonly_created column to datetime, so that it is easier to group the data by year.
End of explanation
datain
Explanation: Below you can see the data from the acres_burned.csv file. It contains information about each individual incident; here we only compute the total acres burned per year.
End of explanation
burned_acres_yearly = datain.resample('1AS', on='incident_dateonly_created')['incident_acres_burned'].sum()
burned_acres_yearly = burned_acres_yearly[burned_acres_yearly.index > datetime.datetime(2012,1,1)]
burned_acres_yearly = burned_acres_yearly.reset_index()
burned_acres_yearly['year'] = pd.DatetimeIndex(burned_acres_yearly.incident_dateonly_created).year
Explanation: Here we compute the yearly sums. For some reason there are many years with very little data, so we filter them out and keep only 2013 onwards. We also reset the index, since we don't want the dates as an index, and add a year column.
End of explanation
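# An equivalent sketch using a plain groupby on the year, as a cross-check of the
# resample('1AS') yearly sums computed above.
yearly_check = datain.groupby(datain['incident_dateonly_created'].dt.year)['incident_acres_burned'].sum()
yearly_check.tail()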
burned_acres_yearly
Explanation: We can see the computed data below.
End of explanation
fig,ax = plt.subplots(figsize=(10,6))
pal = sns.color_palette("YlOrRd_r", len(burned_acres_yearly))
rank = burned_acres_yearly['incident_acres_burned'].argsort().argsort()
sns.barplot(x='year',y='incident_acres_burned',data=burned_acres_yearly,ci=95,ax=ax,palette=np.array(pal[::-1])[rank])
ax.set_xlabel('Year',fontsize=15)
ax.set_ylabel('Burned Area [acres]',fontsize=15)
ax.grid(color='#C3C8CE',alpha=1)
ax.set_axisbelow(True)
ax.spines['bottom'].set_color('#C3C8CE')
ax.spines['top'].set_color('#C3C8CE')
ax.spines['left'].set_color('#C3C8CE')
ax.spines['right'].set_color('#C3C8CE')
ttl = ax.set_title('Burned Area in California',fontsize=20,fontweight = 'bold')
ttl.set_position([.5, 1.05])
ax.tick_params(labelsize=15,length=0)
plt.savefig('acres_burned_cali.png',dpi=300)
Explanation: Finally, we make a bar chart of the data. This time we use seaborn for plotting and, to make the chart easier to read, we apply a colormap to the bars as well.
The image will be saved in the current working directory.
End of explanation |
15,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Network analysis using NetworkX
<div class="alert alert-warning">
<b>This notebook contains advanced exercises that are only applicable to students who wish to deepen their understanding and qualify for bonus marks on this course.</b> You will be able to achieve 100% for this notebook by successfully completing Exercises 1, 2, 3, 5, 6, and 7. An optional, additional exercise (Exercise 4) can be completed to qualify for bonus marks.
</div>
Your completion of the notebook exercises will be graded based on your ability to do the following
Step1: 1. Graph structures using NetworkX
In this notebook, you will continue working with the empirical dataset from the "Friends and Family" study used in Module 2.
1.1 Data preparation
As before, the first step is preparing the data for analysis. In the following, you will load the data into a DataFrame object, filter and retain the records of interest, and select the fields or data columns to use when creating graph objects.
1.1.1 Load the data into a DataFrame
In this data, each record or row is typical of what is available in a CDR, i.e., the actors involved, the starting time of the interaction, the duration of the interaction, who initiated it, and who was the recipient, among other details not included here (such as the geolocation of the sender and receiver).
Step2: 1.1.2 Row filtering
In the data set, there are calls to outsiders that can be seen in each entry where the participant's ID is "NaN". These are not relevant to the current exercise and need to be removed before you proceed. Remove all calls where one of the participant IDs is missing. First, check the number of records in your DataFrame using Pandas's shape DataFrame method.
Step3: Next, review the data using the info() method.
Step4: Next, you will clean the data by removing interactions involving outsiders as discussed above. Removing missing values is very common in data analysis, and Pandas has a convenient method, appropriately named dropna(), designed to automate this cleaning process.
Step5: 1.1.3 Column selection
For the purpose of this study, you should only focus on the social actors involved in the call interaction. Therefore, you can remove all columns not relevant to the network being analyzed.
Step6: Finally, exclude rows where the actors are the same.
Step7: 1.2 Creating graph objects with NetworkX
The call interactions captured above are directed, meaning that edges (u,v) and (v,u) are different.
First, let's try to capture the number of interactions between social actors, irrespective of who initiated the call. This will be done using an undirected graph. You will need to capture the number of interactions between any pair of actors with a link in the graph. Therefore, the graph object that needs to be created is a weighted undirected graph.
Using a Pandas DataFrame object as direct input into NetworkX to create graphs, the following demonstration illustrates how to build an unweighted and undirected graph.
Step8: Review basic information on your graph.
Step9: In the following cells, the neighbors for five of the nodes are saved in Python dict, with the node label as key, and then printed.
Step10: Your original objective is to create a weighted undirected graph for call interactions, with the weights representing the number of interactions between two distinct participants. As illustrated above, you can use the "from_pandas_dataframe" method to build an undirected graph between the pairs of actors, by specifying the graph structure using a parameter to the argument "create_using=". To get the correct weights in the undirected graph, however, you will need to add the weight information separately. Unfortunately, you cannot rely on NetworkX to do this as it cannot be used to control what data the undirected edges get. Below is a description of how to add the necessary weights to the undirected graph.
The first task is to compute the number of interactions between participants. You will use Pandas' "group_by" DataFrame method to achieve this.
Step11: Instantiate a weighted undirected graph, and populate edge information using the edges list from the directed graph.
Step12: Now, iterate through each link from the directed graph, adding the attribute weight (counts) to the corresponding link in the undirected graph.
Step13: Look at some of the edges and their corresponding weights.
Step14: You can verify whether the steps you executed above have worked using the following
Step15: Based on the comparison above, it can be said with confidence that your graph object captures the interactions as expected.
<br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Calculate the number of call interactions between participant sp10-01-52 and participant fa10-01-81 captured in your graph, using any of the above approaches.
Step16: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete
Step17: 1.3.2 Graph visualization using NetworkX's in-built "spring layout"
Step18: 1.3.3 Graph visualization with Pydot rendering
Step19: <br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
Based on the various visualizations explored above, what can you tell about these networks and the types of interactions they capture? Please provide written feedback (a sentence or two) based on your insights of the call log data in the markdown cell below.
Hint
Step20: <br>
<div class="alert alert-info">
<b>Exercise 3 End.</b>
</div>
Exercise complete
Step21: 2.1.2 Logarithmic plot of the degree distribution
In many cases, the histogram distribution is best represented using a log-log plot.
Step22: 2.2 Node centrality
Centrality measures provide relative measures of importance in a network. There are many different centrality measures, and each measures a different type of importance. In the video content, you were introduced to the following centrality measures
Step23: The visual above uses different colors on nodes to highlight their degree centrality. Blue and purple nodes have a low value, and the yellow and green nodes indicate the nodes with the highest centrality values in the network. Although it is possible to add label information on the nodes, it can become too busy and, therefore, make it difficult to read the visual. In the following example, the data is arranged according to the degree centrality measure so that the node with the highest degree centrality measure appears at the top, followed by the node with the next highest degree centrality measure, and so forth (that is, in descending order).
Step24: Note
Step25: Here are some immediate questions to ask
Step26: 2.2.2 Closeness centrality
Step27: The single node with the highest closeness centrality can be distinguished by the yellow color, whereas those with lower values are depicted in a gradation of colors from green to purplish color. To propagate information quickly in the network, one would need to involve nodes with a high closeness centrality measure.
Below, you will identify these nodes explicitly, and store the data in a separate DataFrame.
Step28: 2.2.3 Betweenness centrality
Step29: Betweenness centrality is a measure of the influence a node has over the spread of information through the network. Specifically, these nodes are strategically positioned, and dictate information flow across the network. In the visual above, two nodes (one in yelow color and the other in a blue-green color) are highlighted as the key nodes that govern information flow in the network. You can explicitly identify these nodes by re-arranging the data in order of descending betweenness centrality measure.
Step30: 2.2.4 Eigenvector centrality
The eigenvector centrality measure is based on the idea that a node is important if it is linked to other important nodes. Eigenvector centrality characterizes the "global" (as opposed to "local") prominence of a vertex in a graph. Google’s Pagerank algorithm is a variation of eigenvector centrality.
Step31: Now, identify the nodes with the highest eigenvector centrality.
Step32: 2.3 Comparing the connectedness measures
Based on the above examples, you should now have a sense that the centrality metrics are similar regarding how they rank important nodes, since a limited number of nodes are common across different metrics. However, they also differ because they do not always return the same nodes. In this section, analysis tools, which assist in comparing these metrics, are provided.
Below, you are provided with a plotting function that accepts two objects containing different centrality measures as Python dicts. The metrics you found above are all in the form of Python dict objects, and can be used as is. The function plots a scatter plot of the metrics against each other, so as to compare the centrality measures.
Note
Step33: 2.3.1 Compare betweenness centrality and degree centrality
Use a scatter plot to visually compare the betweenness centrality and degree centrality.
Step34: The distribution of the points in the scatter plot above is unclear due to the effect of the two nodes with very high centrality values. To better understand the distribution of other nodes, remove these nodes from the data set, and redraw the scatter plot.
Step35: 2.3.3 Merge the centrality measures into a single DataFrame
You can also use Pandas to merge all of the centrality measures that you have computed into a single DataFrame. The merge method accepts two DataFrames, and merges on a column that is common to both DataFrames. Below is a repeated call on merge that eventually outputs a single DataFrame.
Step36: The above Pandas functionality is generally quite useful when you are presented with data from different sources, and would like to combine them into a single DataFrame using a common column that is shared by both DataFrames.
Save the merged DataFrame for future use.
Step37: <br>
<div class="alert alert-info">
<b>Exercise 4 [Advanced] Start.</b>
</div>
Instructions
Make a copy of the Graph G, and assign it to variable "g3".
Remove the two nodes with the highest betweenness centrality measures from "g3".
Recompute the degree and eigenvector centrality measure for "g3", and assign the output to the variables "deg_centrality3" and "eig_centrality3", respectively.
Make a scatter plot comparison of the centrality measures, computed in the previous step, using the "centrality_scatter" plot function provided.
Step38: <br>
<div class="alert alert-info">
<b>Exercise 4 [Advanced] End.</b>
</div>
Exercise complete
Step39: 2.5 Clustering coefficient
The clustering coefficient is used to measure the extent to which nodes tend to cluster together. This measure can be understood as the "friends of my friends are friends" measure. In most real-world networks, such as social networks, nodes tend to create tight-knit groups, characterized by a relatively high density of connections between nodes. This likelihood tends to be greater than the average probability of a tie randomly established between two nodes. A high clustering coefficient for a network is an indication of a small world, which is a phenomenon in which two strangers often find that they have a friend in common. Human social networks, such as on Facebook, Twitter, or LinkedIn, typically exhibit the feature that, in any cluster of friends, each friend is also connected to other friends.
Two definitions of the clustering coefficient of a graph are commonly used
Step40: Watts and Strogatz (1998) proposed another clustering definition, which is referred to as the local clustering coefficient. More detail can be found in the caption of Figure 2 of their freely-available paper on the collective dynamics of small-world networks. The local clustering coefficient gives an indication of the embeddedness of single nodes or how concentrated the neighborhood of that node is. It is given by the ratio of the number of actual edges there are between neighbors to the number of potential edges there are between neighbors. The clustering coefficient of a network is then given as the average of the vertex clustering coefficients.
The intuition behind the local clustering coefficient is illustrated below, using the following network.
Step41: Choosing Node 5 as the node of interest, let's calculate its clustering coefficient (i.e., how concentrated its neighborhood is).
Step42: NetworkX contains a method for calculating the clustering coefficient for all the nodes in a graph. You can specify a list of nodes as an argument when calling the method on a graph.
Note
Step43: Do all nodes in a network or graph have the same local clustering coefficient? Call the method without specifying the "nodes" argument.
Step44: When applied to a single node, the clustering coefficient is a measure of how complete its neighborhood is. When applied to an entire network, it is the average clustering coefficient over all of the nodes in the network. Again, you can also compute this using NetworkX.
Step46: With this background, you can now calculate the average clustering coefficient of the call data network from above.
Note
Step47: <br>
<div class="alert alert-info">
<b>Exercise 5 Start.</b>
</div>
Instructions
In the previous example, nodes with lower degrees were removed from the graph, and the average clustering coefficient recomputed.
Describe, in one sentence, the effect on the average clustering coefficient as these nodes were removed?
Your markdown answer.
<br>
<div class="alert alert-info">
<b>Exercise 5 End.</b>
</div>
Exercise complete
Step48: Now, extend the table structure, from above, by adding two Boolean type columns that give the results of comparing "L.actual" to "L.random," and "C.actual" to "C.random".
Step49: Finally, the relative values "C.actual" to "C.random" can be compared.
Step50: <br>
<div class="alert alert-info">
<b>Exercise 7 Start.</b>
</div>
Instructions
Based on the above computations, and the definition of a small-world network in the introduction to Section 3 of this notebook, list which of the three networks (film actors, power grid, and C. elegans) exhibit small-world phenomena.
Hint | Python Code:
# Load the relevant libraries to your notebook.
import pandas as pd # Processing csv files and manipulating the DataFrame.
import networkx as nx # Graph-like object representation and manipulation module.
import matplotlib.pylab as plt # Plotting and data visualization module.
# This is used for basic graph visualization.
import numpy as np
from networkx.drawing.nx_agraph import graphviz_layout
import random
from IPython.display import Image, display
# Set global parameters for plotting.
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
def pydot(G):
pdot = nx.drawing.nx_pydot.to_pydot(G)
display(Image(pdot.create_png()))
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Network analysis using NetworkX
<div class="alert alert-warning">
<b>This notebook contains advanced exercises that are only applicable to students who wish to deepen their understanding and qualify for bonus marks on this course.</b> You will be able to achieve 100% for this notebook by successfully completing Exercises 1, 2, 3, 5, 6, and 7. An optional, additional exercise (Exercise 4) can be completed to qualify for bonus marks.
</div>
Your completion of the notebook exercises will be graded based on your ability to do the following:
Understand: Do your pseudo-code and comments show evidence that you recall and understand technical concepts?
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Analyze: Are you able to pick the relevant method or library to resolve specific stated questions?
Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data?
Notebook objectives
By the end of this notebook, you will be expected to:
Prepare a data set for graph analysis, using NetworkX;
Evaluate and compare structural properties of a graph object;
Interpret what information the structural properties provide in the physical world; and
Develop a basic understanding of small-world networks.
List of exercises:
Exercise 1: Compute the number of call interactions between a pair of nodes.
Exercise 2: Evaluate structure qualitatively in a graph based on visualization.
Exercise 3: Create a graph object using the SMS data set.
Exercise 4 [Advanced]: Compare the centrality structural properties evaluated on a graph.
Exercise 5: Describe the effect on the average clustering coefficient when nodes of lower degree are removed.
Exercise 6: List the two criteria of a small-world network.
Exercise 7: Identify small-world networks, given the values for the characteristic path length and clustering coefficient.
Notebook introduction
The use of phone logs to infer relationships between volume of communication and other parameters has been an area of major research interest. In his seminal paper, which was the first application of phone logs, George Kingsley Zipf (1949) investigated the influence of distance on communication. Many studies have since followed. Big data is characterized by significant increases in structured and unstructured data generated by mobile phones that are sampled and captured at high velocities. Its emergence, and the availability of computer processing technologies that are able to store and process these data sets efficiently, has made it possible to expand these studies in order to improve our understanding of human behavior with unprecedented resolution. Mobile phone data allows the inference of real social networks using call detail records, or CDRs (i.e., phone calls, short message service (SMS) and multimedia message (MMS) communications). These records are combined with GPS and WiFi datasets, browsing habits, application logs, and tower data to reveal a superposition of several social actors.
According to Blondel et al. (2015):
The mobile nature of a mobile phone brings two advantages: first, the temporal patterns of communications [are] reflected in great detail due to the fact that the owner of the device usually carries the device with them and therefore the possibility of receiving the call exists in almost all cases, and second, the positioning data of a mobile phone allows tracking the displacements of its owner.
Unlike self-reported surveys – which are often subjective, limited to a very small subset of the population, and have been the only avenue used to gather data in the past – mobile phone CDRs contain information on verifiable communications between millions of people at a time. Further enrichment from geolocation data, which invariably is also collected alongside CDRs, as well as other external data that is available for the target segment (typically demographics), makes mobile phone CDRs an extremely rich and informative source of data for scientists and analysts.
These interactions via mobile phones can be represented by a large network where nodes represent individuals, and links are drawn between individuals that have had a phone call, or exchanged messages or other media.
The study of the structure of such networks provides useful insights into their organization, and can assist in improving communication infrastructure, understanding human behavior, traffic planning, and marketing initiatives, among others. According to Gautier Krings (2012), these applications are informed by the extraction and analysis of different kinds of information from large networks, including the following:
Associating every node with geographical coordinates. This makes it possible to study how geography influences the creation of links. More specifically, the intensity of communication between nodes decreases as a power of the geographical distance that separates them.
Studying how links in networks change over time (i.e., dynamical networks). In these networks, new nodes enter or leave the network and the strength of their connections rise and wane during the observation period. Of particular interest is the influence of time scales on the emergence of different structural properties of dynamical networks.
Detecting communities in networks. Communities are groups of nodes that are densely connected to each other.
Load libraries and set global parameters for Matplotlib
End of explanation
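# A toy sketch (synthetic data, not from the study) of how call records map to a graph:
# nodes are participants, and an edge links two participants who have called each other.
toy_graph = nx.Graph()
toy_graph.add_edges_from([('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'D')])
print(toy_graph.number_of_nodes(), toy_graph.number_of_edges())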
# Read the CallLog.csv file, print the number of records loaded as well as the first 5 rows.
calls = pd.read_csv('../data/CallLog.csv')
print('Loaded {0} rows of call log.'.format(len(calls)))
calls.head()
Explanation: 1. Graph structures using NetworkX
In this notebook, you will continue working with the empirical dataset from the "Friends and Family" study used in Module 2.
1.1 Data preparation
As before, the first step is preparing the data for analysis. In the following, you will load the data into a DataFrame object, filter and retain the records of interest, and select the fields or data columns to use when creating graph objects.
1.1.1 Load the data into a DataFrame
In this data, each record or row is typical of what is available in a CDR, i.e., the actors involved, the starting time of the interaction, the duration of the interaction, who initiated it, and who was the recipient, among other details not included here (such as the geolocation of the sender and receiver).
End of explanation
# Initial number of records.
calls.shape[0]
Explanation: 1.1.2 Row filtering
In the data set, there are calls to outsiders that can be seen in each entry where the participant's ID is "NaN". These are not relevant to the current exercise and need to be removed before you proceed. Remove all calls where one of the participant IDs is missing. First, check the number of records in your DataFrame using the DataFrame's shape attribute.
End of explanation
calls.info()
Explanation: Next, review the data using the info() method.
End of explanation
# Drop rows with NaN in either of the participant ID columns.
calls = calls.dropna(subset = ['participantID.A', 'participantID.B'])
print('{} rows remaining after dropping missing values from selected columns.'.format(len(calls)))
calls.head(n=5)
Explanation: Next, you will clean the data by removing interactions involving outsiders as discussed above. Removing missing values is very common in data analysis, and Pandas has a convenient method, appropriately named dropna(), designed to automate this cleaning process.
End of explanation
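# A quick sanity-check sketch: confirm that no missing participant IDs remain after dropna().
calls[['participantID.A', 'participantID.B']].isnull().sum()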
# Create a new object containing only the columns of interest.
interactions = calls[['participantID.A', 'participantID.B']]
Explanation: 1.1.3 Column selection
For the purpose of this study, you should only focus on the social actors involved in the call interaction. Therefore, you can remove all columns not relevant to the network being analyzed.
End of explanation
# Get a list of rows with different participants.
row_with_different_participants = interactions['participantID.A'] != interactions['participantID.B']
# Update "interactions" to contain only the rows identified.
interactions = interactions.loc[row_with_different_participants,:]
interactions.head()
Explanation: Finally, exclude rows where the actors are the same.
End of explanation
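# A sanity-check sketch: confirm that no self-interactions remain after the filtering above.
(interactions['participantID.A'] == interactions['participantID.B']).sum()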
# Create an unweighted undirected graph using NetworkX's from_pandas_edgelist method.
# The column participantID.A is used as the source and participantID.B as the target.
G = nx.from_pandas_edgelist(interactions,
source='participantID.A',
target='participantID.B',
create_using=nx.Graph())
Explanation: 1.2 Creating graph objects with NetworkX
The call interactions captured above are directed, meaning that edges (u,v) and (v,u) are different.
First, let's try to capture the number of interactions between social actors, irrespective of who initiated the call. This will be done using an undirected graph. You will need to capture the number of interactions between any pair of actors with a link in the graph. Therefore, the graph object that needs to be created is a weighted undirected graph.
Using a Pandas DataFrame object as direct input into NetworkX to create graphs, the following demonstration illustrates how to build an unweighted and undirected graph.
End of explanation
# Print the number of nodes in our network.
print('The undirected graph object G has {0} nodes.'.format(G.number_of_nodes()))
# Print the number of edges in our network.
print('The undirected graph object G has {0} edges.'.format(G.number_of_edges()))
Explanation: Review basic information on your graph.
End of explanation
# Declare a variable for number of nodes to get neighbors of.
max_nodes = 5
# Variable initialization.
count = 0
ndict = {}
# Loop through G and get each node's neighbours, store in ndict. Do this for a maximum of 'max_nodes' nodes.
for node in list(G.nodes()):
ndict[node] = tuple(G.neighbors(node))
count = count + 1
    if count >= max_nodes:
break
print(ndict)
# Print only the first item in the dict.
print([list(ndict)[0], ndict[list(ndict)[0]]])
Explanation: In the following cells, the neighbors for five of the nodes are saved in Python dict, with the node label as key, and then printed.
End of explanation
# Get the count of interactions between participants and display the top 5 rows.
grp_interactions = pd.DataFrame(interactions.groupby(['participantID.A', 'participantID.B']).size(),
columns=['counts']).reset_index()
grp_interactions.head(5)
# Create a directed graph with an edge_attribute labeled counts.
g = nx.from_pandas_edgelist(grp_interactions,
source='participantID.A',
target='participantID.B',
edge_attr='counts',
create_using=nx.DiGraph())
Explanation: Your original objective is to create a weighted undirected graph for call interactions, with the weights representing the number of interactions between two distinct participants. As illustrated above, you can use the "from_pandas_edgelist" method to build an undirected graph between the pairs of actors, by specifying the graph structure using a parameter to the argument "create_using=". To get the correct weights in the undirected graph, however, you will need to add the weight information separately. Unfortunately, you cannot rely on NetworkX to do this as it cannot be used to control what data the undirected edges get. Below is a description of how to add the necessary weights to the undirected graph.
The first task is to compute the number of interactions between participants. You will use Pandas' "groupby" DataFrame method to achieve this.
End of explanation
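# An alternative sketch for the same weights computed purely in pandas: sort each pair so that
# (A, B) and (B, A) collapse to one key, then count. This should match the undirected weights added below.
sorted_pairs = interactions.apply(lambda r: tuple(sorted([r['participantID.A'], r['participantID.B']])), axis=1)
sorted_pairs.value_counts().head()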
# Set all the weights to 0 at this stage. We will add the correct weight information in the next step.
G = nx.Graph()
G.add_edges_from(g.edges(), counts=0)
Explanation: Instantiate a weighted undirected graph, and populate edge information using the edges list from the directed graph.
End of explanation
for u, v, d in g.edges(data=True):
G[u][v]['counts'] += d['counts']
Explanation: Now, iterate through each link from the directed graph, adding the attribute weight (counts) to the corresponding link in the undirected graph.
End of explanation
# Print a sample of the edges, with corresponding attribute data.
max_number_of_edges = 5
count = 0
for n1,n2,attr in G.edges(data=True): # unpacking
print(n1,n2,attr)
count = count + 1
if count > max_number_of_edges:
break
Explanation: Look at some of the edges and their corresponding weights.
End of explanation
# Verify our attribute data is correct using a selected (u,v) pair from the data.
u = 'fa10-01-77'
v = 'fa10-01-78'
print('Number of undirected call interactions between {0} and {1} is {2}.'.format(u,
v,
G.get_edge_data(v,u)['counts']))
# Compare our data set to the interactions data set.
is_uv_pair = ((interactions['participantID.A'] == u) & (interactions['participantID.B'] == v))
is_vu_pair = ((interactions['participantID.A'] == v) & (interactions['participantID.B'] == u))
print('Number of undirected call interactions between {0} and {1} is {2}'.format(u,
v,
interactions[is_uv_pair | is_vu_pair].shape[0]))
Explanation: You can verify whether the steps you executed above have worked using the following:
End of explanation
# Your answer here.
Explanation: Based on the comparison above, it can be said with confidence that your graph object captures the interactions as expected.
<br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Calculate the number of call interactions between participant sp10-01-52 and participant fa10-01-81 captured in your graph, using any of the above approaches.
End of explanation
pos = graphviz_layout(G, prog='dot') # you can also try using the "neato" engine
nx.draw_networkx(G, pos=pos, with_labels=False)
_ = plt.axis('off')
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
1.3 Graph visualization
The next step is to visualize the graph object – a topic that you briefly touched on in Notebook 1. NetworkX is not primarily a graph drawing package, but provides basic drawing capabilities using Matplotlib. More advanced graph visualization packages can be used. However, these are outside of the scope of this course.
NetworkX documentation (2015) states:
Proper graph visualization is hard, and we highly recommend that people visualize their graphs with tools dedicated to that task. Notable examples of dedicated and fully-featured graph visualization tools are Cytoscape, Gephi, Graphviz and, for LaTeX typesetting, PGF/TikZ.
A graph is an abstract mathematical object without a specific representation in the Cartesian coordinate space, and graph visualization is therefore not a well-defined problem with a unique solution. Depending on which structures in the graph object are of interest, several layout algorithms exist that can be used to optimize node positioning for display visualization. Whenever you want to visualize a graph, you have to find mapping from vertices to Cartesian coordinates first, preferably in a way that is aesthetically pleasing. A separate branch of graph theory, namely graph drawing, attempts to solve this problem via several graph layout algorithms.
You will use the interface provided by Graphviz for node positioning in most of your visualization in this course, because considering other possibilities may distract from the core objectives. Two node positioning algorithms can be accessed using the Graphviz interface provided by NetworkX. They are the following:
- dot: "hierarchical" or layered drawings of directed graphs. This is the default to use if edges have directionality. The dot algorithm produces a ranked layout of a graph honoring edge directions. It is particularly appropriate for displaying hierarchies or directed acyclic graphs.
- neato: "spring model" layouts. This is the default to use if the graph is not too large (about 100 nodes), and you don't know anything else about it. Neato attempts to minimize a global energy function, which is equivalent to statistical multidimensional scaling. An ideal spring is placed between every pair of nodes, such that its length is set to the shortest path distance between the endpoints. The springs push the nodes so their geometric distance in the layout approximates their path distance in the graph.
Below is a visual display of your weighted undirected call graph, using different visualization approaches.
1.3.1 Graphviz layout using the "dot" engine
End of explanation
layout = nx.spring_layout(G)
nx.draw_networkx(G, pos=layout, node_color='green', with_labels=False)
_ = plt.axis('off')
Explanation: 1.3.2 Graph visualization using NetworkX's in-built "spring layout"
End of explanation
pydot(G)
Explanation: 1.3.3 Graph visualization with Pydot rendering
End of explanation
# Your answer here.
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
Based on the various visualizations explored above, what can you tell about these networks and the types of interactions they capture? Please provide written feedback (a sentence or two) based on your insights of the call log data in the markdown cell below.
Hint:
- In your answer, indicate if there appears to be some structure in the graph, or if the connections between nodes appear random (i.e., do some nodes have more links than others)? Do the participants cluster into identifiable communities or not?
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 2 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
<br>
<div class="alert alert-info">
<b>Exercise 3 Start.</b>
</div>
Instructions
You will now need to reproduce the steps above for SMS records.
Load the file "SMSLog.csv" from the data folder in your home directory, into a variable "sms".
Create a weighted undirected graph using the number of interactions between participants as weights.
Assign the graph to variable "H" (do not overwrite "G" as you will still use it below).
Ignore all interactions where one of the parties is missing or unknown (i.e. "NaN").
Disregard any self-interactions.
Display a visualization of the obtained graph network, using the spring_layout algorithm for node positioning.
Hints:
Make sure that you use different variables when loading the datasets, and remember that you can always insert additional cells in the notebook, should you prefer to break up steps or perform additional investigations.
It is good practice to make clear comments (start the line with #) in your code when sharing your work or if you need to review it at a later stage. Make sure that you add comments to enable your tutor to understand your thinking process.
The number of cells below are only indicative. You can insert additional cells as required.
End of explanation
# Extract the degree values for all the nodes of G
degrees = []
for (nd,val) in G.degree():
degrees.append(val)
# Plot the degree distribution histogram.
out = plt.hist(degrees, bins=50)
plt.title("Degree Histogram")
plt.ylabel("Frequency Count")
plt.xlabel("Degree")
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 3 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
2. Computing and visualizing structural properties of networks
Physical networks exhibit different behaviors. Since graph objects are abstractions of these behaviors, you might expect these graphs to be different. To characterize these differences, you need more than visualizations that are pleasing to the eye. To this end, a number of characteristics have been developed to characterize the structural properties of graphs. These properties help you understand and characterize physical networks with more mathematical rigor. You will now explore characteristics discussed in the video content.
2.1 Degree distribution
The degree of a node in a network is the number of connections it has to other nodes, and the degree distribution (also referred to as the neighbor distribution) is the probability distribution of these degrees over the whole network. Specifically, the degree distribution $p(k)$ is the probability that a randomly-chosen node has $k$ connections (or neighbors).
2.1.1 Degree distribution histogram
A degree distribution histogram is a plot of the frequency of occurrence of the number of connections or neighbors, based on the relationships (edges) between entities (nodes), as represented by a graph object.
Continuing with the call data (graph G), from the preceding sections, you will now compute and plot the degree distribution of the data.
End of explanation
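# A sketch of the empirical degree distribution p(k): the fraction of nodes with degree k.
from collections import Counter
degree_counts = Counter(degrees)
p_k = {k: v / float(len(degrees)) for k, v in sorted(degree_counts.items())}
list(p_k.items())[:5]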
# Logarithmic plot of the degree distribution.
values = sorted(set(degrees))
hist = [list(degrees).count(x) for x in values]
out = plt.loglog(values, hist, marker='o')
plt.title("Degree Histogram")
plt.ylabel("Log(Frequency Count)")
plt.xlabel("Log(Degree)")
Explanation: 2.1.2 Logarithmic plot of the degree distribution
In many cases, the histogram distribution is best represented using a log-log plot.
End of explanation
# Plot degree centrality.
call_degree_centrality = nx.degree_centrality(G)
colors =[call_degree_centrality[node] for node in G.nodes()]
pos = graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos, node_color=colors, node_size=300, with_labels=False)
_ = plt.axis('off')
Explanation: 2.2 Node centrality
Centrality measures provide relative measures of importance in a network. There are many different centrality measures, and each measures a different type of importance. In the video content, you were introduced to the following centrality measures:
Degree centrality: Number of connections. An important node is involved in a large number of interactions. For directed graphs, the in-degree and out-degree concepts are used. The in-degree of a Node v is the number of edges with Vertex v as the terminal vertex, and the out-degree of v is the number of edges with v as the initial vertex.
Closeness centrality: Average length of the shortest paths between a specific node and all other nodes in the graph. An important node is typically close to, and can communicate quickly with, the other nodes in the network.
Betweenness centrality: Measures the extent to which a particular vertex lies on the path between all other vertices. An important node will lie on a high proportion of paths between other nodes in the network.
Eigenvector centrality: An important node is connected to important neighbors.
The following schematic is a demonstration and comparison of the first three of the centrality metrics discussed above. In this figure, Node X always has the highest centrality measure, although it measures different behaviors in each case.
NetworkX provides the functionality to evaluate these metrics for graph objects, which will be described in the next section.
2.2.1 Degree centrality
End of explanation
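# A sketch confirming the normalisation used by nx.degree_centrality: the node degree divided by (n - 1).
n = G.number_of_nodes()
manual_deg_centrality = {node: deg / float(n - 1) for node, deg in G.degree()}
print(manual_deg_centrality[list(G.nodes())[0]], call_degree_centrality[list(G.nodes())[0]])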
# Arrange in descending order of centrality and return the result as a tuple, i.e. (participant_id, deg_centrality).
t_call_deg_centrality_sorted = sorted(call_degree_centrality.items(), key=lambda kv: kv[1], reverse=True)
# Convert tuple to pandas dataframe.
df_call_deg_centrality_sorted = pd.DataFrame([[x,y] for (x,y) in t_call_deg_centrality_sorted],
columns=['participantID', 'deg.centrality'])
Explanation: The visual above uses different colors on nodes to highlight their degree centrality. Blue and purple nodes have a low value, and the yellow and green nodes indicate the nodes with the highest centrality values in the network. Although it is possible to add label information on the nodes, it can become too busy and, therefore, make it difficult to read the visual. In the following example, the data is arranged according to the degree centrality measure so that the node with the highest degree centrality measure appears at the top, followed by the node with the next highest degree centrality measure, and so forth (that is, in descending order).
End of explanation
# Top 5 participants with the highest degree centrality measure.
df_call_deg_centrality_sorted.head()
Explanation: Note:
In NetworkX, the degree centrality values are normalized by dividing by the maximum possible degree in a simple graph ($n-1$), where $n$ is the number of nodes in the graph. To get integer values, when required, the computed degree centrality values are multiplied by ($n-1$).
You can print the nodes with the highest degree centrality measure using "head()".
End of explanation
# Number of unique actors associated with each of the five participants with highest degree centrality measure.
for node in df_call_deg_centrality_sorted.head().participantID:
print('Node: {0}, \t num_neighbors: {1}'.format(node, len(list(G.neighbors(node)))))
# Total call interactions are associated with each of these five participants with highest degree centrality measure.
for node in df_call_deg_centrality_sorted.head().participantID:
outgoing_call_interactions = interactions['participantID.A']==node
incoming_call_interactions = interactions['participantID.B']==node
all_call_int = interactions[outgoing_call_interactions | incoming_call_interactions]
print('Node: {0}, \t total number of calls: {1}'.format(node, all_call_int.shape[0]))
Explanation: Here are some immediate questions to ask:
1. How many unique actors are associated with each of the five participants with the highest degree centrality measure?
2. How many total call interactions are associated with each of those five participants?
These questions are answered below.
End of explanation
# Plot closeness centrality.
call_closeness_centrality = nx.closeness_centrality(G)
colors = [call_closeness_centrality[node] for node in G.nodes()]
pos = graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos=pos,node_color=colors, with_labels=False)
_ = plt.axis('off')
Explanation: 2.2.2 Closeness centrality
End of explanation
# Arrange participants according to closeness centrality measure, in descending order.
# Return the result as a tuple, i.e. (participant_id, cl_centrality).
t_call_clo_centrality_sorted = sorted(call_closeness_centrality.items(), key=lambda kv: kv[1], reverse=True)
# Convert tuple to pandas dataframe.
df_call_clo_centrality_sorted = pd.DataFrame([[x,y] for (x,y) in t_call_clo_centrality_sorted],
columns=['participantID', 'clo.centrality'])
# Top 5 participants with the highest closeness centrality measure.
df_call_clo_centrality_sorted.head()
Explanation: The single node with the highest closeness centrality can be distinguished by the yellow color, whereas those with lower values are depicted in a gradation of colors from green to purplish color. To propagate information quickly in the network, one would need to involve nodes with a high closeness centrality measure.
Below, you will identify these nodes explicitly, and store the data in a separate DataFrame.
End of explanation
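# A sketch relating closeness centrality to shortest paths for the top-ranked node
# (assumes the call graph is connected, as the diameter computation later in this notebook suggests).
top_clo_node = df_call_clo_centrality_sorted.iloc[0]['participantID']
dist = nx.shortest_path_length(G, source=top_clo_node)
print((len(dist) - 1) / float(sum(dist.values())), call_closeness_centrality[top_clo_node])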
# Plot betweenness centrality.
call_betweenness_centrality = nx.betweenness_centrality(G)
colors =[call_betweenness_centrality[node] for node in G.nodes()]
pos = graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos=pos, node_color=colors, with_labels=False)
_ = plt.axis('off')
Explanation: 2.2.3 Betweenness centrality
End of explanation
# Arrange participants according to betweenness centrality measure, in descending order.
# Return the result as a tuple, i.e. (participant_id, btn_centrality).
t_call_btn_centrality_sorted = sorted(call_betweenness_centrality.items(), key=lambda kv: kv[1], reverse=True)
# Convert tuple to a Pandas DataFrame.
df_call_btn_centrality_sorted = pd.DataFrame([[x,y] for (x,y) in t_call_btn_centrality_sorted],
columns=['participantID', 'btn.centrality'])
# Top 5 participants with the highest betweenness centrality measure.
df_call_btn_centrality_sorted.head()
Explanation: Betweenness centrality is a measure of the influence a node has over the spread of information through the network. Specifically, these nodes are strategically positioned, and dictate information flow across the network. In the visual above, two nodes (one in yellow and the other in a blue-green color) are highlighted as the key nodes that govern information flow in the network. You can explicitly identify these nodes by re-arranging the data in order of descending betweenness centrality measure.
End of explanation
# Plot eigenvector centrality.
call_eigenvector_centrality = nx.eigenvector_centrality(G)
colors = [call_eigenvector_centrality[node] for node in G.nodes()]
pos = graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos=pos, node_color=colors,with_labels=False)
_ = plt.axis('off')
Explanation: 2.2.4 Eigenvector centrality
The eigenvector centrality measure is based on the idea that a node is important if it is linked to other important nodes. Eigenvector centrality characterizes the "global" (as opposed to "local") prominence of a vertex in a graph. Google’s Pagerank algorithm is a variation of eigenvector centrality.
End of explanation
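# A related sketch: PageRank (a variant of eigenvector centrality) on the same graph,
# using the NetworkX default damping factor.
call_pagerank = nx.pagerank(G)
sorted(call_pagerank, key=call_pagerank.get, reverse=True)[:5]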
# Arrange participants according to eigenvector centrality measure, in descending order.
# Return the result as a tuple, i.e. (participant_id, eig_centrality).
t_call_eig_centrality_sorted = sorted(call_eigenvector_centrality.items(), key=lambda kv: kv[1], reverse=True)
# Convert tuple to pandas dataframe.
df_call_eig_centrality_sorted = pd.DataFrame([[x,y] for (x,y) in t_call_eig_centrality_sorted],
columns=['participantID', 'eig.centrality'])
# Top 5 participants with the highest eigenvector centrality measure.
df_call_eig_centrality_sorted.head()
Explanation: Now, identify the nodes with the highest eigenvector centrality.
End of explanation
# Execute this cell to define a function that produces a scatter plot.
def centrality_scatter(dict1,dict2,path="",ylab="",xlab="",title="",line=False):
'''
The function accepts two dicts containing centrality measures and outputs a scatter plot
showing the relationship between the two centrality measures
'''
# Create figure and drawing axis.
fig = plt.figure(figsize=(7,7))
# Set up figure and axis.
fig, ax1 = plt.subplots(figsize=(8,8))
# Create items and extract centralities.
items1 = sorted(list(dict1.items()), key=lambda kv: kv[1], reverse=True)
items2 = sorted(list(dict2.items()), key=lambda kv: kv[1], reverse=True)
xdata=[b for a,b in items1]
ydata=[b for a,b in items2]
ax1.scatter(xdata, ydata)
if line:
# Use NumPy to calculate the best fit.
slope, yint = np.polyfit(xdata,ydata,1)
xline = plt.xticks()[0]
yline = [slope*x+yint for x in xline]
ax1.plot(xline,yline,ls='--',color='b')
# Set new x- and y-axis limits.
plt.xlim((0.0,max(xdata)+(.15*max(xdata))))
plt.ylim((0.0,max(ydata)+(.15*max(ydata))))
# Add labels.
ax1.set_title(title)
ax1.set_xlabel(xlab)
ax1.set_ylabel(ylab)
Explanation: 2.3 Comparing the connectedness measures
Based on the above examples, you should now have a sense that the centrality metrics are similar regarding how they rank important nodes, since a limited number of nodes are common across different metrics. However, they also differ because they do not always return the same nodes. In this section, analysis tools, which assist in comparing these metrics, are provided.
Below, you are provided with a plotting function that accepts two objects containing different centrality measures as Python dicts. The metrics you found above are all in the form of Python dict objects, and can be used as is. The function plots a scatter plot of the metrics against each other, so as to compare the centrality measures.
Note:
You do not need to understand the function and its syntax. It is included for advanced students, and to make it possible to demonstrate the concepts in this section to you more easily. All you need to do is execute the cell below.
End of explanation
# Let us compare call_betweenness_centrality, call_degree_centrality.
centrality_scatter(call_betweenness_centrality, call_degree_centrality, xlab='betweenness centrality', ylab='degree centrality', line=True)
Explanation: 2.3.1 Compare betweenness centrality and degree centrality
Use a scatter plot to visually compare the betweenness centrality and degree centrality.
End of explanation
# Make a (deep) copy of the graph; we work on the copy.
g2 = G.copy()
# Remove the 2 nodes with the highest centrality measures as discussed above.
g2.remove_node('fa10-01-04')
g2.remove_node('fa10-01-13')
# Recompute the centrality measures.
betweenness2 = nx.betweenness_centrality(g2)
centrality2 = nx.degree_centrality(g2)
# Scatter plot comparison of the recomputed measures.
centrality_scatter(betweenness2, centrality2,
xlab='betweenness centrality',ylab='degree centrality',line=True)
Explanation: The distribution of the points in the scatter plot above is unclear due to the effect of the two nodes with very high centrality values. To better understand the distribution of other nodes, remove these nodes from the data set, and redraw the scatter plot.
End of explanation
m1 = pd.merge(df_call_btn_centrality_sorted, df_call_clo_centrality_sorted)
m2 = pd.merge(m1, df_call_deg_centrality_sorted)
df_merged = pd.merge(m2, df_call_eig_centrality_sorted)
df_merged.head()
Explanation: 2.3.3 Merge the centrality measures into a single DataFrame
You can also use Pandas to merge all of the centrality measures that you have computed into a single DataFrame. The merge method accepts two DataFrames, and merges on a column that is common to both DataFrames. Below is a repeated call on merge that eventually outputs a single DataFrame.
End of explanation
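# An equivalent sketch that merges the four centrality DataFrames in one pass with functools.reduce,
# joining on the shared 'participantID' column.
from functools import reduce
centrality_frames = [df_call_btn_centrality_sorted, df_call_clo_centrality_sorted,
                     df_call_deg_centrality_sorted, df_call_eig_centrality_sorted]
df_merged_alt = reduce(lambda left, right: pd.merge(left, right, on='participantID'), centrality_frames)
df_merged_alt.head()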
df_merged.to_csv('centrality.csv', index=False)
Explanation: The above Pandas functionality is generally quite useful when you are presented with data from different sources, and would like to combine them into a single DataFrame using a common column that is shared by both DataFrames.
Save the merged DataFrame for future use.
End of explanation
# Your answer here.
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 4 [Advanced] Start.</b>
</div>
Instructions
Make a copy of the Graph G, and assign it to variable "g3".
Remove the two nodes with the highest betweenness centrality measures from "g3".
Recompute the degree and eigenvector centrality measure for "g3", and assign the output to the variables "deg_centrality3" and "eig_centrality3", respectively.
Make a scatter plot comparison of the centrality measures, computed in the previous step, using the "centrality_scatter" plot function provided.
End of explanation
print('Diameter {}'.format(nx.diameter(G)))
print('Average path length {:0.2f}'.format(nx.average_shortest_path_length(G)))
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 4 [Advanced] End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
2.4 Average path length and network diameter
The size of a network can be calculated using two measures. The first measure of network size is simply the average distance within the network, which is equal to the average of the distances between all possible pairs of vertices. The average path length shows, on average, the number of steps it takes to get from one member of the network to another.
The second, and alternative, network size measure is the diameter, and it is defined as the shortest distance between the two most distant nodes in the network. It is representative of the linear size of a network. To compute the diameter, one looks at all of the shortest path distance values between connected edges in the network.
Both are measures of the size of the network, and can be understood as the distance between the nodes. Unlike the connectedness centrality measures, which are node-focused, average path length and diameter are global metrics on the structure of the graph. Along with degree distribution and clustering coefficient, average path length provides a robust measure of network topology.
End of explanation
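# A sketch confirming that the diameter equals the maximum eccentricity
# (the largest shortest-path distance measured from any single node).
eccentricities = nx.eccentricity(G)
print(max(eccentricities.values()), nx.diameter(G))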
print('The global clustering for our graph G is {0:.4f}'.format(nx.transitivity(G)))
Explanation: 2.5 Clustering coefficient
The clustering coefficient is used to measure the extent to which nodes tend to cluster together. This measure can be understood as the "friends of my friends are friends" measure. In most real-world networks, such as social networks, nodes tend to create tight-knit groups, characterized by a relatively high density of connections between nodes. This likelihood tends to be greater than the average probability of a tie randomly established between two nodes. A high clustering coefficient for a network is an indication of a small world, which is a phenomenon in which two strangers often find that they have a friend in common. Human social networks, such as on Facebook, Twitter, or LinkedIn, typically exhibit the feature that, in any cluster of friends, each friend is also connected to other friends.
Two definitions of the clustering coefficient of a graph are commonly used:
1. Global clustering
2. Average local clustering
The global clustering coefficient, or transitivity, was discussed in the video content, and is a measure designed to give an overall indication of the clustering in the network. It is based on the concept of triplets of nodes. A triplet is three nodes that are connected by either two (open triplet) or three (closed triplet) undirected ties. A triangle consists of three closed triplets, one centered on each of the nodes. The global clustering coefficient is the number of closed triplets (or 3 $\times$ triangles) over the total number of triplets (both open and closed). In NetworkX, this measure can be obtained using a method called "transitivity", as demonstrated below.
End of explanation
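# A sketch of the global clustering coefficient computed from first principles:
# 3 x (number of triangles) divided by the number of connected triples.
n_triangles = sum(nx.triangles(G).values()) / 3.0
n_triples = sum(d * (d - 1) / 2.0 for _, d in G.degree())
print(3.0 * n_triangles / n_triples, nx.transitivity(G))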
# Create a graph object from a provided edges list and visualize the graph.
e = [(1,2), (2,3), (3,4), (4,5), (4,6), (5,6), (5,7), (6,7)]
g = nx.Graph()
g.add_edges_from(e)
pos = graphviz_layout(g, prog='dot')
nx.draw_networkx(g, pos=pos, with_labels=True, node_size=500)
_ = plt.axis('off')
Explanation: Watts and Strogatz (1998) proposed another clustering definition, which is referred to as the local clustering coefficient. More detail can be found in the caption of Figure 2 of their freely-available paper on the collective dynamics of small-world networks. The local clustering coefficient gives an indication of the embeddedness of single nodes or how concentrated the neighborhood of that node is. It is given by the ratio of the number of actual edges there are between neighbors to the number of potential edges there are between neighbors. The clustering coefficient of a network is then given as the average of the vertex clustering coefficients.
The intuition behind the local clustering coefficient is illustrated below, using the following network.
End of explanation
# Number of actual edges there are between neighbors of 5.
actual_edges = len([(4,6), (6,7)])
# Total possible edges between neighbors of 5.
total_possible_edges = len([(4,6), (6,7), (4,7)])
# Clustering coeff of node.
local_clustering_coef = 1.0 * actual_edges / total_possible_edges
print(local_clustering_coef)
Explanation: Choosing Node 5 as the node of interest, let's calculate its clustering coefficient (i.e., how concentrated its neighborhood is).
End of explanation
# Local clustering for node 5
print((list(nx.clustering(g, nodes=[5]).values())[0]))
Explanation: NetworkX contains a method for calculating the clustering coefficient for all the nodes in a graph. You can specify a list of nodes as an argument when calling the method on a graph.
Note:
The clustering coefficient is defined as zero when there are less than two neighbors in a node's neighborhood.
End of explanation
nx.clustering(g)
Explanation: Do all nodes in a network or graph have the same local clustering coefficient? Call the method without specifying the "nodes" argument.
End of explanation
# Compute average clustering coefficient for our graph g directly.
print(('(Direct) The average local clustering coefficient of the network is {:0.3f}'.format(np.mean(list(nx.clustering(g).values())))))
# Or using NetworkX.
print(('(NetworkX) The average local clustering coefficient of the network is {:0.3f}'.format(nx.average_clustering(g))))
Explanation: When applied to a single node, the clustering coefficient is a measure of how complete its neighborhood is. When applied to an entire network, it is the average clustering coefficient over all of the nodes in the network. Again, you can also compute this using NetworkX.
End of explanation
# Define a function that trims.
def trim_degrees(g, degree=1):
"""Trim the graph by removing nodes with degree less than or equal to the value of the degree parameter.
Returns a copy of the graph."""
g2=g.copy()
d=nx.degree(g2)
for n in g.nodes():
if d[n]<=degree: g2.remove_node(n)
return g2
# Effect of removing weakly connected nodes.
G1 = trim_degrees(G, degree=1)
G3 = trim_degrees(G, degree=3)
G5 = trim_degrees(G, degree=5)
# Compare the clustering coefficient of the resulting network objects.
print((round(nx.average_clustering(G),3), round(nx.average_clustering(G1),3), round(nx.average_clustering(G3),3),
round(nx.average_clustering(G5),3)))
Explanation: With this background, you can now calculate the average clustering coefficient of the call data network from above.
Note:
On the network level, there are two versions of the clustering coefficient. The first one is the global clustering coefficient that you computed at the top of Section 2.5, and is discussed in the video content by Xiaowen Dong. The second one is the average local clustering coefficient of all the nodes in the network.
In most cases, nodes with a degree below a certain threshold are of little or no interest. For visualization purposes in particular, dense graphs are computationally costly to lay out, and do not render well when all of the nodes are included. Hence, you may want to exclude these low-degree nodes from further analysis.
The next section investigates the effect of removing nodes with a degree of 1 on the clustering coefficient.
Note:
You do not need to understand the function and its syntax. It is included for advanced users, and to make it possible to demonstrate the concepts in this section to you more easily. All you need to do is execute the cell below.
End of explanation
# Reproduce the table from the study.
tbl = pd.DataFrame(np.transpose(np.array([[3.65, 18.7, 2.65],
[2.99, 12.4, 2.25],
[0.79, 0.080, 0.28],
[0.00027, 0.005, 0.05]])),
index=['Film actors', 'Power grid', 'C.elegans'],
columns = ['L.actual','L.random','C.actual','C.random'])
tbl
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 5 Start.</b>
</div>
Instructions
In the previous example, nodes with lower degrees were removed from the graph, and the average clustering coefficient recomputed.
Describe, in one sentence, the effect on the average clustering coefficient as these nodes were removed.
Your markdown answer.
<br>
<div class="alert alert-info">
<b>Exercise 5 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
3. Small-world networks
Networks incorporating the two characteristics of a high clustering coefficient $C(p)$, and a low mean shortest path or characteristic path length $L(p)$ (but higher than a random network), i.e.,
\begin{equation}
C(p) \gg C_{random} \hspace{0.5cm}\text{and}\hspace{0.5cm} L(p) \gtrsim L_{random}
\end{equation}
are known as small-world networks. The name comes from the so-called "small world" phenomenon in which two strangers find that they have a friend in common. Human social networks typically exhibit the feature that, in any cluster of friends, each friend is also connected to other clusters. Consequently, it usually takes only a short string of acquaintances to connect any two people in a network.
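A rough way to eyeball these two criteria for a given graph is to compare it against a random graph with the same number of nodes and edges. A minimal sketch for the small graph g from Section 2.5 (the Erdos-Renyi comparison graph and the seed are illustrative assumptions, not part of the original analysis):
random_g = nx.gnm_random_graph(g.number_of_nodes(), g.number_of_edges(), seed=0)  # same n and m as g
print('C(actual) = {0:.3f}, C(random) = {1:.3f}'.format(nx.average_clustering(g), nx.average_clustering(random_g)))
if nx.is_connected(random_g):
    print('L(actual) = {0:.3f}, L(random) = {1:.3f}'.format(nx.average_shortest_path_length(g), nx.average_shortest_path_length(random_g)))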
<br>
<div class="alert alert-info">
<b>Exercise 6 Start.</b>
</div>
Instructions
Which two criteria are typically used to identify a network as a small-world network?
Type your answer in the markdown cell below.
Your markdown answer.
<br>
<div class="alert alert-info">
<b>Exercise 6 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
Consider the following data for three different real-world networks (Watts and Strogatz 1998), which were also mentioned in the video content.
The first system is a collaboration graph of feature film actors. The second is the electrical power grid of the West Coast in the United States. The third is the neural network of the nematode worm Caenorhabditis elegans (C. elegans). The graph representing film actors is a surrogate for a social network, with the advantage of being much easier to specify. The graph of the power grid is relevant to the efficiency and robustness of power networks, and C. elegans is the sole example of a completely mapped neural network. The graphs are defined as follows:
- Two actors are joined by an edge if they have acted in a film together.
- For the power grid, vertices represent generators, transformers, and substations, and edges represent high-voltage transmission lines between them.
- For C. elegans, an edge joins two neurons if they are connected by either a synapse or a gap junction.
The characteristic path lengths and clustering coefficients are reproduced for the three networks in the DataFrame, and the published comparison to random graphs (for the same number of vertices and average number of edges per vertex) is provided.
End of explanation
# Compare 'L.actual' to 'L.random' to return a Boolean based on the above
tbl['L.actual gt L.random'] = tbl['L.actual'] > tbl['L.random']
tbl
Explanation: Now, extend the table structure, from above, by adding two Boolean type columns that give the results of comparing "L.actual" to "L.random," and "C.actual" to "C.random".
End of explanation
tbl['C.actual gt C.random'] = tbl['C.actual'] > tbl['C.random']
tbl
Explanation: Finally, the relative values "C.actual" to "C.random" can be compared.
End of explanation
# Create the adjacency matrix for call records using NetworkX and Pandas functions.
nodes = list(G.nodes())
call_adjmatrix = nx.to_pandas_adjacency(G,nodelist=nodes)
call_adjmatrix.to_csv('./call.adjmatrix')
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 7 Start.</b>
</div>
Instructions
Based on the above computations, and the definition of a small-world network in the introduction to Section 3 of this notebook, list which of the three networks (film actors, power grid, and C. elegans) exhibit small-world phenomena.
Hint:
You can refer to the video content for the solution.
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 7 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
4. Saving and cleaning up
Save your network data for use in the next notebook. NetworkX conveniently provides methods that, combined with Pandas, enable you to save the adjacency matrix.
End of explanation |
15,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
img_width, img_height, img_depth = 28, 28, 1
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, shape=[None, img_width, img_height, img_depth], name='inputs')
targets_ = tf.placeholder(tf.float32, shape=[None, img_width, img_height, img_depth], name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same')
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same')
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same')
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same')
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same')
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same')
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
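For comparison, the transpose-convolution alternative mentioned above would look roughly like the sketch below. It is not used in the solution that follows; the placeholder is only a stand-in for an intermediate decoder tensor, and the tf.layers wrapper is used here for brevity. Note that kernel size equals stride, which is the setting the Distill article recommends to avoid the checkerboard overlap.
narrow = tf.placeholder(tf.float32, (None, 7, 7, 8))  # stand-in for a 7x7x8 decoder input
widened = tf.layers.conv2d_transpose(narrow, filters=8, kernel_size=(2, 2), strides=(2, 2), padding='same')
# widened has shape (None, 14, 14, 8), i.e. height and width are doubled in one step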
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same')
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2))
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same')
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2))
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same')
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same')
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same')
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same')
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
15,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: Question 1
Describe how you preprocessed the data. Why did you choose that technique?
Answer
Step5: Save Checkpoint
Step6: Load Checkpoint
Step7: Question 2
Describe how you set up the training, validation and testing data for your model. Optional
Step8: Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer
Step9: Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer
Step10: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer
Step11: Question 7
Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.
NOTE | Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = './traffic-signs-data/train.p'
testing_file = './traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
### Replace each question mark with the appropriate value.
import numpy as np
# TODO: Number of training examples
n_train = len(y_train)
# TODO: Number of testing examples.
n_test = len(y_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_test.shape[1:3]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_test))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below.
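A quick sanity check of this structure (a sketch using the dictionaries and arrays loaded above):
print(train.keys())          # should list the documented keys
print(X_train.shape, y_train.shape)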
End of explanation
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
plt.figure()
for i in range(0, n_classes):
plt.subplot(5, 9, i+1)
idx = (y_test == i)
img = X_test[idx][0]
plt.imshow(img)
plt.title(str(i))
plt.axis('off')
plt.show()
Explanation: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
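For example, the suggested count-per-sign plot can be sketched from the arrays loaded above (not part of the original solution):
unique_labels, counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(12, 4))
plt.bar(unique_labels, counts)
plt.xlabel('class id')
plt.ylabel('number of training examples')
plt.show()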
End of explanation
### Preprocess the data here.
from tqdm import tqdm
import cv2
X = np.concatenate((X_train, X_test), axis=0)
y = np.concatenate((y_train, y_test), axis=0)
# convert RGB to Gray, apply histogram equalization, data normalization
X_gray = np.array([])
# Progress bar
batches_pbar = tqdm(range(len(y)), desc='Percent ', unit='images')
for i in batches_pbar:
img = X[i]
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # the pickled images are RGB (cv2.imread would return BGR)
gray = cv2.equalizeHist(gray) # equalize the histogram
gray = (gray-128)/128 # normalization
gray = gray.reshape(1,32,32,1)
if i==0:
X_gray = gray
else:
X_gray = np.append(X_gray, gray, axis=0)
print(X_gray.shape)
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem:
Neural network architecture
Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
### Generate additional data (OPTIONAL!).
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.
# data split with shuffle
# from sklearn.model_selection import train_test_split
# import os
# X_tr_val, X_ts, y_tr_val, y_ts = train_test_split(
# X_gray,
# y,
# test_size=0.1,
# random_state=832289)
# X_tr, X_val, y_tr, y_val = train_test_split(
# X_tr_val,
# y_tr_val,
# test_size=0.1,
# random_state=832289)
# print('Training features and labels randomized and split.')
# ### Generate additional data (OPTIONAL!)
# def augment_brightness_camera_images(image):
# image1 = cv2.cvtColor(image,cv2.COLOR_RGB2HSV)
# random_bright = .25+np.random.uniform()
# #print(random_bright)
# image1[:,:,2] = image1[:,:,2]*random_bright
# image1 = cv2.cvtColor(image1,cv2.COLOR_HSV2RGB)
# return image1
# def transform_image(img,ang_range,shear_range,trans_range,brightness=0):
# '''
# This function transforms images to generate new images.
# The function takes in following arguments,
# 1- Image
# 2- ang_range: Range of angles for rotation
# 3- shear_range: Range of values to apply affine transform to
# 4- trans_range: Range of values to apply translations over.
# A Random uniform distribution is used to generate different parameters for transformation
# '''
# # Rotation
# ang_rot = np.random.uniform(ang_range)-ang_range/2
# rows,cols,ch = img.shape
# Rot_M = cv2.getRotationMatrix2D((cols/2,rows/2),ang_rot,1)
# # Translation
# tr_x = trans_range*np.random.uniform()-trans_range/2
# tr_y = trans_range*np.random.uniform()-trans_range/2
# Trans_M = np.float32([[1,0,tr_x],[0,1,tr_y]])
# # Shear
# pts1 = np.float32([[5,5],[20,5],[5,20]])
# pt1 = 5+shear_range*np.random.uniform()-shear_range/2
# pt2 = 20+shear_range*np.random.uniform()-shear_range/2
# # Brightness
# pts2 = np.float32([[pt1,5],[pt2,pt1],[5,pt2]])
# shear_M = cv2.getAffineTransform(pts1,pts2)
# # apply operations
# img = cv2.warpAffine(img,Rot_M,(cols,rows))
# img = cv2.warpAffine(img,Trans_M,(cols,rows))
# img = cv2.warpAffine(img,shear_M,(cols,rows))
# if brightness == 1:
# img = augment_brightness_camera_images(img)
# return img
# def augment_data(X, y):
# n_classes = len(np.unique(y))
# avg_sample_num = len(y)/n_classes
# for i in range(0, n_classes):
# idx = np.argwhere(y==i)
# num = len(idx)
# cnt = num
# while cnt < avg_sample_num:
# img = X[idx[cnt%num]].reshape(32, 32, 1)
# img = transform_image(img,10,5,2,brightness=0).reshape(1,32, 32, 1)
# X = np.append(X, img, axis=0)
# y = np.append(y, np.array(i))
# cnt += 1
# return X, y
# X_tr, y_tr = augment_data(X_tr, y_tr)
# X_val, y_val = augment_data(X_val, y_val)
# X_ts, y_ts = augment_data(X_ts, y_ts)
Explanation: Question 1
Describe how you preprocessed the data. Why did you choose that technique?
Answer:
1) Convert RGB to grayscale in order to make the model insensitive to color variations.
2) Apply histogram equalization to mitigate the influence caused by illumination changes.
3) Data normalization to make the problem well conditioned, which helps initialization and learning convergence.
Incorrectly augmented data results in worse performance.
The incorrect augmentation generates images with black borders, which hurt performance.
A correct data augmentation approach should look like this: https://github.com/navoshta/traffic-signs/blob/master/Traffic_Signs_Recognition.ipynb
End of explanation
# Save the data for easy access
import os
pickle_file = 'sign.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('sign.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': X_tr,
'train_labels': y_tr,
'valid_dataset': X_val,
'valid_labels': y_val,
'test_dataset': X_ts,
'test_labels': y_ts,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: Save Checkpoint
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'sign.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
X_train = pickle_data['train_dataset']
y_train = pickle_data['train_labels']
X_validation = pickle_data['valid_dataset']
y_validation = pickle_data['valid_labels']
X_test = pickle_data['test_dataset']
y_test = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras import backend as K
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
encoder.fit(y_train)
y_one_hot = encoder.transform(y_train)
X_normalized = X_train
# input image dimensions
img_rows, img_cols = 32, 32
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
print(X_normalized.shape)
# Create the Sequential model
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=(32, 32, 1)))
model.add(Activation('relu'))
model.add(Flatten(input_shape=(32, 32, 1)))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2)
Explanation: Load Checkpoint
End of explanation
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma), name='conv1_W') # the filters' weights
conv1_b = tf.Variable(tf.zeros(6), name='conv1_b') # the filters' biases
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, [1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma), name='conv2_W') # the filters' weights
conv2_b = tf.Variable(tf.zeros(16), name='conv2_b') # the filters' biases
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2= tf.nn.max_pool(conv2, [1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma), name='fc1_W')
fc1_b = tf.Variable(tf.zeros(120), name='fc1_b')
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma), name='fc2_W')
fc2_b = tf.Variable(tf.zeros(84), name='fc2_b')
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma), name='fc3_W')
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: Question 2
Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?
Answer:
1) I first shuffle the dataset and divide it into 1/10 for testing-stage data and 9/10 for training-stage data. Then, during the training stage, I again divide the training-stage data into 1/10 for validation and 9/10 for training.
2) I use rotation, translation, and shear operations to generate data, since the data are very unbalanced among the different classes. In the new dataset, each class has at least 1205 images.
End of explanation
### Train your model here.
### ----------- Definition -----------
import tensorflow as tf
from sklearn.utils import shuffle
tf.reset_default_graph()
with tf.device('/gpu:0'):
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
EPOCHS = 50
BATCH_SIZE = 128
rate = 0.001
logits = LeNet(x)
softmax_operation = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
predict_operation = tf.argmax(logits, 1)
groundtruth_operation = tf.argmax(one_hot_y, 1)
correct_prediction = tf.equal(predict_operation, groundtruth_operation)
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
init = tf.initialize_all_variables()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
# set allow_soft_placement =True to Making tf.Variable ignore tf.device(), otherwise there will be some gpu error
config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
### ----------- Traininig -----------
with tf.Session(config=config) as sess:
sess.run(init)
num_examples = len(X_train)
print("Training...")
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet2') # `save` method will call `export_meta_graph` implicitly.
print("Model saved")
######################################################
### ----------- Testing -----------
print("Testing...")
import sklearn as sk
import numpy as np
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
saver.restore(sess, './lenet2')
y_true = sess.run(groundtruth_operation, feed_dict={x: X_test, y: y_test})
y_pred = sess.run(predict_operation, feed_dict={x: X_test, y: y_test})
acc = sess.run(accuracy_operation, feed_dict={x: X_test, y: y_test})
print ("Precision: " + str(sk.metrics.precision_score(y_true, y_pred, average='weighted')))
print ("Recall: " + str(sk.metrics.recall_score(y_true, y_pred, average='weighted')))
print ("f1_score: " + str(sk.metrics.f1_score(y_true, y_pred, average='weighted')))
np.set_printoptions(threshold=9999999) #print all
print ("confusion_matrix")
cm = sk.metrics.confusion_matrix(y_true, y_pred)
print (str(cm))
# n_classes = len(np.unique(y_test))
# for i in range(n_classes):
# print("--------------- Testing class " + str(i) + ' -------------')
# idx = np.argwhere(y_test==i).reshape(-1)
# y_true = sess.run(groundtruth_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
# y_pred = sess.run(predict_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
# acc = sess.run(accuracy_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
# print ("Precision: " + str(sk.metrics.precision_score(y_true, y_pred, average='micro')))
# print ("Recall: " + str(sk.metrics.recall_score(y_true, y_pred, average='micro')))
# print ("f1_score: " + str(sk.metrics.f1_score(y_true, y_pred, average='micro')))
# print ("confusion_matrix")
# print (str(sk.metrics.confusion_matrix(y_true, y_pred)))
Explanation: Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer:
I use the classical LeNet, which is a convolutional network containing five layers: two conv layers (5x5x6 and 5x5x16) and three fully-connected layers (400x120, 120x84 and 84x43). I use ReLU as the activation function.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
rootDir = './sign-imgs'
img_num = len(os.listdir(rootDir))
X_web = np.array([])
y_web = np.array([])
for idx, file in enumerate(os.listdir(rootDir)):
path = os.path.join(rootDir, file)
img = cv2.imread(path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # cv2.imread returns BGR, so convert to RGB here
gray = cv2.resize(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), (32, 32))
gray = cv2.equalizeHist(gray) # equalize the histogram
gray = (gray-128)/128 # normalization
label = file.split('.')[0]
if idx==0:
X_web = gray.reshape(1, 32, 32, 1)
y_web = np.array([int(label)])
else:
X_web = np.append(X_web, gray.reshape(1, 32, 32, 1), axis=0)
y_web = np.append(y_web, [int(label)])
plt.subplot(1, img_num, idx+1)
plt.imshow(img)
plt.title('label:' + label)
plt.axis('off')
plt.show()
Explanation: Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
I use the Adam optimizer, and the hyperparameters are set as follows:
EPOCHS = 50
BATCH_SIZE = 128
rate = 0.001
Question 5
What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.
Answer:
5 steps:
1) Data preprocessing: RGB-->GRAY (color insensitive), illumination equalization (illumination robust), data augmentation (mitigates data imbalance)
2) CNN model choice: LeNet containing 5 layers (simple and efficient)
3) Optimizer choice: Adam, simple and efficient with few hyperparameters
4) Mini-batch training with many epochs, and validation
5) Testing
Step 3: Test a Model on New Images
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
### Run the predictions here.
### Feel free to use as many code cells as needed.
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
saver.restore(sess, './lenet2')
y_perd = sess.run(predict_operation, feed_dict={x: X_web, y: y_web})
print(y_perd)
Explanation: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer:
The fifth image suffers a serious distortion, which makes classification difficult.
End of explanation
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
saver.restore(sess, './lenet2')
softmax_prob = sess.run(softmax_operation, feed_dict={x: X_web, y: y_web})
topk = sess.run(tf.nn.top_k(softmax_prob, k=5))
plt.figure(figsize=(20,10))
n_classes = len(softmax_prob[0])
for i in range(len(softmax_prob)):
plt.subplot(len(softmax_prob), 1, i+1)
# plt.hist(softmax_prob[i], bins=n_classes)
plt.bar(np.arange(n_classes), softmax_prob[i])
plt.title('GT: ' + str(y_web[i]))
plt.grid(True)
plt.tight_layout()
print('topk-values:' + str(topk.values[i]) + ' topk-indices:' + str(topk.indices[i]))
plt.show()
Explanation: Question 7
Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.
NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.
Answer:
The accuracy on the captured pictures is 40%, which is lower than the 77% accuracy on the testing dataset. Since there are only 5 captured traffic sign images, the result is not statistically significant.
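The 40% figure can be reproduced directly from the predictions computed above (a minimal sketch):
web_accuracy = np.mean(y_perd == y_web)
print('Accuracy on captured images: {0:.0%}'.format(web_accuracy))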
End of explanation |
15,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semantic text search using embeddings
We can search through all our reviews semantically in a very efficient manner and at very low cost, by simply embedding our search query, and then finding the most similar reviews. The dataset is created in the Obtain_dataset Notebook.
Step1: Remember to use the documents embedding engine for documents (in this case reviews), and query embedding engine for queries. Note that here we just compare the cosine similarity of the embeddings of the query and the documents, and show top_n best matches.
Step2: We can search through these reviews easily. To speed up computation, we can use a special algorithm, aimed at faster search through embeddings.
Step3: As we can see, this can immediately deliver a lot of value. In this example we show being able to quickly find the examples of delivery failures. | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('output/embedded_1k_reviews.csv')
df['babbage_search'] = df.babbage_search.apply(eval).apply(np.array)
Explanation: Semantic text search using embeddings
We can search through all our reviews semantically in a very efficient manner and at very low cost, by simply embedding our search query, and then finding the most similar reviews. The dataset is created in the Obtain_dataset Notebook.
End of explanation
from openai.embeddings_utils import get_embedding, cosine_similarity
# search through the reviews for a specific product
def search_reviews(df, product_description, n=3, pprint=True):
embedding = get_embedding(product_description, engine='text-search-babbage-query-001')
df['similarities'] = df.babbage_search.apply(lambda x: cosine_similarity(x, embedding))
res = df.sort_values('similarities', ascending=False).head(n).combined.str.replace('Title: ','').str.replace('; Content:', ': ')
if pprint:
for r in res:
print(r[:200])
print()
return res
res = search_reviews(df, 'delicious beans', n=3)
res = search_reviews(df, 'whole wheat pasta', n=3)
Explanation: Remember to use the documents embedding engine for documents (in this case reviews), and query embedding engine for queries. Note that here we just compare the cosine similarity of the embeddings of the query and the documents, and show top_n best matches.
End of explanation
res = search_reviews(df, 'bad delivery', n=1)
Explanation: We can search through these reviews easily. To speed up computation, we can use a special algorithm, aimed at faster search through embeddings.
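One simple speed-up along these lines (a sketch that reuses the df and helpers loaded above, rather than a specific search library): stack the stored embeddings into a single matrix so that every review is scored with one matrix-vector product, and use np.argpartition to pick the top matches without a full sort.
embedding_matrix = np.vstack(df.babbage_search.values)  # shape: (n_reviews, dim)
query_vec = np.array(get_embedding('delicious beans', engine='text-search-babbage-query-001'))
scores = embedding_matrix @ query_vec / (np.linalg.norm(embedding_matrix, axis=1) * np.linalg.norm(query_vec))
top_idx = np.argpartition(-scores, 3)[:3]  # indices of the three best matches (unordered)
print(df.combined.iloc[top_idx].str[:200])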
End of explanation
res = search_reviews(df, 'spoilt', n=1)
res = search_reviews(df, 'pet food', n=2)
Explanation: As we can see, this can immediately deliver a lot of value. In this example we show being able to quickly find the examples of delivery failures.
End of explanation |
15,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial we'll demonstrate Coach's hierarchical RL support, by building a new agent that implements the Hierarchical Actor Critic (HAC) algorithm (https
Step1: Now let's define the HAC algorithm and agent parameters.
See tutorial 1 for more details on the content of each of these classes.
Step2: Now we'll define the agent itself - HACDDPGAgent - which subclasses the DDPG agent class. The main difference between the DDPG agent and the HACDDPGAgent is the subgoal a higher level agent defines to a lower level agent, hence the overrides of the DDPG Agent functions.
Step3: The Preset
Defining the top agent in the hierarchy. Note that the agent's base parameters are the same as the DDPG agent's parameters. We also define here the memory, exploration policy and network topology.
Step4: The bottom agent
Step5: Now we define the parameters of all the agents in the hierarchy from top to bottom
Step6: Define the environment, visualization and schedule parameters. The schedule parameters refer to the top level agent.
Step7: Lastly, we create a HRLGraphManager that will execute the hierarchical agent we defined according to the parameters.
Note that the bottom level agent will run 40 steps on each single step of the top level agent.
Step8: Running the Preset | Python Code:
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
sys.path.append(module_path + '/rl_coach')
from typing import Union
import numpy as np
from rl_coach.agents.ddpg_agent import DDPGAgent, DDPGAgentParameters, DDPGAlgorithmParameters
from rl_coach.spaces import SpacesDefinition
from rl_coach.core_types import RunPhase
Explanation: In this tutorial we'll demonstrate Coach's hierarchical RL support, by building a new agent that implements the Hierarchical Actor Critic (HAC) algorithm (https://arxiv.org/pdf/1712.00948.pdf), and a preset that runs the agent on Mujoco's pendulum challenge.
The Agent
First, some imports. Note that HAC is based on DDPG, hence we will be importing the relevant classes.
End of explanation
class HACDDPGAlgorithmParameters(DDPGAlgorithmParameters):
def __init__(self):
super().__init__()
self.sub_goal_testing_rate = 0.5
self.time_limit = 40
class HACDDPGAgentParameters(DDPGAgentParameters):
def __init__(self):
super().__init__()
self.algorithm = HACDDPGAlgorithmParameters()
Explanation: Now let's define the HAC algorithm and agent parameters.
See tutorial 1 for more details on the content of each of these classes.
End of explanation
class HACDDPGAgent(DDPGAgent):
def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None):
super().__init__(agent_parameters, parent)
self.sub_goal_testing_rate = self.ap.algorithm.sub_goal_testing_rate
self.graph_manager = None
def choose_action(self, curr_state):
# top level decides, for each of his generated sub-goals, for all the layers beneath him if this is a sub-goal
# testing phase
graph_manager = self.parent_level_manager.parent_graph_manager
if self.ap.is_a_highest_level_agent:
graph_manager.should_test_current_sub_goal = np.random.rand() < self.sub_goal_testing_rate
if self.phase == RunPhase.TRAIN:
if graph_manager.should_test_current_sub_goal:
self.exploration_policy.change_phase(RunPhase.TEST)
else:
self.exploration_policy.change_phase(self.phase)
action_info = super().choose_action(curr_state)
return action_info
def update_transition_before_adding_to_replay_buffer(self, transition):
graph_manager = self.parent_level_manager.parent_graph_manager
# deal with goals given from a higher level agent
if not self.ap.is_a_highest_level_agent:
transition.state['desired_goal'] = self.current_hrl_goal
transition.next_state['desired_goal'] = self.current_hrl_goal
self.distance_from_goal.add_sample(self.spaces.goal.distance_from_goal(
self.current_hrl_goal, transition.next_state))
goal_reward, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state(
self.current_hrl_goal, transition.next_state)
transition.reward = goal_reward
transition.game_over = transition.game_over or sub_goal_reached
# each level tests its own generated sub goals
if not self.ap.is_a_lowest_level_agent and graph_manager.should_test_current_sub_goal:
_, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state(
transition.action, transition.next_state)
sub_goal_is_missed = not sub_goal_reached
if sub_goal_is_missed:
transition.reward = -self.ap.algorithm.time_limit
return transition
def set_environment_parameters(self, spaces: SpacesDefinition):
super().set_environment_parameters(spaces)
if self.ap.is_a_highest_level_agent:
# the rest of the levels already have an in_action_space set to be of type GoalsSpace, thus they will have
# their GoalsSpace set to the in_action_space in agent.set_environment_parameters()
self.spaces.goal = self.spaces.action
self.spaces.goal.set_target_space(self.spaces.state[self.spaces.goal.goal_name])
if not self.ap.is_a_highest_level_agent:
self.spaces.reward.reward_success_threshold = self.spaces.goal.reward_type.goal_reaching_reward
Explanation: Now we'll define the agent itself - HACDDPGAgent - which subclasses the DDPG agent class. The main difference between the DDPG agent and the HACDDPGAgent is the subgoal a higher level agent defines to a lower level agent, hence the overrides of the DDPG Agent functions.
End of explanation
from rl_coach.architectures.tensorflow_components.layers import Dense
from rl_coach.base_parameters import VisualizationParameters, EmbeddingMergerType, EmbedderScheme
from rl_coach.architectures.embedder_parameters import InputEmbedderParameters
from rl_coach.memories.episodic.episodic_hindsight_experience_replay import HindsightGoalSelectionMethod, \
EpisodicHindsightExperienceReplayParameters
from rl_coach.memories.episodic.episodic_hrl_hindsight_experience_replay import \
EpisodicHRLHindsightExperienceReplayParameters
from rl_coach.memories.memory import MemoryGranularity
from rl_coach.spaces import GoalsSpace, ReachingGoal
from rl_coach.exploration_policies.ou_process import OUProcessParameters
from rl_coach.core_types import EnvironmentEpisodes, EnvironmentSteps, RunPhase, TrainingSteps
time_limit = 1000
polar_coordinates = False
distance_from_goal_threshold = np.array([0.075, 0.075, 0.75])
goals_space = GoalsSpace('achieved_goal',
ReachingGoal(default_reward=-1, goal_reaching_reward=0,
distance_from_goal_threshold=distance_from_goal_threshold),
lambda goal, state: np.abs(goal - state)) # raw L1 distance
top_agent_params = HACDDPGAgentParameters()
# memory - Hindsight Experience Replay
top_agent_params.memory = EpisodicHRLHindsightExperienceReplayParameters()
top_agent_params.memory.max_size = (MemoryGranularity.Transitions, 10000000)
top_agent_params.memory.hindsight_transitions_per_regular_transition = 3
top_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future
top_agent_params.memory.goals_space = goals_space
top_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(32)
top_agent_params.algorithm.num_consecutive_training_steps = 40
top_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40)
# exploration - OU process
top_agent_params.exploration = OUProcessParameters()
top_agent_params.exploration.theta = 0.1
# actor - note that the default middleware is overridden with 3 dense layers
top_actor = top_agent_params.network_wrappers['actor']
top_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
top_actor.middleware_parameters.scheme = [Dense([64])] * 3
top_actor.learning_rate = 0.001
top_actor.batch_size = 4096
# critic - note that the default middleware is overridden with 3 dense layers
top_critic = top_agent_params.network_wrappers['critic']
top_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
top_critic.embedding_merger_type = EmbeddingMergerType.Concat
top_critic.middleware_parameters.scheme = [Dense([64])] * 3
top_critic.learning_rate = 0.001
top_critic.batch_size = 4096
Explanation: The Preset
Defining the top agent in the hierarchy. Note that the agent's base parameters are the same as the DDPG agent's parameters. We also define here the memory, exploration policy and network topology.
End of explanation
from rl_coach.schedules import ConstantSchedule
from rl_coach.exploration_policies.e_greedy import EGreedyParameters
bottom_agent_params = HACDDPGAgentParameters()
bottom_agent_params.algorithm.in_action_space = goals_space
bottom_agent_params.memory = EpisodicHindsightExperienceReplayParameters()
bottom_agent_params.memory.max_size = (MemoryGranularity.Transitions, 12000000)
bottom_agent_params.memory.hindsight_transitions_per_regular_transition = 4
bottom_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future
bottom_agent_params.memory.goals_space = goals_space
bottom_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(16 * 25) # 25 episodes is one true env episode
bottom_agent_params.algorithm.num_consecutive_training_steps = 40
bottom_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40)
bottom_agent_params.exploration = EGreedyParameters()
bottom_agent_params.exploration.epsilon_schedule = ConstantSchedule(0.2)
bottom_agent_params.exploration.evaluation_epsilon = 0
bottom_agent_params.exploration.continuous_exploration_policy_parameters = OUProcessParameters()
bottom_agent_params.exploration.continuous_exploration_policy_parameters.theta = 0.1
# actor
bottom_actor = bottom_agent_params.network_wrappers['actor']
bottom_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
bottom_actor.middleware_parameters.scheme = [Dense([64])] * 3
bottom_actor.learning_rate = 0.001
bottom_actor.batch_size = 4096
# critic
bottom_critic = bottom_agent_params.network_wrappers['critic']
bottom_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)}
bottom_critic.embedding_merger_type = EmbeddingMergerType.Concat
bottom_critic.middleware_parameters.scheme = [Dense([64])] * 3
bottom_critic.learning_rate = 0.001
bottom_critic.batch_size = 4096
Explanation: The bottom agent
End of explanation
agents_params = [top_agent_params, bottom_agent_params]
Explanation: Now we define the parameters of all the agents in the hierarchy from top to bottom
End of explanation
from rl_coach.environments.gym_environment import Mujoco
from rl_coach.environments.environment import SelectedPhaseOnlyDumpMethod
from rl_coach.graph_managers.hrl_graph_manager import HRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters
env_params = Mujoco()
env_params.level = "rl_coach.environments.mujoco.pendulum_with_goals:PendulumWithGoals"
env_params.additional_simulator_parameters = {"time_limit": time_limit,
"random_goals_instead_of_standing_goal": False,
"polar_coordinates": polar_coordinates,
"goal_reaching_thresholds": distance_from_goal_threshold}
env_params.frame_skip = 10
env_params.custom_reward_threshold = -time_limit + 1
vis_params = VisualizationParameters()
vis_params.video_dump_methods = [SelectedPhaseOnlyDumpMethod(RunPhase.TEST)]
vis_params.dump_mp4 = False
vis_params.native_rendering = False
schedule_params = ScheduleParameters()
schedule_params.improve_steps = EnvironmentEpisodes(40 * 4 * 64) # 40 epochs
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(4 * 64) # 4 small batches of 64 episodes
schedule_params.evaluation_steps = EnvironmentEpisodes(64)
schedule_params.heatup_steps = EnvironmentSteps(0)
Explanation: Define the environment, visualization and schedule parameters. The schedule parameters refer to the top level agent.
End of explanation
graph_manager = HRLGraphManager(agents_params=agents_params, env_params=env_params,
schedule_params=schedule_params, vis_params=vis_params,
consecutive_steps_to_run_each_level=EnvironmentSteps(40))
graph_manager.visualization_parameters.render = True
Explanation: Lastly, we create a HRLGraphManager that will execute the hierarchical agent we defined according to the parameters.
Note that the bottom level agent will run 40 steps on each single step of the top level agent.
End of explanation
from rl_coach.base_parameters import TaskParameters, Frameworks
log_path = '../experiments/pendulum_hac'
if not os.path.exists(log_path):
os.makedirs(log_path)
task_parameters = TaskParameters(framework_type=Frameworks.tensorflow,
evaluate_only=False,
experiment_path=log_path)
task_parameters.__dict__['checkpoint_save_secs'] = None
task_parameters.__dict__['verbosity'] = 'low'
graph_manager.create_graph(task_parameters)
graph_manager.improve()
Explanation: Running the Preset
End of explanation |
15,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Genotype data in FAPS
Tom Ellis, March 2017
In most cases, researchers will have a sample of offspring, maternal and candidate paternal individuals typed at a set of markers. In this section we'll look in more detail at how FAPS deals with genotype data to build a matrix we can use for sibship inference.
This notebook will examine how to
Step1: The object we just created contains information about the genotypes of each of the five parent individuals. Genotypes are stored as NxLx2-dimensional arrays, where N is the number of individuals and L is the number of loci. We can view the genotype for the first parent like so (recall that Python starts counting from zero, not one)
Step2: You could subset the array by indexing the genotypes, for example by taking only the first two individuals and the first five loci
Step3: For realistic examples with many more loci, this obviously gets unwieldy pretty soon. It's cleaner to supply a list of individuals to keep or remove to the subset and drop functions. These return a new genotypeArray for the individuals of interest.
Step4: Information on individuals
A genotypeArray contains other useful information about the individuals
Step5: make_sibships is a convenient way to generate a single half-sibling array from individuals in mypop. This code makes a half-sib array with individual 0 as the mother, with individuals 1, 2 and 3 contributing male gametes. Each father has four offspring.
Step6: With this generation we can extract a little more information from the genotypeArray than we could from the parents, namely about their parents and family structure.
Step7: Of course with real data we would not normally know the identity of the father or the number of families, but this is useful for checking accuracy in simulations. It can also be useful to look up the positions of the parents in another list of names. This code finds the indices of the mothers and fathers of the offspring in the names listed in mypop.
Step8: Information on markers
Pull out marker names with marker. The names here are boring because they are simulated, but your data can have as exciting names as you'd like.
Step9: Check whether the locus names for parents and offspring match. This is obviously vital for determining who shares alleles with whom, but easy to overlook! If they don't match, the most likely explanation is that you have imported genotype data and misspecified where the genotype data start (the genotype_col argument).
Step10: FAPS uses population allele frequencies to calculate the likelihood that paternal alleles are drawn at random.
They are also useful to check that the markers are doing what you think they are.
Pull out the population allele frequencies for each locus
Step11: We can also check for missing data and heterozygosity for each marker and individual. By default, data for each marker are returned
Step12: To get summaries for each individual
Step13: In this instance there is no missing data, because data are simulated to be error-free. See the next section on an empirical example where this is not true.
Importing genotype data
You can import genotype data from a text or CSV (comma-separated text) file. Both can be easily exported from a spreadsheet program. Rows index individuals, and columns index each typed locus. More specifically
Step14: Again, Python starts counting from zero rather than one, so the first column is really column zero, and so on. Because these are CSV, there was no need to specify that data are delimited by commas, but this is included for illustration.
Offspring are divided into 60 maternal families of different sizes. You can call the name of the mother of each offspring. You can also call the names of the fathers, with offspring.fathers, but since these are unknown this is not informative.
Step15: Offspring names are a combination of maternal family and a unique ID for each offspring.
Step16: You can call summaries of genotype data to help in data cleaning. For example, this code shows the proportion of loci with missing genotype data for the first ten offspring individuals.
Step17: This snippet shows the proportion of missing data points and heterozygosity for the first ten loci. These can be helpful in identifying dubious loci.
Step18: Multiple families
In real data sets we generally work with multiple half-sibling arrays at once. For downstream analyses we need to split up the genotype data into families to reflect this. This is easy to do with split and a vector of labels to group offspring by. This returns a dictionary of genotypeArray objects labelled by maternal family. This snippet splits up the data and prints the maternal family names.
Step19: Each entry is an individual genotypeArray. You can pull out individual families by indexing the dictionary by name. For example, here are the names of the offspring in family J1246
Step20: To perform operations on each genotypeArray we now have to iterate over each element. A convenient way to do this is with dictionary comprehensions by separating out the labels from the genotypeArray objects using items.
As an example, here's how you call the number of offspring in each family. It splits up the dictionary into keys for each family, and calls size on each genotypeArray (labelled genArray in the comprehension).
Step21: You can achieve the same thing with a list comprehension, but you lose information about family ID. It is also more difficult to pass a list on to downstream functions. This snippet shows the first ten items. | Python Code:
import faps as fp
import numpy as np
allele_freqs = np.random.uniform(0.3,0.5,10)
mypop = fp.make_parents(5, allele_freqs, family_name='my_population')
Explanation: Genotype data in FAPS
Tom Ellis, March 2017
In most cases, researchers will have a sample of offspring, maternal and candidate paternal individuals typed at a set of markers. In this section we'll look in more detail at how FAPS deals with genotype data to build a matrix we can use for sibship inference.
This notebook will examine how to:
Generate simple genotypeArray objects and explore what information is contained in them.
Import external genotype data.
Work with genotype data from multiple half sib families.
Checking genotype data is an important step before committing to a full analysis. A case study of data checking and cleaning using an empirical dataset is given in section 8.
In the next section we'll see how to combine genotype information on offspring and a set of candidate parents to create an array of likelihoods of paternity for dyads of offspring and candidate fathers.
Also relevant is the section on simulating data and power analysis.
Currently, FAPS genotypeArray objects assume you are using biallelic, unlinked SNPs for a diploid. If your system deviates from these criteria in some way you can also skip this stage by creating your own array of paternity likelihoods using an appropriate likelihood function, and importing this directly as a paternityArrays. See the next section for more on paternityArray objects and how they should look.
genotypeArray objects
Basic genotype information
Genotype data are stored in a class of objects called a genotypeArray. We'll illustrate how these work with simulated data, since not all information is available for real-world data sets. We first generate a vector of population allele frequencies for 10 unlinked SNP markers, and use these to create a population of five adult individuals. This is obviously an unrealistically small dataset, but serves for illustration. The optional argument family_name allows you to name this generation.
End of explanation
mypop.geno[0]
Explanation: The object we just created contains information about the genotypes of each of the five parent individuals. Genotypes are stored as NxLx2-dimensional arrays, where N is the number of individuals and L is the number of loci. We can view the genotype for the first parent like so (recall that Python starts counting from zero, not one):
End of explanation
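As a quick sanity check of the NxLx2 claim (a small aside, not part of the original tutorial), the simulated array above should be 5 individuals by 10 loci by 2 alleles:
# shape is (individuals, loci, alleles); with 5 parents and 10 SNPs this should print (5, 10, 2)
print(mypop.geno.shape)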
mypop.geno[:2, :5]
Explanation: You could subset the array by indexing the genotypes, for example by taking only the first two individuals and the first five loci:
End of explanation
print(mypop.subset([0,2]).names)
print(mypop.drop([0,2]).names)
Explanation: For realistic examples with many more loci, this obviously gets unwieldy pretty soon. It's cleaner to supply a list of individuals to keep or remove to the subset and drop functions. These return a new genotypeArray for the individuals of interest.
End of explanation
print(mypop.names) # individual names
print(mypop.size) # number of individuals
print(mypop.nloci) # number of loci typed.
Explanation: Information on individuals
A genotypeArray contains other useful information about the individuals:
End of explanation
progeny = fp.make_sibships(mypop, 0, [1,2,3], 4, 'myprogeny')
Explanation: make_sibships is a convenient way to generate a single half-sibling array from individuals in mypop. This code makes a half-sib array with individual 0 as the mother, with individuals 1, 2 and 3 contributing male gametes. Each father has four offspring.
End of explanation
print(progeny.fathers)
print(progeny.mothers)
print(progeny.families)
print(progeny.nfamilies)
Explanation: With this generation we can extract a little more information from the genotypeArray than we could from the parents, namely about their parents and family structure.
End of explanation
print(progeny.parent_index('mother', mypop.names))
print(progeny.parent_index('father', mypop.names))
Explanation: Of course with real data we would not normally know the identity of the father or the number of families, but this is useful for checking accuracy in simulations. It can also be useful to look up the positions of the parents in another list of names. This code finds the indices of the mothers and fathers of the offspring in the names listed in mypop.
End of explanation
mypop.markers
Explanation: Information on markers
Pull out marker names with marker. The names here are boring because they are simulated, but your data can have as exciting names as you'd like.
End of explanation
mypop.markers == progeny.markers
Explanation: Check whether the locus names for parents and offspring match. This is obviously vital for determining who shares alleles with whom, but easy to overlook! If they don't match, the most likely explanation is that you have imported genotype data and misspecified where the genotype data start (the genotype_col argument).
End of explanation
mypop.allele_freqs()
Explanation: FAPS uses population allele frequencies to calculate the likelihood that paternal alleles are drawn at random.
They are also useful to check that the markers are doing what you think they are.
Pull out the population allele frequencies for each locus:
End of explanation
print(mypop.missing_data())
print(mypop.heterozygosity())
Explanation: We can also check for missing data and heterozygosity for each marker and individual. By default, data for each marker are returned:
End of explanation
print(mypop.missing_data(by='individual'))
print(mypop.heterozygosity(by='individual'))
Explanation: To get summaries for each individual:
End of explanation
offspring = fp.read_genotypes(
path = '../data/offspring_2012_genotypes.csv',
mothers_col=1,
genotype_col=2)
Explanation: In this instance there is no missing data, because data are simulated to be error-free. See the next section on an empirical example where this is not true.
Importing genotype data
You can import genotype data from a text or CSV (comma-separated text) file. Both can be easily exported from a spreadsheet program. Rows index individuals, and columns index each typed locus. More specifically:
Offspring names should be given in the first column
If the data are offspring, names of the mothers are given in the second column.
If known for some reason, names of fathers can be given as well.
Genotype information should be given to the right of columns indicating individual or parental names, with locus names in the column headers.
SNP genotype data must be biallelic, that is they can only be homozygous for the first allele, heterozygous, or homozygous for the second allele. These should be given as 0, 1 and 2 respectively. If genotype data is missing this should be entered as NA.
The following code imports genotype information on real samples of offspring from a half-sibling array of wild-pollinated snapdragon seedlings collected in the Spanish Pyrenees. The candidate parents are as many of the wild adult plants as we could find. You will find the data files on the IST Austria data repository (DOI:10.15479/AT:ISTA:95). Aside from the path to where the data file is stored, the two other arguments specify the column containing names of the mothers, and the first column containing genotype data of the offspring.
End of explanation
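To make the expected file layout concrete, here is a toy input file (hypothetical individual names, marker names and genotype values, not part of the original dataset) written out so that it could in principle be read back with fp.read_genotypes:
# Minimal CSV sketch: offspring names in column 0, mothers in column 1,
# and biallelic SNP genotypes coded 0/1/2 (NA for missing) to the right.
toy_csv = (
    "offspringID,motherID,locus_A,locus_B,locus_C\n"
    "J1246_1,J1246,0,2,1\n"
    "J1246_2,J1246,1,NA,2\n"
    "K0451_1,K0451,2,0,0\n"
)
with open('toy_genotypes.csv', 'w') as f:
    f.write(toy_csv)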
np.unique(offspring.mothers)
Explanation: Again, Python starts counting from zero rather than one, so the first column is really column zero, and so on. Because these are CSV, there was no need to specify that data are delimited by commas, but this is included for illustration.
Offspring are divided into 60 maternal families of different sizes. You can call the name of the mother of each offspring. You can also call the names of the fathers, with offspring.fathers, but since these are unknown this is not informative.
End of explanation
offspring.names
Explanation: Offspring names are a combination of maternal family and a unique ID for each offspring.
End of explanation
print(offspring.missing_data('individual')[:10])
Explanation: You can call summaries of genotype data to help in data cleaning. For example, this code shows the proportion of loci with missing genotype data for the first ten offspring individuals.
End of explanation
print(offspring.missing_data('marker')[:10])
print(offspring.heterozygosity()[:10])
Explanation: This snippet shows the proportion of missing data points and heterozygosity for the first ten loci. These can be helpful in identifying dubious loci.
End of explanation
offs_split = offspring.split(by = offspring.mothers)
offs_split.keys()
Explanation: Multiple families
In real data sets we generally work with multiple half-sibling arrays at once. For downstream analyses we need to split up the genotype data into families to reflect this. This is easy to do with split and a vector of labels to group offspring by. This returns a dictionary of genotypeArray objects labelled by maternal family. This snippet splits up the data and prints the maternal family names.
End of explanation
offs_split["J1246"].names
Explanation: Each entry is an individual genotypeArray. You can pull out individual families by indexing the dictionary by name. For example, here are the names of the offspring in family J1246:
End of explanation
{family : genArray.size for family,genArray in offs_split.items()}
Explanation: To perform operations on each genotypeArray we now have to iterate over each element. A convenient way to do this is with dictionary comprehensions by separating out the labels from the genotypeArray objects using items.
As an example, here's how you call the number of offspring in each family. It splits up the dictionary into keys for each family, and calls size on each genotypeArray (labelled genArray in the comprehension).
End of explanation
[genArray.size for genArray in offs_split.values()][:10]
Explanation: You can achieve the same thing with a list comprehension, but you lose information about family ID. It is also more difficult to pass a list on to downstream functions. This snippet shows the first ten items.
End of explanation |
15,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample Notebook 2 for Picasso
This notebook shows some basic interaction with the picasso library. It assumes a working picasso installation. To install jupyter notebooks in a conda picasso environment use conda install nb_conda.
The sample data was created using Picasso:Simulate.
Step1: Info file
The info file is now a list of dictionaries. Each step in picasso adds an element to the list.
Step2: Filter localizations
Filter localizations, i.e., via sx and sy
Step3: Saving localizations
Add new info to the yaml file and save everything.
Step4: Manually export images
Use the picasso functions to render images.
Step5: Calculate kinetics
Use the picasso functions to calculate kinetics. | Python Code:
from picasso import io
path = 'testdata_locs.hdf5'
locs, info = io.load_locs(path)
print('Loaded {} locs.'.format(len(locs)))
Explanation: Sample Notebook 2 for Picasso
This notebook shows some basic interaction with the picasso library. It assumes a working picasso installation. To install jupyter notebooks in a conda picasso environment use conda install nb_conda.
The sample data was created using Picasso:Simulate. You can download the files here: http://picasso.jungmannlab.org/testdata.zip
Load Localizations
End of explanation
for i in range(len(info)):
print(info[i]['Generated by'])
# extract width and height:
width, height = info[0]['Width'], info[0]['Height']
print('Image height: {}, width: {}'.format(width, height))
Explanation: Info file
The info file is now a list of dictionaries. Each step in picasso adds an element to the list.
End of explanation
sx_center = 0.82
sy_center = 0.82
radius = 0.04
to_keep = (locs.sx-sx_center)**2 + (locs.sy-sy_center)**2 < radius**2
filtered_locs = locs[to_keep]
print('Length of locs before filtering {}, after filtering {}.'.format(len(locs),len(filtered_locs)))
Explanation: Filter localizations
Filter localizations, i.e., via sx and sy: Remove all localizations that are not within a circle around a center position.
End of explanation
import os.path as _ospath
# Create a new dictionary for the new info
new_info = {}
new_info["Generated by"] = "Picasso Jupyter Notebook"
new_info["Filtered"] = 'Circle'
new_info["sx_center"] = sx_center
new_info["sy_center"] = sy_center
new_info["radius"] = radius
info.append(new_info)
base, ext = _ospath.splitext(path)
new_path = base+'_jupyter'+ext
io.save_locs(new_path, filtered_locs, info)
print('{} locs saved to {}.'.format(len(filtered_locs), new_path))
Explanation: Saving localizations
Add new info to the yaml file and save everything.
End of explanation
# Get minimum / maximum localizations to define the ROI to be rendered
import numpy as np
from picasso import render
import matplotlib.pyplot as plt
x_min = np.min(locs.x)
x_max = np.max(locs.x)
y_min = np.min(locs.y)
y_max = np.max(locs.y)
viewport = (y_min, x_min), (y_max, x_max)
oversampling = 10
len_x, image = render.render(locs, viewport = viewport, oversampling=oversampling, blur_method='smooth')
plt.imsave('test.png', image, cmap='hot', vmax=10)
# Cutom ROI with higher oversampling
viewport = (5, 5), (10, 10)
oversampling = 20
len_x, image = render.render(locs, viewport = viewport, oversampling=oversampling, blur_method='smooth')
plt.imsave('test_zoom.png', image, cmap='hot', vmax=10)
Explanation: Manually export images
Use the picasso functions to render images.
End of explanation
from picasso import postprocess
# Note: to calculate dark times you need picked localizations of single binding sites
path = 'testdata_locs_picked_single.hdf5'
picked_locs, info = io.load_locs(path)
# Link localizations and calculate dark times
linked_locs = postprocess.link(picked_locs, info, r_max=0.05, max_dark_time=1)
linked_locs_dark = postprocess.compute_dark_times(linked_locs)
print('Average bright time {:.2f} frames'.format(np.mean(linked_locs_dark.n)))
print('Average dark time {:.2f} frames'.format(np.mean(linked_locs_dark.dark)))
# Compare with simulation settings:
integration_time = info[0]['Camera.Integration Time']
tau_b = info[0]['PAINT.taub']
k_on = info[0]['PAINT.k_on']
imager = info[0]['PAINT.imager']
tau_d = 1/(k_on*imager)*10**9*1000
print('------')
print('ON Measured {:.2f} ms \t Simulated {:.2f} ms'.format(np.mean(linked_locs_dark.n)*integration_time, tau_b))
print('OFF Measured {:.2f} ms \t Simulated {:.2f} ms'.format(np.mean(linked_locs_dark.dark)*integration_time, tau_d))
Explanation: Calculate kinetics
Use the picasso functions to calculate kinetics.
End of explanation |
15,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
State observer examples
This notebook relies on the Python code stored in the folder python.
Step1: Exponential decay scalar case
This section shows the exponential decay at different rates. If the value of the base is negative, over time the function tends to 0, but it oscillates between positive and negative values.
Step2: Observer example no feedback from output
Step3: Observer feedback from output
Step4: Measurement noise in the position of the car
Step5: Measurement noise in the position of the car and in the acceleration | Python Code:
#Import base libraries
import numpy as np
import matplotlib.pyplot as plt
import random
from matplotlib import animation, rc
from IPython.display import HTML
import importlib
# Import libraries for the examples
import os
import sys
module_path = os.path.abspath(os.path.join('../python'))
if module_path not in sys.path:
sys.path.append(module_path)
# Set plotting options
%matplotlib inline
plt.style.use(['seaborn-darkgrid', 'seaborn-poster'])
# make animation show in notebook
rc('animation', html='html5')
rc('text', usetex=True)
import simplified_models
import utils
from utils import Coordinate2d
from utils import GenerateBaseSimulation
import display_utils as du
# if needed...
#importlib.reload(du)
#importlib.reload(utils)
#from utils import Coordinate2d
#from utils import GenerateBaseSimulation
# Initialize random number generator
random.seed(0)
Explanation: State observer examples
This notebook relies on the Python code stored in the folder python.
End of explanation
time_vector = np.arange(0, 25, 1)
l_value = (0.9, 0.7, 0.1, -0.7)
plt.figure(figsize=(18, 6))
for i_l in l_value:
plt.plot(time_vector, i_l**time_vector, label="$a_d-lc$={}".format(i_l))
plt.legend(prop={'size':18})
#plt.title("Examples of L^t")
#plt.xlabel("Time step", fontsize=18)
Explanation: Exponential decay scalar case
This section shows the exponential decay at different rates. If the value of the base is negative, over time the function tends to 0, but it oscillates between positive and negative values.
End of explanation
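For reference, a short sketch (in generic scalar notation, not tied to the helper classes used below) of why the base $a_d - lc$ governs convergence of a state observer: with a scalar plant $x_{k+1} = a_d x_k + b u_k$, output $y_k = c x_k$ and observer gain $l$, the estimation error $e_k = x_k - \hat{x}_k$ obeys
$$ e_{k+1} = (a_d - l c)\, e_k \quad\Rightarrow\quad e_k = (a_d - l c)^k e_0 , $$
so the error converges to zero iff $|a_d - lc| < 1$, and alternates in sign whenever $a_d - lc < 0$, which is exactly the behaviour of the curves plotted above.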
n_samples = 25
sample_time = 0.1
# True model
initial_position = Coordinate2d(2, 3)
initial_speed = Coordinate2d(10, 15)
true_model = simplified_models.TrueModel(initial_position, initial_speed, sample_time)
# Estimator
initial_position_estimation = Coordinate2d(0, 0)
initial_speed_estimation = Coordinate2d(0, 0)
observer_gain = [
[0, 0],
[0, 0],
[0, 0],
[0, 0]
]
sample_time = 0.1
observer = simplified_models.Observer(initial_position_estimation, initial_speed_estimation, sample_time, observer_gain)
# generate input for the system
acceleration = [Coordinate2d(random.uniform(0, 5), random.uniform(-5, 0)) for i in range(n_samples)]
simulation1 = GenerateBaseSimulation(true_model, observer, acceleration)
fig=plt.figure(figsize=(15, 7))
ax1 = plt.subplot2grid((2,5), (0,0), rowspan=2, colspan=2)
ax1.axis('equal')
ax2 = plt.subplot2grid((2,5), (0,2), colspan=3, autoscale_on=True)
ax3 = plt.subplot2grid((2,5), (1,2), colspan=3, autoscale_on=True)
du.generate_base_figure(simulation1, ax1, ax2, ax3)
ax1.legend()
ax2.legend()
ax3.legend()
fig.tight_layout()
Explanation: Observer example no feedback from output
End of explanation
observer_gain = [
[0.5, 0],
[0.5, 0],
[0, 0.3],
[0, 0.3]
]
true_model = simplified_models.TrueModel(initial_position, initial_speed, sample_time)
observer = simplified_models.Observer(initial_position_estimation, initial_speed_estimation, sample_time, observer_gain)
simulation1 = GenerateBaseSimulation(true_model, observer, acceleration)
fig=plt.figure(figsize=(15, 7))
ax1 = plt.subplot2grid((2,5), (0,0), rowspan=2, colspan=2)
ax1.axis('equal')
ax2 = plt.subplot2grid((2,5), (0,2), colspan=3, autoscale_on=True)
ax3 = plt.subplot2grid((2,5), (1,2), colspan=3, autoscale_on=True)
du.generate_base_figure(simulation1, ax1, ax2, ax3)
ax1.legend()
ax2.legend()
ax3.legend()
fig.tight_layout()
Explanation: Observer feedback from output
End of explanation
position_noise = [Coordinate2d(random.uniform(-2, 2), random.gauss(0, 1)) for i in range(n_samples)]
true_model = simplified_models.TrueModel(initial_position, initial_speed, sample_time)
observer = simplified_models.Observer(initial_position_estimation, initial_speed_estimation, sample_time, observer_gain)
simulation1 = GenerateBaseSimulation(true_model, observer, acceleration, output_noise=position_noise)
fig=plt.figure(figsize=(15, 7))
ax1 = plt.subplot2grid((2,5), (0,0), rowspan=2, colspan=2)
ax1.axis('equal')
ax2 = plt.subplot2grid((2,5), (0,2), colspan=3, autoscale_on=True)
ax3 = plt.subplot2grid((2,5), (1,2), colspan=3, autoscale_on=True)
du.generate_base_figure(simulation1, ax1, ax2, ax3, plot_measured_position=True)
ax1.legend()
ax2.legend()
ax3.legend()
fig.tight_layout()
Explanation: Measurement noise in the position of the car
End of explanation
input_noise = [Coordinate2d(random.uniform(-2, 2), random.gauss(0, 1)) for i in range(n_samples)]
true_model = simplified_models.TrueModel(initial_position, initial_speed, sample_time)
observer = simplified_models.Observer(initial_position_estimation, initial_speed_estimation, sample_time, observer_gain)
simulation1 = GenerateBaseSimulation(true_model, observer, acceleration,
output_noise=position_noise, input_noise=input_noise)
fig=plt.figure(figsize=(15, 7))
ax1 = plt.subplot2grid((2,5), (0,0), rowspan=2, colspan=2)
ax1.axis('equal')
ax2 = plt.subplot2grid((2,5), (0,2), colspan=3, autoscale_on=True)
ax3 = plt.subplot2grid((2,5), (1,2), colspan=3, autoscale_on=True)
du.generate_base_figure(simulation1, ax1, ax2, ax3, plot_measured_position=True)
ax1.legend()
ax2.legend()
ax3.legend()
fig.tight_layout()
Explanation: Measurement noise in the position of the car and in the acceleration
End of explanation |
15,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structures like these are encoded in "PDB" files
Entries are determined by columns in the file, not by spaces between the columns
Step1: Predict what the following will do
Step2: Write a program that | Python Code:
#record  atom_num  atom_name  amino_acid  chain  resid_num       x        y        z    occupancy  bfactor  atom_type
#ATOM      1086       CG         LYS        A       141       -4.812    9.683    2.584     1.00     26.78      N0
Explanation: Structures like these are encoded in "PDB" files
Entries are determined by columns in the file, not by spaces between the columns
End of explanation
line_frompdb = "ATOM 1086 N SER A 141 -4.812 9.683 2.584 1.00 26.78 N0"
print(line_frompdb[2:4])
Explanation: Predict what the following will do
End of explanation
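A quick way to check your prediction (safe regardless of the exact spacing of the line, since the record name "ATOM" always comes first): Python slices by character position, not by whitespace-separated fields.
# indexes 2 and 3 of "ATOM..." are "O" and "M", so the slice prints "OM"
print("ATOM"[2:4])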
#record  atom_num  atom_name  amino_acid  chain  resid_num       x        y        z    occupancy  bfactor  atom_type
#ATOM      1086       CG         LYS        A       141       -4.812    9.683    2.584     1.00     26.78      N0
Explanation: Write a program that:
+ Reads a pdb file (download 1stn.pdb)
+ Grabs all "ATOM" lines whose atom type is "CA"
+ Shifts the position of the molecule in x by +10 angstroms
+ Writes out a new pdb file containing these shifted atoms
+ If you want to check your work, download PyMOL and open up the files
End of explanation |
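One possible solution, shown only as a hedged sketch (file names follow the suggestions above; the fixed column positions are the standard PDB ones described by the comment block):
# Read 1stn.pdb, keep ATOM records whose atom name is CA, shift x by +10 angstroms, write a new file.
shifted = []
with open("1stn.pdb") as pdb_in:
    for line in pdb_in:
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            x = float(line[30:38]) + 10.0                      # x coordinate lives in columns 31-38
            shifted.append(line[:30] + "{:8.3f}".format(x) + line[38:])
with open("1stn_shifted.pdb", "w") as pdb_out:
    pdb_out.writelines(shifted)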
15,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License
Step3: The World Cup Problem, Part One
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
Let's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game.
To represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play.
Here's what the prior looks like.
Step4: Now we can create a Soccer object and initialize it with the prior Pmf
Step5: Here's the update after the first goal at 11 minutes.
Step6: Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes).
Step7: We can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability.
Step9: MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions
Step10: Here's the result for the World Cup problem.
Step11: And here's what the mixture looks like.
Step12: Exercise
Step13: MCMC
Building the MCMC model incrementally, start with just the prior distribution for lam.
Step14: Let's look at the prior predictive distribution for the time between goals (in games).
Step15: Now we're ready for the inverse problem, estimating lam based on the first observed gap.
Step16: And here's the inverse problem with both observed gaps.
Step17: And we can generate a predictive distribution for the time until the next goal (in games).
Step18: Exercise
Step22: And we can generate a predictive distribution for the time until the next goal (in games). | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite
import thinkbayes2
import thinkplot
import numpy as np
from scipy.special import gamma
import pymc3 as pm
Explanation: Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 12, 101)
pmf_gamma = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf_gamma)
thinkplot.decorate(title='Gamma PDF',
xlabel='Goals per game',
ylabel='PDF')
pmf_gamma.Mean()
class Soccer(Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: scoring rate in goals per game
data: interarrival time in minutes
x = data / 90
lam = hypo
like = lam * np.exp(-lam * x)
return like
Explanation: The World Cup Problem, Part One
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
Let's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game.
To represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play.
Here's what the prior looks like.
End of explanation
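As an aside on the likelihood used in Soccer.Likelihood above: for a Poisson scoring process with rate $\lambda$ (goals per game), the time $x$ (in games) until the next goal is exponentially distributed,
$$ p(x \mid \lambda) = \lambda e^{-\lambda x}, $$
which is exactly the expression returned by the Likelihood method after converting minutes to games with x = data / 90.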
prior = Soccer(pmf_gamma)
thinkplot.Pdf(prior)
thinkplot.decorate(title='Gamma prior',
xlabel='Goals per game',
ylabel='PDF')
prior.Mean()
Explanation: Now we can create a Soccer object and initialize it with the prior Pmf:
End of explanation
posterior1 = prior.Copy()
posterior1.Update(11)
thinkplot.Pdf(prior, color='0.7')
thinkplot.Pdf(posterior1)
thinkplot.decorate(title='Posterior after 1 goal',
xlabel='Goals per game',
ylabel='PDF')
posterior1.Mean()
Explanation: Here's the update after the first goal at 11 minutes.
End of explanation
posterior2 = posterior1.Copy()
posterior2.Update(12)
thinkplot.Pdf(prior, color='0.7')
thinkplot.Pdf(posterior1, color='0.7')
thinkplot.Pdf(posterior2)
thinkplot.decorate(title='Posterior after 2 goals',
xlabel='Goals per game',
ylabel='PDF')
posterior2.Mean()
from thinkbayes2 import MakePoissonPmf
Explanation: Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes).
End of explanation
rem_time = 90 - 23
metapmf = Pmf()
for lam, prob in posterior2.Items():
lt = lam * rem_time / 90
pred = MakePoissonPmf(lt, 15)
metapmf[pred] = prob
Explanation: We can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability.
End of explanation
def MakeMixture(metapmf, label='mix'):
Make a mixture distribution.
Args:
metapmf: Pmf that maps from Pmfs to probs.
label: string label for the new Pmf.
Returns: Pmf object.
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for x, p2 in pmf.Items():
mix[x] += p1 * p2
return mix
Explanation: MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions:
End of explanation
mix = MakeMixture(metapmf)
mix.Print()
Explanation: Here's the result for the World Cup problem.
End of explanation
thinkplot.Hist(mix)
thinkplot.decorate(title='Posterior predictive distribution',
xlabel='Goals scored',
ylabel='PMF')
Explanation: And here's what the mixture looks like.
End of explanation
# Solution
mix.Mean(), mix.ProbGreater(4)
Explanation: Exercise: Compute the predictive mean and the probability of scoring 5 or more additional goals.
End of explanation
cdf_gamma = pmf_gamma.MakeCdf();
mean_rate = 1.3
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
trace = pm.sample_prior_predictive(1000)
lam_sample = trace['lam']
print(lam_sample.mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(cdf_gamma, label='Prior grid')
thinkplot.Cdf(cdf_lam, label='Prior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: MCMC
Building the MCMC model incrementally, start with just the prior distribution for lam.
End of explanation
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam)
trace = pm.sample_prior_predictive(1000)
gap_sample = trace['gap']
print(gap_sample.mean())
cdf_lam = Cdf(gap_sample)
thinkplot.Cdf(cdf_lam)
thinkplot.decorate(xlabel='Time between goals (games)',
ylabel='Cdf')
Explanation: Let's look at the prior predictive distribution for the time between goals (in games).
End of explanation
first_gap = 11/90
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam, observed=first_gap)
trace = pm.sample(1000, tune=3000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
print(posterior1.Mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(posterior1.MakeCdf(), label='Posterior analytic')
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: Now we're ready for the inverse problem, estimating lam based on the first observed gap.
End of explanation
second_gap = 12/90
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
gap = pm.Exponential('gap', lam, observed=[first_gap, second_gap])
trace = pm.sample(1000, tune=2000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
print(posterior2.Mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(posterior2.MakeCdf(), label='Posterior analytic')
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: And here's the inverse problem with both observed gaps.
End of explanation
with model:
post_pred = pm.sample_ppc(trace, samples=1000)
gap_sample = post_pred['gap'].flatten()
print(gap_sample.mean())
cdf_gap = Cdf(gap_sample)
thinkplot.Cdf(cdf_gap)
thinkplot.decorate(xlabel='Time between goals (games)',
ylabel='Cdf')
Explanation: And we can generate a predictive distribution for the time until the next goal (in games).
End of explanation
with pm.Model() as model:
lam = pm.Gamma('lam', alpha=mean_rate, beta=1)
goals = pm.Poisson('goals', lam, observed=1)
trace = pm.sample(1000, tune=3000)
pm.traceplot(trace);
lam_sample = trace['lam']
print(lam_sample.mean())
cdf_lam = Cdf(lam_sample)
thinkplot.Cdf(cdf_lam, label='Posterior MCMC')
thinkplot.decorate(xlabel='Goal scoring rate',
ylabel='Cdf')
Explanation: Exercise: Use PyMC to write a solution to the second World Cup problem:
In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. How much evidence does this victory provide that Germany had the better team? What is the probability that Germany would win a rematch?
End of explanation
with model:
post_pred = pm.sample_ppc(trace, samples=1000)
goal_sample = post_pred['goals'].flatten()
print(goal_sample.mean())
pmf_goals = Pmf(goal_sample)
thinkplot.Hist(pmf_goals)
thinkplot.decorate(xlabel='Number of goals',
ylabel='Cdf')
from scipy.stats import poisson
class Soccer2(thinkbayes2.Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: goal rate in goals per game
data: goals scored in a game
return poisson.pmf(data, hypo)
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.decorate(xlabel='Goal-scoring rate (λ)',
ylabel='PMF')
pmf.Mean()
germany = Soccer2(pmf);
germany.Update(1)
def PredictiveDist(suite, duration=1, label='pred'):
Computes the distribution of goals scored in a game.
returns: new Pmf (mixture of Poissons)
metapmf = thinkbayes2.Pmf()
for lam, prob in suite.Items():
pred = thinkbayes2.MakePoissonPmf(lam * duration, 10)
metapmf[pred] = prob
mix = thinkbayes2.MakeMixture(metapmf, label=label)
return mix
germany_pred = PredictiveDist(germany, label='germany')
thinkplot.Hist(germany_pred, width=0.45, align='right')
thinkplot.Hist(pmf_goals, width=0.45, align='left')
thinkplot.decorate(xlabel='Predicted # goals',
ylabel='Pmf')
thinkplot.Cdf(germany_pred.MakeCdf(), label='Grid')
thinkplot.Cdf(Cdf(goal_sample), label='MCMC')
thinkplot.decorate(xlabel='Predicted # goals',
ylabel='Pmf')
Explanation: And we can generate a predictive distribution for the time until the next goal (in games).
End of explanation |
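The rematch part of the exercise is not computed explicitly above; a minimal sketch of one way to finish it (reusing Soccer2, pmf, PredictiveDist and germany_pred from the cells above, and assuming, as in the thinkbayes2 library, that Pmf.ProbGreater and Pmf.ProbLess also accept another Pmf) could be:
# Model Argentina the same way, updated with the 0 goals they scored in the final.
argentina = Soccer2(pmf)
argentina.Update(0)
argentina_pred = PredictiveDist(argentina, label='argentina')
# Probability that Germany out-scores Argentina in a rematch; split ties evenly as one simple convention.
p_win = germany_pred.ProbGreater(argentina_pred)
p_loss = germany_pred.ProbLess(argentina_pred)
p_tie = 1 - p_win - p_loss
print('P(Germany wins rematch) ~', p_win + p_tie / 2)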
15,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accessing Simulation data directly
The Python interface also allows users to access simulation data directly, without requiring file output. In this notebook we repeat the "Two Stream" instability example, but now without any file I/O. The simulation parameters are the same as in the original example, but now we do not define any diagnostics routine
Step1: Grid data
We can access the raw electric field, magnetic field and current density data of a Simulation object sim through the sim.emf.E[x|y|z], sim.emf.B[x|y|z] and sim.current.J[x|y|z] properties, respectively. Each of these properties will be a [nx] NumPy float32 array that can be used as usual
Step2: Particle data
We can access raw particle data using the particles property of each Species object. This property is a NumPy array of t_part structures containing
Step3: Charge Density
Besides the raw simulation data, we can also access diagnostic data that needs to be generated, such as the charge density. These diagnostics can be generated on the fly; to get the charge density from a Species object we use the charge() method
Step4: Phasespace Density
Similarly, we can get the phasespace density of a given Species object using the phasespace() method | Python Code:
# Using spectral EM1D code
import em1ds as zpic
import numpy as np
nx = 120
box = 4 * np.pi
dt = 0.08
tmax = 50.0
ppc = 500
ufl = [0.4, 0.0, 0.0]
uth = [0.001,0.001,0.001]
right = zpic.Species( "right", -1.0, ppc, ufl = ufl, uth = uth )
ufl[0] = -ufl[0]
left = zpic.Species( "left", -1.0, ppc, ufl = ufl, uth = uth )
# Initialize the simulation without diagnostics
sim = zpic.Simulation( nx, box, dt, species = [right,left] )
sim.emf.solver_type = 'PSATD'
# Run the simulation
sim.run( tmax )
Explanation: Accessing Simulation data directly
The Python interface also allows users to access simulation data directly, without requiring file output. In this notebook we repeat the "Two Stream" instability example, but now without any file I/O. The simulation parameters are the same as in the original example, but now we do not define any diagnostics routine:
End of explanation
import matplotlib.pyplot as plt
# Plot field values at the center of the cells
xmin = sim.emf.dx/2
xmax = sim.emf.box - sim.emf.dx/2
plt.plot(np.linspace(xmin, xmax, num = sim.nx), sim.emf.Ex )
plt.xlabel("$x_1$")
plt.ylabel("$E_1$")
plt.title("Longitudinal Electric Field\n t = {:g}".format(sim.t))
plt.grid(True)
plt.show()
sim.emf.Ex[10] = 20
plt.plot(np.linspace(xmin, xmax, num = sim.nx), sim.emf.Ex )
plt.xlabel("$x_1$")
plt.ylabel("$E_1$")
plt.title("Longitudinal Electric Field\n t = {:g}".format(sim.t))
plt.grid(True)
plt.show()
Explanation: Grid data
We can access the raw electric field, magnetic field and current density data of a Simulation object sim through the sim.emf.E[x|y|z], sim.emf.B[x|y|z] and sim.current.J[x|y|z] properties, respectively. Each of these properties will be a [nx] NumPy float32 array that can be used as usual:
End of explanation
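As a small illustration of treating these fields as ordinary NumPy arrays (an aside, not part of the original example), one can reduce them directly, e.g. to estimate the longitudinal field energy:
# Sum of Ex^2/2 over the grid times the cell size approximates the integral of the energy density.
Ex = sim.emf.Ex
print("Longitudinal field energy ~", 0.5 * np.sum(Ex**2) * sim.emf.dx)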
import matplotlib.pyplot as plt
# Simple function to convert particle positions
x = lambda s : (s.particles['ix'] + s.particles['x']) * s.dx
plt.plot(x(left), left.particles['ux'], '.', ms=1,alpha=0.2, label = "Left")
plt.plot(x(right), right.particles['ux'], '.', ms=1,alpha=0.2, label = "Right")
plt.xlabel("x1")
plt.ylabel("u1")
plt.title("u1-x1 phasespace\nt = {:g}".format(sim.t))
plt.legend()
plt.grid(True)
plt.show()
Explanation: Particle data
We can access raw particle data using the particles property of each Species object. This property is a NumPy array of t_part structures containing:
* ix - the particle cell
* x - the particle position inside the cell normalized to the cell size ( 0 <= x < 1 )
* ux, uy, uz - the particle generalized velocity in each direction
We can easily use this data to produce the phasespace plot for this simulation. Note that we had to convert the cell index / position to simulation position:
End of explanation
import matplotlib.pyplot as plt
charge = left.charge()
xmin = sim.dx/2
xmax = sim.box - sim.dx/2
plt.plot(np.linspace(xmin, xmax, num = sim.nx), left.charge() )
plt.xlabel("x1")
plt.ylabel("rho")
plt.title("Left beam charge density\nt = {:g}".format(sim.t))
plt.grid(True)
plt.show()
Explanation: Charge Density
Besides the raw simulation data, we can also access diagnostic data that needs to be generated, such as the charge density. These diagnostics can be generated on the fly; to get the charge density from a Species object we use the charge() method:
End of explanation
import matplotlib.pyplot as plt
nx = [120,128]
range = [[0,sim.box],[-1.5,1.5]]
pha = left.phasespace( ["x1", "u1"], nx, range )
plt.imshow( pha, interpolation = 'nearest', origin = 'lower',
extent = ( range[0][0], range[0][1], range[1][0], range[1][1] ),
aspect = 'auto')
plt.colorbar().set_label('density')
plt.xlabel("x1")
plt.ylabel("u1")
plt.title("u1-x1 phasespace density\nt = {:g}".format(sim.t))
plt.show()
Explanation: Phasespace Density
Similarly, we can get the phasespace density of a given Species object using the phasespace() method:
End of explanation |
15,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: XEB calibration
Step2: Select qubits
First we select a processor and calibration metric(s) to visualize the latest calibration report.
Note
Step3: Using this report as a guide, we select a good set of qubits.
Step4: An example random circuit on these qubits (used as the forward operations of the Loschmidt echo) is shown below.
Step5: Set up XEB calibration
Now we specify the cycle depths and other options for XEB calibration below. Note that all cirq.FSimGate parameters are characterized by default.
Step7: Run a Loschmidt echo benchmark
Note
Step8: Without calibration
First we run the Loschmidt echo without calibration.
Step9: With XEB calibration
Now we perform XEB calibration.
Step10: And run the XEB calibrated batch below.
Step11: Compare results
The next cell plots the results. | Python Code:
try:
import cirq
except ImportError:
!pip install --quiet cirq --pre
# The Google Cloud Project id to use.
project_id = "" #@param {type:"string"}
processor_id = "" #@param {type:"string"}
from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook
device_sampler = get_qcs_objects_for_notebook(project_id, processor_id)
if not device_sampler.signed_in:
raise Exception("Please setup project_id in this cell or set the `GOOGLE_CLOUD_PROJECT` env var to your project id.")
import cirq
from cirq.experiments import random_quantum_circuit_generation as rqcg
import cirq_google as cg
import matplotlib.pyplot as plt
import numpy as np
import tqdm
#@title Helper functions
from typing import Optional, Sequence
def create_random_circuit(
qubits: Sequence[cirq.GridQubit],
cycles: int,
twoq_gate: cirq.Gate = cirq.FSimGate(np.pi / 4, 0.0),
seed: Optional[int] = None,
) -> cirq.Circuit:
return rqcg.random_rotations_between_grid_interaction_layers_circuit(
qubits,
depth=cycles,
two_qubit_op_factory=lambda a, b, _: twoq_gate.on(a, b),
pattern=cirq.experiments.GRID_STAGGERED_PATTERN,
single_qubit_gates=[cirq.PhasedXPowGate(phase_exponent=p, exponent=0.5)
for p in np.arange(-1.0, 1.0, 0.25)],
seed=seed
)
def create_loschmidt_echo_circuit(
qubits: Sequence[cirq.GridQubit],
cycles: int,
twoq_gate: cirq.Gate = cirq.FSimGate(np.pi / 4, 0.0),
seed: Optional[int] = None,
) -> cirq.Circuit:
Returns a Loschmidt echo circuit using a random unitary U.
Args:
qubits: Qubits to use.
cycles: Depth of random rotations in the forward & reverse unitary.
twoq_gate: Two-qubit gate to use.
seed: Seed for circuit generation.
forward = create_random_circuit(qubits, cycles, twoq_gate, seed)
return forward + cirq.inverse(forward) + cirq.measure(*qubits, key="z")
def to_ground_state_prob(result: cirq.Result) -> float:
return np.mean(np.sum(result.measurements["z"], axis=1) == 0)
Explanation: XEB calibration: Example and benchmark
This tutorial shows a detailed example and benchmark of XEB calibration, a calibration technique introduced in the Calibration: Overview and API tutorial.
Disclaimer: The data shown in this tutorial is exemplary and not representative of the QCS in production.
Setup
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.
End of explanation
processor_id = "" #@param {type:"string"}
metrics = "parallel_p00_error, two_qubit_sqrt_iswap_gate_xeb_pauli_error_per_cycle" #@param {type:"string"}
metrics = [m.strip() for m in metrics.split(sep=",")]
from matplotlib.colors import LogNorm
_, axes = plt.subplots(
nrows=1, ncols=len(metrics), figsize=(min(16, 8 * len(metrics)), 7)
)
calibration = cg.get_engine_calibration(processor_id=processor_id)
for i, metric in enumerate(metrics):
calibration.heatmap(metric).plot(
ax=axes[i] if len(metrics) > 1 else axes,
collection_options={"norm": LogNorm()},
annotation_format="0.3f",
annotation_text_kwargs = {"size": "small"}
);
Explanation: Select qubits
First we select a processor and calibration metric(s) to visualize the latest calibration report.
Note: All calibration metrics are defined in this guide. The parallel_p00_error and/or parallel_p11_error metrics are good to eliminate qubits with high readout errors.
End of explanation
# Select qubit indices here.
qubit_indices = [
(2, 5), (2, 6), (2, 7), (2, 8), (3, 8),
(3, 7), (3, 6), (3, 5), (4, 5), (4, 6)
]
qubits = [cirq.GridQubit(*idx) for idx in qubit_indices]
Explanation: Using this report as a guide, we select a good set of qubits.
End of explanation
create_random_circuit(qubits, cycles=10, seed=1)
Explanation: An example random circuit on these qubits (used as the forward operations of the Loschmidt echo) is shown below.
End of explanation
xeb_options = cg.LocalXEBPhasedFSimCalibrationOptions(
cycle_depths=(5, 25, 50, 100),
n_processes=1,
fsim_options=cirq.experiments.XEBPhasedFSimCharacterizationOptions(
characterize_theta=False,
characterize_zeta=True,
characterize_chi=True,
characterize_gamma=True,
characterize_phi=False,
),
)
Explanation: Set up XEB calibration
Now we specify the cycle depths and other options for XEB calibration below. By default all cirq.FSimGate parameters are characterized; here we only characterize zeta, chi, and gamma (theta and phi are left fixed).
End of explanation
# Set up the Loschmidt echo experiment.
cycle_values = range(0, 40 + 1, 4)
nreps = 20_000
trials = 10
sampler = cg.get_engine_sampler(
project_id=project_id,
processor_id=processor_id,
gate_set_name="sqrt_iswap",
)
loschmidt_echo_batch = [
create_loschmidt_echo_circuit(qubits, cycles=c, seed=trial)
for trial in range(trials) for c in cycle_values
]
Explanation: Run a Loschmidt echo benchmark
Note: See the Loschmidt echo tutorial for background about this benchmark.
End of explanation
# Run on the engine.
raw_results = sampler.run_batch(programs=loschmidt_echo_batch, repetitions=nreps)
# Convert measurements to survival probabilities.
raw_probs = np.array(
[to_ground_state_prob(*res) for res in raw_results]
).reshape(trials, len(cycle_values))
Explanation: Without calibration
First we run the Loschmidt echo without calibration.
End of explanation
# Get characterization requests.
characterization_requests = cg.prepare_characterization_for_operations(loschmidt_echo_batch, xeb_options)
# Characterize the requests on the engine.
characterizations = cg.run_calibrations(characterization_requests, sampler)
# Make compensations to circuits in the Loschmidt echo batch.
xeb_calibrated_batch = [
cg.make_zeta_chi_gamma_compensation_for_moments(circuit, characterizations).circuit
for circuit in loschmidt_echo_batch
]
Explanation: With XEB calibration
Now we perform XEB calibration.
End of explanation
# Run on the engine.
xeb_results = sampler.run_batch(programs=xeb_calibrated_batch, repetitions=nreps)
# Convert measurements to survival probabilities.
xeb_probs = np.array(
[to_ground_state_prob(*res) for res in xeb_results]
).reshape(trials, len(cycle_values))
Explanation: And run the XEB calibrated batch below.
End of explanation
plt.semilogy(cycle_values, np.average(raw_probs, axis=0), lw=3, label="No calibration")
plt.semilogy(cycle_values, np.average(xeb_probs, axis=0), lw=3, label="XEB calibration")
plt.xlabel("Cycles")
plt.ylabel("Survival probability")
plt.legend();
Explanation: Compare results
The next cell plots the results.
End of explanation |
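As a rough way to quantify the improvement, one can additionally fit an exponential decay a * p**cycles to each averaged curve and compare the implied error per cycle. This is only a sketch and not part of the original batches above; the fit_decay helper and the use of scipy.optimize.curve_fit are introduced here for illustration.
from scipy.optimize import curve_fit

def fit_decay(cycles, probs):
    """Fit probs ~ a * p**cycles and return the fitted (a, p)."""
    (a, p), _ = curve_fit(lambda n, a, p: a * p ** n, cycles, probs, p0=(1.0, 0.99))
    return a, p

_, p_raw = fit_decay(list(cycle_values), np.average(raw_probs, axis=0))
_, p_xeb = fit_decay(list(cycle_values), np.average(xeb_probs, axis=0))
print(f"Estimated error per cycle without calibration: {1 - p_raw:.4f}")
print(f"Estimated error per cycle with XEB calibration: {1 - p_xeb:.4f}")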
15,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
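For illustration only, a completed cell follows the pattern given in the comments above; the model name below is a hypothetical placeholder, not the value for this institute.
# Hypothetical example of a filled-in cell (placeholder value, for illustration only):
# DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# DOC.set_value("CICE 5.1.2")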
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution for which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
15,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the data
Step1: Fix the random number generator for reproducibility
Step2: Homework!
Split the data into training and hold-out sets
Step3: We will measure quality with the mean squared error metric
Step4: <div class="panel panel-info" style="margin
Step5: <div class="panel panel-info" style="margin
Step6: <div class="panel panel-info" style="margin | Python Code:
from sklearn.datasets import load_boston
bunch = load_boston()
print(bunch.DESCR)
X, y = pd.DataFrame(data=bunch.data, columns=bunch.feature_names.astype(str)), bunch.target
X.head()
Explanation: Load the data
End of explanation
SEED = 22
np.random.seed(SEED)  # call the seeding function; plain assignment would just overwrite it
Explanation: Fix the random number generator for reproducibility:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
Explanation: Homework!
Split the data into training and hold-out sets:
End of explanation
from sklearn.metrics import mean_squared_error
Explanation: We will measure quality with the mean squared error metric:
End of explanation
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
clf = LinearRegression()
clf.fit(X_train, y_train);
print('Resulting mean error: %5.4f' % \
(-np.mean(cross_val_score(clf, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задача 1.</h3>
</div>
<div class="panel">
Обучите <b>LinearRegression</b> из пакета <b>sklearn.linear_model</b> на обучающей выборке (<i>X_train, y_train</i>) и измерьте качество на <i>X_test</i>.
<br>
<br>
<i>P.s. Ошибка должна быть в районе 20. </i>
</div>
</div>
End of explanation
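For comparison, here is a small sketch (not part of the original homework) that scores the fitted clf directly on the hold-out set with the mean_squared_error metric imported earlier:
holdout_mse = mean_squared_error(y_test, clf.predict(X_test))
print('Hold-out MSE for LinearRegression: %5.4f' % holdout_mse)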
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
ss_X = StandardScaler()
ss_y = StandardScaler()
X_scaled = ss_X.fit_transform(X_train)
# StandardScaler expects 2D input, so reshape the 1D target before scaling it
y_scaled = ss_y.fit_transform(y_train.reshape(-1, 1)).ravel()
sgd = SGDRegressor()
sgd.fit(X_scaled, y_scaled);
# Note: this error is measured on the *scaled* target, so it is not directly
# comparable to the unscaled MSE from Task 1 -- that is the catch.
print('Resulting mean error (scaled target): %5.4f' % \
      (-np.mean(cross_val_score(sgd, X_scaled, y_scaled, cv=5, scoring='neg_mean_squared_error'))))
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задача 2. (с подвохом)</h3>
</div>
<div class="panel">
Обучите <b>SGDRegressor</b> из пакета <b>sklearn.linear_model</b> на обучающей выборке (<i>X_train, y_train</i>) и измерьте качество на <i>X_test</i>.
</div>
</div>
End of explanation
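One way to keep the SGD error in the same units as Task 1 is to wrap the scaler and the regressor in a pipeline, so that cross-validation scores stay on the original target scale. This is a sketch added for illustration and is not part of the original homework:
from sklearn.pipeline import make_pipeline

sgd_pipe = make_pipeline(StandardScaler(), SGDRegressor(random_state=SEED))
sgd_pipe.fit(X_train, y_train)
print('SGD pipeline mean CV error: %5.4f' %
      (-np.mean(cross_val_score(sgd_pipe, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))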
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import RidgeCV
############Ridge
params = {
'alpha': [10**x for x in range(-2,3)]
}
from sklearn.linear_model import Ridge
gsR = RidgeCV() #GridSearchCV(Ridge(), param_grid=params)
gsR.fit(X_train, y_train);
print('Resulting mean error (Ridge): %5.4f' % \
(-np.mean(cross_val_score(gsR, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
############Lasso
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoCV
gsL = GridSearchCV(Lasso(), param_grid=params) # LassoCV() is slower
gsL.fit(X_train, y_train);
print('Resulting mean error (Lasso): %5.4f' % \
(-np.mean(cross_val_score(gsL, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import ElasticNetCV
gsE = GridSearchCV(ElasticNet(), param_grid=params) # ElasticNetCV() can be swapped in directly, but is less accurate here
gsE.fit(X_train, y_train);
print('Resulting mean error (ElasticNet): %5.4f' % \
(-np.mean(cross_val_score(gsE, X_test, y_test, cv=5, scoring='neg_mean_squared_error'))))
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Задача 3.</h3>
</div>
<div class="panel">
Try all the remaining classes:
<ul>
<li>Ridge
<li>Lasso
<li>ElasticNet
</ul>
<br>
As you already know, they use the regularization parameter <b>alpha</b>. Tune it both with <b>GridSearchCV</b> and with the ready-made <b>-CV</b> classes (<b>RidgeCV</b>, <b>LassoCV</b>, etc.).
<br><br>
And finally, find the most accurate linear model!
</div>
</div>
End of explanation |
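To actually pick a winner, one option (a sketch added here; the candidates dictionary is introduced purely for illustration) is to compare the tuned estimators on the hold-out set:
candidates = {'RidgeCV': gsR, 'Lasso + GridSearchCV': gsL, 'ElasticNet + GridSearchCV': gsE}
for name, model in candidates.items():
    mse = mean_squared_error(y_test, model.predict(X_test))
    print('%-25s hold-out MSE: %6.3f' % (name, mse))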
15,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discrete Random Variables and Sampling
George Tzanetakis, University of Victoria
In this notebook we will explore discrete random variables and sampling. After defining a helper class and associated functions we will be able to create both symbolic and numeric random variables and generate samples from them.
A random variable class
Define a helper random variable class based on the scipy discrete random variable functionality providing both numeric and symbolic RVs. You don't need to look at the implementation - the usage will be obvious through the examples below.
Step1: Let's first create some random samples of symbolic random variables corresponding to a coin and a dice
Step2: Now let's look at a numeric random variable corresponding to a dice so that we can more easily make plots and histograms
Step3: Let's now look at a histogram of these generated samples. Notice that even with a few hundred samples the bars are not equal length, so the calculated frequencies only approximate the probabilities used to generate them
Step4: Let's now plot the cumulative histogram of the samples. By observing the cumulative histogram you can see a possible way to sample any discrete probability distribution. The idea is to use a random number between 0.0 and 1.0 (programming languages typically provide a uniform random number generator that can be used for this purpose) and use it to "index" the y-axis of the histogram. If the number is between 0.0 and the height of the first bar then output 1, if it is between the height of the first bar and the second bar output 2, and so on. This is called the inverse transform method for sampling a distribution. It also works for arbitrary continuous probability densities for which the cumulative distribution can be computed either analytically or approximated numerically. (Note: there are other more efficient ways of sampling continuous densities)
Step5: Let's now estimate the frequency of the event roll even number in different ways.
First let's directly count the number of even numbers in the generated samples. Then let's
take the sum of the individually estimated probabilities.
Step6: Notice that we can always estimate the probability of an event by simply counting how many times it occurs in the samples of an experiment. However, if we have multiple events we are interested in, then it can be easier to calculate the probabilities of the values of individual random variables and then use the rules of probability to estimate the probabilities of more complex events.
Generating a random melody
Just for fun let's generate a random melody. We will assume all notes have equal duration, there are no rests, and each note is equally likely. Using music21, a Python package for symbolic music processing, we can generate a simple "score" using a simplified notation called tiny notation and then render it as a MIDI file that we can listen to.
First let's check that we can hear a melody specified in tiny notation.
Step7: Now we can create a random variable with all 12 notes in a chromatic scale and generate a random melody of 10 notes by sampling it.
Step8: Using music21 it is possible to load a particular Bach Chorale and play it using MIDI.
Step9: We can parse a whole bunch of chorales (in this case all the ones that are in C major) and derive probabilities for each note of the chromatic scale. That way we can estimate the probability density of notes and make a very simple "chorale"-like random melody generator. Listen to how it sounds compared to the uniform random atonal melodies we heard before. I calculate the probabilities of each pitch class in chorale_probs and also compute a transition matrix that we will use later when exploring Markov Chains. This takes a few minutes to process so be patient. There are 405 chorales and 23 are in C major. You will see them slowly printed at the output. Wait until you see that the processing is finished - takes about a minute.
Step10: We have calculated the probabilities of each pitch class in our data collection (chorales in C Major) as a 12-element vector (chorale_probs). We also have calculated the transition matrix that calculates the frequency of each transition from one note to another for a total of 12 by 12.
Step11: Let's listen and view the score of a melody generated using the pitch class probabilities. You can see that it is somewhat more ordered and musical than the random melody above. However it does not take at all into account any sequencing information.
Step12: We can also define a Markov Chain based on the transition probabilities we have calculated. This provides a better modeling of sequencing information as can be heard when listening/viewing the randomly generated melody. Markov chains and other probabilistic models for sequences are described in more detail in another notebook. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import stats
import numpy as np
class Random_Variable:
def __init__(self, name, values, probability_distribution):
self.name = name
self.values = values
self.probability_distribution = probability_distribution
if all(type(item) is np.int64 for item in self.values):
self.type = 'numeric'
self.rv = stats.rv_discrete(name = name,
values = (values, probability_distribution))
elif all(type(item) is str for item in values):
self.type = 'symbolic'
self.rv = stats.rv_discrete(name = name,
values = (np.arange(len(values)), probability_distribution))
self.symbolic_values = values
else:
self.type = 'undefined'
def sample(self,size):
if (self.type =='numeric'):
return self.rv.rvs(size=size)
elif (self.type == 'symbolic'):
numeric_samples = self.rv.rvs(size=size)
mapped_samples = [self.values[x] for x in numeric_samples]
return mapped_samples
Explanation: Discrete Random Variables and Sampling
George Tzanetakis, University of Victoria
In this notebook we will explore discrete random variables and sampling. After defining a helper class and associated functions we will be able to create both symbolic and numeric random variables and generate samples from them.
A random variable class
Define a helper random variable class based on the scipy discrete random variable functionality providing both numeric and symbolic RVs. You don't need to look at the implementation - the usage will be obvious through the examples below.
End of explanation
values = ['H', 'T']
probabilities = [0.5, 0.5]
coin = Random_Variable('coin', values, probabilities)
samples = coin.sample(50)
print(samples)
values = ['1', '2', '3', '4', '5', '6']
probabilities = [1/6.] * 6
dice = Random_Variable('dice', values, probabilities)
samples = dice.sample(10)
print(samples)
Explanation: Let's first create some random samples of symbolic random variables corresponding to a coin and a dice
End of explanation
values = np.arange(1,7)
probabilities = [1/6.] * 6
dice = Random_Variable('dice', values, probabilities)
samples = dice.sample(200)
plt.stem(samples, markerfmt= ' ')
Explanation: Now let's look at a numeric random variable corresponding to a dice so that we can more easily make plots and histograms
End of explanation
plt.figure()
plt.hist(samples,bins=[1,2,3,4,5,6,7],normed=1, rwidth=0.5,align='left');
Explanation: Let's now look at a histogram of these generated samples. Notice that even with 200 samples the bars are not of equal length, so the calculated frequencies only approximate the probabilities used to generate them
End of explanation
plt.hist(samples,bins=[1,2,3,4,5,6,7],normed=1, rwidth=0.5,align='left', cumulative=True);
Explanation: Let's now plot the cumulative histogram of the samples. By observing the cumulative histogram you can see a possible way to sample any discrete probability distribution. The idea is to use a random number between 0.0 and 1.0 (programming languages typically provide a uniform random number generator that can be used for this purpose) and use it to "index" the y-axis of the histogram. If the number is between 0.0 and the height of the first bar then output 1, if it is between the height of the first bar and the second bar output 2, and so on. This is called the inverse transform method for sampling a distribution. It also works for arbitrary continuous probability densities for which the cumulative distribution can be computed either analytically or approximated numerically. (Note: there are other more efficient ways of sampling continuous densities)
End of explanation
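# An added illustrative sketch (not in the original notebook) of the inverse transform
# method described above: draw a uniform number in [0, 1) and find the first bar of the
# cumulative histogram that exceeds it.
def inverse_transform_sample(values, probabilities, size):
    cdf = np.cumsum(probabilities)                    # heights of the cumulative histogram
    u = np.random.uniform(0.0, 1.0, size)             # uniform numbers used to "index" the y-axis
    indices = np.searchsorted(cdf, u, side='right')   # first bar whose height exceeds u
    return np.asarray(values)[indices]

print(inverse_transform_sample(np.arange(1, 7), [1/6.] * 6, 10))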
# we can estimate event probabilities directly with list comprehensions over the samples
est_even = len([x for x in samples if x%2==0]) / len(samples)
est_2 = len([x for x in samples if x==2]) / len(samples)
est_4 = len([x for x in samples if x==4]) / len(samples)
est_6 = len([x for x in samples if x==6]) / len(samples)
print(est_even)
# Let's print some estimates
print('Estimates of 2,4,6 = ', (est_2, est_4, est_6))
print('Direct estimate = ', est_even)
print('Sum of estimates = ', est_2 + est_4 + est_6)
print('Theoretical value = ', 0.5)
Explanation: Let's now estimate the frequency of the event roll even number in different ways.
First let's directly count the number of even numbers in the generated samples. Then let's
take the sum of the individually estimated probabilities of rolling 2, 4 and 6.
End of explanation
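# An added sketch: the theoretical probability of the event can also be read off the
# pmf of the underlying scipy random variable wrapped by the numeric 'dice' defined above.
theoretical_even = sum(dice.rv.pmf(v) for v in [2, 4, 6])
print('Theoretical value from the pmf = ', theoretical_even)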
import music21 as m21
from music21 import midi
littleMelody = m21.converter.parse("tinynotation: 3/4 c4 d8 f g16 a g f#")
sp = midi.realtime.StreamPlayer(littleMelody)
littleMelody.show()
sp.play()
Explanation: Notice that we can always estimate the probability of an event by simply counting how many times it occurs in the samples of an experiment. However, if we are interested in multiple events, it can be easier to calculate the probabilities of the values of the individual random variables and then use the rules of probability to estimate the probabilities of more complex events.
Generating a random melody
Just for fun let's generate a random melody. We will assume all notes have equal duration, there are no rests, and each note is equally likely. Using music21, a python package for symbolic music processing, we can generate a simple "score" using a simplified notation called tiny notation and then render it as a MIDI file that we can listen.
First let's check that we can hear a melody specified in tiny notation.
End of explanation
values = ['c4','c#4','d4','d#4', 'e4', 'f4','f#4', 'g4','g#4','a4','a#4','b4']
probabilities = [1/12.]*12
note_rv = Random_Variable('note', values, probabilities)
print(note_rv.sample(10))
note_string = 'tinynotation: 4/4 ; ' + " ".join(note_rv.sample(10))
print(note_string)
randomMelody = m21.converter.parse(note_string)
sp = midi.realtime.StreamPlayer(randomMelody)
randomMelody.show()
sp.play()
Explanation: Now we can create a random variable with all 12 notes in a chromatic scale and generate a random melody of 10 notes by sampling it.
End of explanation
bachPiece = m21.corpus.parse('bwv66.6')
sp = midi.realtime.StreamPlayer(bachPiece)
bachPiece.show()
sp.play()
Explanation: Using music21 it is possible to load a particular Bach Chorale and play it using MIDI.
End of explanation
chorales = m21.corpus.search('bach', fileExtensions='xml')
print(chorales)
chorale_probs = np.zeros(12)
totalNotes = 0
transition_matrix = np.ones([12,12]) # hack of adding one to avoid 0 counts
j = 0
for (i,chorale) in enumerate(chorales):
score = chorale.parse()
analyzedKey = score.analyze('key')
# only consider C major chorales
if (analyzedKey.mode == 'major') and (analyzedKey.tonic.name == 'C'):
j = j + 1
print(j,chorale)
score.parts[0].pitches
for (i,p) in enumerate(score.parts[0].pitches):
chorale_probs[p.pitchClass] += 1
if i < len(score.parts[0].pitches)-1:
transition_matrix[p.pitchClass][score.parts[0].pitches[i+1].pitchClass] += 1
totalNotes += 1
chorale_probs /= totalNotes
transition_matrix /= transition_matrix.sum(axis=1,keepdims=1)
print('Finished processing')
Explanation: We can parse a whole bunch of chorales (in this case all the ones that are in C major) and derive probabilities for each note of the chromatic scale. That way we can estimate the probability density of notes and make a very simple "chorale"-like random melody generator. Listen to how it sounds compared to the uniform random atonal melodies we heard before. I calculate the probabilities of each pitch class in chorale_probs and also compute a transition matrix that we will use later when exploring Markov Chains. This takes a little while to process, so be patient. There are 405 chorales, and 23 of them are in C major. You will see them slowly printed at the output. Wait until you see that the processing is finished - it takes about a minute.
End of explanation
print("Note probabilities:")
print(chorale_probs)
print("Transition matrix:")
print(transition_matrix)
Explanation: We have calculated the probabilities of each pitch class in our data collection (chorales in C Major) as a 12-element vector (chorale_probs). We have also calculated a 12-by-12 transition matrix that counts the frequency of each transition from one note to the next.
End of explanation
values = ['c4','c#4','d4','d#4', 'e4', 'f4','f#4', 'g4','g#4','a4','a#4','b4']
note_rv = Random_Variable('note', values, chorale_probs)
note_string = 'tinynotation: 4/4 ; ' + " ".join(note_rv.sample(16))
print(note_string)
randomChoraleMelody = m21.converter.parse(note_string)
sp = midi.realtime.StreamPlayer(randomChoraleMelody)
randomChoraleMelody.show()
sp.play()
Explanation: Let's listen and view the score of a melody generated using the pitch class probabilities. You can see that it is somewhat more ordered and musical than the random melody above. However, it does not take any sequencing information into account.
End of explanation
def markov_chain(transmat, state, state_names, samples):
(rows, cols) = transmat.shape
rvs = []
values = list(np.arange(0,rows))
# create random variables for each row of transition matrix
for r in range(rows):
rv = Random_Variable("row" + str(r), values, transmat[r])
rvs.append(rv)
# start from initial state given as an argument and then sample the appropriate
# random variable based on the state following the transitions
states = []
for n in range(samples):
state = rvs[state].sample(1)[0]
states.append(state_names[state])
return states
note_string = 'tinynotation: 4/4 ; ' + " ".join(markov_chain(transition_matrix,0,values, 16))
print(note_string)
markovChoraleMelody = m21.converter.parse(note_string)
markovChoraleMelody.show()
sp = midi.realtime.StreamPlayer(markovChoraleMelody)
sp.play()
Explanation: We can also define a Markov Chain based on the transition probabilities we have calculated. This provides a better modeling of sequencing information as can be heard when listening/viewing the randomly generated melody. Markov chains and other probabilistic models for sequences are described in more detail in another notebook.
End of explanation |
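# An added sketch (assumes the row-stochastic transition_matrix and chorale_probs computed
# above): repeatedly applying the transition matrix to a uniform start vector gives an
# estimate of the stationary distribution of the Markov chain, which should roughly
# resemble the directly estimated pitch-class probabilities.
pi = np.ones(12) / 12.0
for _ in range(1000):
    pi = pi.dot(transition_matrix)
print(np.round(pi, 3))
print(np.round(chorale_probs, 3))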
15,865 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
pandas version: 1.2 | Problem:
import pandas as pd
df = pd.DataFrame([(.21, .3212), (.01, .61237), (.66123, .03), (.21, .18),(pd.NA, .18)],
columns=['dogs', 'cats'])
def g(df):
df['dogs'] = df['dogs'].apply(lambda x: round(x,2) if str(x) != '<NA>' else x)
return df
df = g(df.copy()) |
15,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Double Multiple Stripe Analysis (2MSA) for Single Degree of Freedom (SDOF) Oscillators
<img src="../../../../figures/intact-damaged.jpg" width="500" align="middle">
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If default parameters should be assumed instead, please set the sdof_hysteresis variable to "Default"
Step2: Load ground motion records
As far as the ground motions to be used in the Double Multiple Stripe Analysis are concerned, the following inputs are required
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model types.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
Step4: Calculate fragility function
In order to obtain the fragility model, it is necessary to input the location of the damage model (damage_model), using the format described in the RMTK manual. It is also necessary to input the damping value of the structure(s) under analysis and the value of the period (T) to be considered in the regression analysis. The method allows degradation to be considered or ignored. Finally, if desired, it is possible to save the resulting fragility model in a .csv file.
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
from rmtk.vulnerability.common import utils
import double_MSA_on_SDOF
import numpy
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
import MSA_utils
%matplotlib inline
Explanation: Double Multiple Stripe Analysis (2MSA) for Single Degree of Freedom (SDOF) Oscillators
<img src="../../../../figures/intact-damaged.jpg" width="500" align="middle">
End of explanation
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/2MSA/capacity_curves.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If default parameters should be assumed instead, please set the sdof_hysteresis variable to "Default"
End of explanation
gmrs_folder = '../../../../../rmtk_data/MSA_records'
number_models_in_DS = 1
no_bins = 2
no_rec_bin = 10
damping_ratio = 0.05
minT = 0.1
maxT = 2
filter_aftershocks = 'FALSE'
Mw_multiplier = 0.92
waveform_path = '../../../../../rmtk_data/2MSA/waveform.csv'
gmrs = utils.read_gmrs(gmrs_folder)
gmr_characteristics = MSA_utils.assign_Mw_Tg(waveform_path, gmrs, Mw_multiplier,
damping_ratio, filter_aftershocks)
#utils.plot_response_spectra(gmrs,minT,maxT)
Explanation: Load ground motion records
As far as the ground motions to be used in the Double Multiple Stripe Analysis are concerned, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
5. number_models_in_DS: the number of models to populate each initial damage state with.
If a certain relationship is to be maintained between the ground motion characteristics of the mainshock and the aftershock, the variable filter_aftershocks should be set to TRUE and the following parameters should be defined:
1. Mw_multiplier: the ratio between the aftershock magnitude and the mainshock magnitude.
2. waveform_path: the path to the file containing the magnitude and predominant period of each gmr;
Otherwise the variable filter_aftershocks should be set to FALSE and the aforementioned parameters can be left empty.
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "/Users/chiaracasotto/GitHub/rmtk_data/2MSA/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model types.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
End of explanation
degradation = False
record_scaled_folder = "../../../../../rmtk_data/2MSA/Scaling_factors"
msa = MSA_utils.define_2MSA_parameters(no_bins,no_rec_bin,record_scaled_folder,filter_aftershocks)
PDM, Sds, gmr_info = double_MSA_on_SDOF.calculate_fragility(
capacity_curves, hysteresis, msa, gmrs, gmr_characteristics,
damage_model, damping_ratio,degradation, number_models_in_DS)
Explanation: Calculate fragility function
In order to obtain the fragility model, it is necessary to input the location of the damage model (damage_model), using the format described in the RMTK manual. It is also necessary to input the damping value of the structure(s) under analysis and the value of the period (T) to be considered in the regression analysis. The method allows degradation to be considered or ignored. Finally, if desired, it is possible to save the resulting fragility model in a .csv file.
End of explanation
IMT = 'Sa'
T = 0.47
#T = numpy.arange(0.4,1.91,0.01)
regression_method = 'max likelihood'
fragility_model = MSA_utils.calculate_fragility_model_damaged( PDM,gmrs,gmr_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
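# A generic illustrative sketch (added here; NOT the RMTK implementation) of what fitting a
# lognormal CDF fragility curve P(exceedance | IM) = Phi(ln(IM / theta) / beta) by least
# squares looks like. The IM levels and fractions of exceedance below are hypothetical.
from scipy.stats import norm
from scipy.optimize import curve_fit
im_levels = numpy.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # hypothetical IM stripes
frac_exceed = numpy.array([0.0, 0.1, 0.3, 0.6, 0.8, 0.9]) # hypothetical observed fractions
def lognormal_cdf(im, theta, beta):
    return norm.cdf(numpy.log(im / theta) / beta)
(theta_hat, beta_hat), _ = curve_fit(lognormal_cdf, im_levels, frac_exceed, p0=[0.5, 0.4])
print('median = %.3f, dispersion = %.3f' % (theta_hat, beta_hat))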
minIML, maxIML = 0.01, 4
MSA_utils.plot_fragility_model(fragility_model,damage_model,minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
output_type = "csv"
output_path = "../../../../../rmtk_data/2MSA/"
minIML, maxIML = 0.01, 4
tax = 'RC'
MSA_utils.save_mean_fragility(fragility_model,damage_model,tax,output_type,output_path,minIML, maxIML)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
15,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style="float
Step1: Let us take a sneak peek at the data
Step2: What is the size of the dataset?
Step3: Now we see that there are different models of hard disks, let us list them
<img style="float
Step4: let us see how many models are there in total
Step5: <img style="float
Step6: Sort and plot
Step7: Question 2. Find failures for hard disk models
Step8: Question 3. How do you compute failure rate for a model
Now let us express failures / total number of hard disks as a ratio. This will give us an understanding of the models and their failure behavior.
To get that data, instead of computing it again, we can join the two data frames that were previously computed
and compute the ratio
Step9: let us see, out of the total hard disks for each model, how many failed and how many did not
Step10: now let us compute the ratio of failed hard disks to the total number of hard disks (failure/total_HD) for each model
Step11: The higher the ratio value is, the more prone the model is to failure
Step12: Now we know which models fail the most, let us introduce a new feature in our analysis, capacity.
We are going feature by feature; the reason is that, as we add features that add value to the outcome, we see how our understanding of the data starts to change.
Let us look at the capacity
Step13: Question 4. Given a model and capacity bytes, what does failure count look like
Step14: Looking at this chart can you tell what is not being represented right?
We have repeated entries for the same capacity, and this really does not give us insight into the relation between capacity and the models.
Step15: we see that for some models and their respective capacities we do not have a fail count, so let's fill those with 0
Step16: This heat map gives us a better understanding of model, capacity vs failure
Step17: The above charts give us an explanation of which models failed the most, which models had the most number of hard disks running , the ratio of hard disk
Step18: Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
Step19: <img style="float
Step20: Now what can we do with this data? Is this useful? What can I generate from the above data that gives
me a little more insight?
We can compute, for each capacity, the average running time of the hard disks that failed and of those that did not
Step21: Question 7. How about using the hours (SMART_9) column now and correlating it with failure
Step22: Now we want to know, for a given hard disk and capacity, how long the hard disk ran
Step23: Question 8. Given the data, identify the model and capacity of the hard disk to buy based on how long it runs
Step24: Let us convert bytes to gigabytes and round it to the nearest number
Step25: The above visualization is confusing as the bars reflect combination of failure and hours count | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize']=15,10
df = pd.read_csv('data/data.csv')
Explanation: <img style="float:center" src="img/explore.jpg" width=300/>
Exploring the data
When we look at spreadsheets or large amounts of data, it's hard for us to understand what is really happening. But when we visually interpret the data, everything starts making sense.
<img style="float::left" src="img/explore-reason.png" />
Question 1. Find the total number of hard disks for a given model
Question 2. Find total failures for hard disk models
Question 3. How do you compute failure rate for a model
Question 4. Given a model and capacity bytes, what does failure count look like
Question 5. Let us count how many days each hard disk ran
Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
Question 7. How about using the hours (SMART_9) column now and correlating it with failure
Question 8. Given the data, identify the model and capacity of the hard disk to buy based on how long it runs
Step by step approach
First let us look at our data
End of explanation
df.head()
Explanation: Let us take a sneak peek at the data
End of explanation
df.shape
Explanation: What is the size of the dataset?
End of explanation
df_model = pd.DataFrame(df.model.unique(),columns=['model'])
df_model.head()
df_model.count()[0]
Explanation: Now we see that there are different models of hard disks, let us list them
<img style="float:center" src="img/distinct.gif" />
End of explanation
print "Total number of distinct models : "+ str(df_model.count()[0])
# Exercise 1: Find the distinct number of serial numbers
# Exercise 2: Find the distinct number of capacity bytes
Explanation: let us see how many models are there in total
End of explanation
df_model_serial = pd.DataFrame(df.groupby(['model']).serial.nunique())
df_model_serial.head()
df_model_serial = df_model_serial.reset_index()
df_model_serial.head()
df_model_serial.columns = ['model','total_HD']
df_model_serial.head(39)
df_model_serial.plot(kind="barh",x="model",y="total_HD")
Explanation: <img style="float:center" src="img/group-by.gif" />
Question 1. Find the total number of hard disks for a given model
Now let us see how many hard disks there are for each model and visualize it.
We see that the serial number identifies an individual hard disk and serial numbers are related to a model, i.e. multiple serial numbers belong to one type of model
End of explanation
df_model_serial.sort_values(by='total_HD',inplace=True)
df_model_serial.plot(kind="barh",x="model",y="total_HD")
#Exercise 3: Find the count of different capacity bytes for a model and plot with and without sorting
Explanation: Sort and plot
End of explanation
df_fail = pd.DataFrame(df.groupby('model').failure.sum())
df_fail.head()
df_fail = df_fail.reset_index()
df_fail.head()
df_fail.plot(kind="barh",x="model",y="failure",figsize=(18,10))
# Exercise 4 : sort the above data frame and plot it
Explanation: Question 2. Find failures for hard disk models
End of explanation
merged_df = df_model_serial.merge(df_fail,how='inner',on='model')
merged_df.head()
Explanation: Question 3. How do you compute failure rate for a model
Now let us express failures / total number of hard disks as a ratio. This will give us an understanding of the models and their failure behavior.
To get that data, instead of computing it again, we can join the two data frames that were previously computed
and compute the ratio.
End of explanation
merged_df['success'] = merged_df.total_HD - merged_df.failure
merged_df.head()
merged_df.plot(kind="bar",x="model",y=["failure","success"],subplots=True)
Explanation: let us see, out of the total hard disks for each model, how many failed and how many did not
End of explanation
merged_df['ratio_failure'] = merged_df.failure / merged_df.total_HD
merged_df.head(25)
merged_df.sort_values(by="ratio_failure",ascending=False,inplace=True)
merged_df.head()
merged_df.plot(kind="bar",x="model",y="ratio_failure")
Explanation: now let us compute the ratio of failed hard disks to the total number of hard disks (failure/total_HD) for each model
End of explanation
#Exercise: Find ratio of success and plot it
#Exercise : Plot multiple bar charts comparing ratio of success and failure
Explanation: The higher the ratio value is, the more prone the model is to failure
End of explanation
df_capacity = pd.DataFrame(df.capacity.unique(),columns=['capacity'])
df_capacity.head()
df_capacity.shape
#Exercise : For a given capacity bytes, find the total number of failures and plot it
Explanation: Now we know which models fail the most, let us introduce a new feature in our analysis, capacity.
We are going feature by feature; the reason is that, as we add features that add value to the outcome, we see how our understanding of the data starts to change.
Let us look at the capacity
End of explanation
df_fail_mod_cap = pd.DataFrame(df.groupby(['model','capacity']).failure.sum())
df_fail_mod_cap.head()
df_fail_mod_cap = df_fail_mod_cap.reset_index()
df_fail_mod_cap.head(25)
df_fail_mod_cap.plot(x="capacity",y="failure",kind="bar",figsize=(20,5))
Explanation: Question 4. Given a model and capacity bytes, what does failure count look like
End of explanation
df_fail_mod_cap.head()
df_fail_mod_cap_pivot = df_fail_mod_cap.pivot("model","capacity","failure")
df_fail_mod_cap_pivot.head()
Explanation: Looking at this chart, can you tell what is not being represented right?
We have repeated entries for the same capacity, and this really does not give us insight into the relation between capacity and the models.
End of explanation
df_fail_mod_cap_pivot.fillna(0,inplace=True)
df_fail_mod_cap_pivot.head()
sns.heatmap(df_fail_mod_cap_pivot)
Explanation: we see that for some models and their respective capacities we do not have a fail count, so let's fill those with 0
End of explanation
#Exercise : Find count of success for a model with different capacities and plot it
Explanation: This heat map gives us a better understanding of model, capacity vs failure
End of explanation
df_days = pd.DataFrame(df.groupby(['capacity','serial']).date.count())
df_days = df_days.reset_index()
df_days.head()
df_days.columns = ['capacity','serial','total_days']
df_days.head()
df_days.capacity.value_counts()
df_days.shape
df_days_pivot = df_days.pivot('capacity','serial','total_days')
df_days_pivot.head()
df_days_pivot.fillna(0,inplace=True)
df_days_pivot.head()
# Exercise : Visualize the above dataframe
Explanation: The above charts show us which models failed the most, which models had the most hard disks running, the failure ratio for each model, and, for a given capacity of a model, what the failure count looks like
<img style="float:center" src="img/explore-clock.png" width=150/>
Hard disk data is time series data, so let us start using time
Question 5. Let us count how many days each hard disk ran
End of explanation
df_fail_days = pd.DataFrame(df[['capacity','serial','failure']].loc[df['failure'] == 1 ])
df_fail_days.head()
Explanation: Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
End of explanation
df_fail_count = df_days.merge(df_fail_days,how="left",on=['capacity','serial'])
df_fail_count.head()
df_fail_count.fillna(0,inplace=True)
df_fail_count.head()
df_fail_count.dtypes
g = sns.FacetGrid(df_fail_count, col="failure",hue='failure',size=5,aspect=1.5)
g.map_dataframe(plt.scatter,x='capacity',y='total_days')
Explanation: <img style="float:center" src="img/sql-joins.jpg"/>
now let us merge this with the previous data frame, which had the serial number and the count of days
End of explanation
df_fail_count_avg = pd.DataFrame(df_fail_count.groupby(['capacity','failure']).total_days.mean())
df_fail_count_avg.head()
df_fail_count_avg = df_fail_count_avg.reset_index()
df_fail_count_avg.head()
df_fail_count_avg_pivot = df_fail_count_avg.pivot('capacity','failure','total_days')
df_fail_count_avg_pivot.head()
df_fail_count_avg_pivot.plot(kind="bar")
Explanation: Now what can we do with this data? Is this useful? What can I generate from the above data that gives
me a little more insight?
We can compute, for each capacity, the average running time of the hard disks that failed and the average running time of those that did not.
End of explanation
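# An added sketch: pandas pivot_table can produce the same capacity x failure table of
# average running days as the groupby + pivot steps above in a single call.
df_fail_count_avg_alt = pd.pivot_table(df_fail_count, index='capacity', columns='failure',
                                       values='total_days', aggfunc='mean')
df_fail_count_avg_alt.head()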
df_hours = df[['serial','capacity','failure','smart_9']]
df_hours.head()
df_hours.shape
Explanation: Question 7. How about using the hours (SMART_9) column now and correlating it with failure
End of explanation
df_hours_max = pd.DataFrame(df_hours.groupby(['serial','capacity']).smart_9.max())
df_hours_max.head()
df_hours_max.shape
df_hours_max = df_hours_max.reset_index()
df_hours_max_merge = df_hours_max.merge(df_hours,on=['serial','capacity','smart_9'],how='inner')
df_hours_max_merge.head()
df_hours_max_merge_pivot = pd.pivot_table(df_hours_max_merge,index='capacity',columns='failure',values='smart_9'
,aggfunc='mean')
df_hours_max_merge_pivot.head()
df_hours_max_merge_pivot.plot(kind='bar')
Explanation: Now we want to know, for a given hard disk and capacity, how long the hard disk ran (its maximum recorded hours)
End of explanation
df_model_capacity_hours = df[['model','capacity','failure','smart_9']]
df_model_capacity_hours.head()
Explanation: Question 8. Given the data, identify the model and capacity of the hard disk to buy based on how long it runs
End of explanation
df_model_capacity_hours.capacity = df_model_capacity_hours.capacity / 1024 ** 3
df_model_capacity_hours.head()
df_model_capacity_hours.capacity = df_model_capacity_hours.capacity.astype(np.int64)
df_model_capacity_hours.head()
df_model_capacity_hours_pivot = pd.pivot_table(data=df_model_capacity_hours,index='model',columns=['failure','capacity'],
values='smart_9',aggfunc='mean')
df_model_capacity_hours_pivot.head()
df_model_capacity_hours_pivot.fillna(0,inplace=True)
df_model_capacity_hours_pivot.head()
df_model_capacity_hours_pivot.plot(kind="barh")
Explanation: Let us convert bytes to gigabytes and round it to the nearest number
End of explanation
sns.heatmap(df_model_capacity_hours_pivot)
Explanation: The above visualization is confusing as the bars reflect combination of failure and hours count
End of explanation |
15,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
  """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
  """ Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
  """
  Receive derivative of loss with respect to outputs and cache,
  and compute derivative with respect to inputs.
  """
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
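# A possible sketch of affine_forward (not the official cs231n/layers.py solution):
# flatten each input to a row vector, then apply a single matrix multiply plus bias.
def affine_forward_sketch(x, w, b):
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache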
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
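# A possible sketch of affine_backward (not the official solution): route the upstream
# gradient back through the matrix multiply and undo the reshape of x.
def affine_backward_sketch(dout, cache):
    x, w, b = cache
    dx = dout.dot(w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T.dot(dout)
    db = np.sum(dout, axis=0)
    return dx, dw, db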
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
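# A possible sketch of relu_forward (not the official solution): clamp negative values
# to zero and keep the input around for the backward pass.
def relu_forward_sketch(x):
    out = np.maximum(0, x)
    cache = x
    return out, cache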
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
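# A possible sketch of relu_backward (not the official solution): pass the upstream
# gradient through only where the input was positive.
def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)
    return dx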
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
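# A possible sketch of how the convenience "sandwich" layer composes the two primitives
# (mirroring, but not copied from, cs231n/layer_utils.py).
def affine_relu_forward_sketch(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)

def affine_relu_backward_sketch(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return affine_backward(da, fc_cache)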
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print 'Running numeric gradient check with reg = ', reg
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
# data = {
# 'X_train': X_train,# training data
# 'y_train': y_train,# training labels
# 'X_val': X_val,# validation data
# 'y_val': y_val,# validation labels
# }
model = TwoLayerNet(input_dim=data['X_train'].size/data['X_train'].shape[0],
hidden_dim=160,
num_classes=len(np.unique(data['y_train'])),
reg=0.1)
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=1000)
solver.train()
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity'])
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
[FILL THIS IN]
The five-layer network is noticeably harder to train: because the signal passes through more weight matrices, it is much more sensitive to the weight initialization scale (and to the learning rate) than the three-layer network.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
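# A possible sketch of the SGD+momentum update (not the official cs231n/optim.py code);
# the default hyperparameter values here are assumptions.
def sgd_momentum_sketch(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw   # accumulate velocity
    next_w = w + v
    config['velocity'] = v
    return next_w, config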
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
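For reference, minimal sketches of both rules are shown below. The hyperparameter defaults (decay_rate=0.99 and epsilon=1e-8 for RMSProp; beta1=0.9, beta2=0.999, epsilon=1e-8 and incrementing t before the bias correction for Adam) are assumptions, chosen because they reproduce the expected values above; they are not necessarily identical to the reference implementation:
```python
def rmsprop_sketch(w, dw, config):
    decay = config.get('decay_rate', 0.99)
    eps = config.get('epsilon', 1e-8)
    cache = config.get('cache', np.zeros_like(w))
    cache = decay * cache + (1 - decay) * dw**2               # running average of squared gradients
    next_w = w - config['learning_rate'] * dw / (np.sqrt(cache) + eps)
    config['cache'] = cache
    return next_w, config

def adam_sketch(w, dw, config):
    beta1, beta2 = config.get('beta1', 0.9), config.get('beta2', 0.999)
    eps = config.get('epsilon', 1e-8)
    m = config.get('m', np.zeros_like(w))
    v = config.get('v', np.zeros_like(w))
    t = config.get('t', 0) + 1
    m = beta1 * m + (1 - beta1) * dw                          # first moment estimate
    v = beta2 * v + (1 - beta2) * dw**2                       # second moment estimate
    m_hat = m / (1 - beta1**t)                                # bias corrections
    v_hat = v / (1 - beta2**t)
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + eps)
    config.update(m=m, v=v, t=t)
    return next_w, config
```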
End of explanation
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
## Tune hyperparameters
## Goal: Reach 50% validation accuracy
import sys
results = {}
best_val = -1
best_model = None
# random search for hyperparameter optimization
max_count = 3
learning_rates = sorted(10**np.random.uniform(-4, -3, max_count))
weight_scales = sorted(10**np.random.uniform(-2, -1, max_count))
i = 0
for lr in learning_rates:
for ws in weight_scales:
print('set %d, learning rate: %f, weight_scale: %f' % (i+1, lr, ws))
i += 1
sys.stdout.flush()
model = FullyConnectedNet(
[100, 100, 100, 100, 100],
weight_scale=ws, dtype=np.float64,use_batchnorm=False, reg=1e-2)
solver = Solver(model, data,
print_every=1000, num_epochs=1, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': lr,
},
lr_decay = 0.9,
verbose = True
)
solver.train()
train_acc = solver.train_acc_history[-1]
val_acc = solver.val_acc_history[-1]
results[(lr,ws)] = train_acc, val_acc
# Print out results.
for lr, ws in sorted(results):
train_acc, val_acc = results[(lr, ws)]
print('lr %e ws %e train accuracy: %f, validation accuracy: %f' % (
lr, ws, train_acc, val_acc))
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log weight scale')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log weight scale')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Notify when finished
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the  #
# best_model variable. #
################################################################################
learning_rate = 1.184318e-04
model = FullyConnectedNet([100, 100, 100, 100, 100],
weight_scale=5.608636e-02, reg=1e-2)
solver = Solver(model, data,
num_epochs=10, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': learning_rate
},
verbose=True,
print_every=1000)
solvers[update_rule] = solver
solver.train()
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, label='train')
plt.plot(solver.val_acc_history, label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
best_model = model
X_val = data['X_val']
y_val = data['y_val']
X_test = data['X_test']
y_test = data['y_test']
pass
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print('Validation set accuracy: ', (y_val_pred == y_val).mean())
print('Test set accuracy: ', (y_test_pred == y_test).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
15,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated Games With Mistakes
Nikolas Skoufis, 23/10/15
Supervisor
Step1: All strategies inherit from a base Strategy class
Arbitrary strategies can be simulated, including non-deterministic ones because history is stored
Calculation strategies
Monte Carlo
Monte Carlo methods, single processor and multiprocessor (using multiprocessing module)
Step2: Smart brute force
Computational method using a queue and bounding of terms
Amenable to multiprocessing, but large overhead
```python
# Set up a variable for the expected payoff
expected_payoff = 0
# Set up a queue to hold the partial histories
q = Queue()
# Initialize the queue with an empty history, with probability 1
q.put((1, '', ''))
while not q.empty()
Step3: Expected value only
Similar to the last method, but only consider games with length = expected length
Fast but not really that accurate
Step4: Results
Simulations were run on the Monash Campus Cluster
Simulations take a long time to get accurate results, even with multiprocessing
MCC
Heterogeneous cluster (low cores + high ram, high cores + low ram, gpus)
Submit jobs using job files to Sun Grid Engine with qsub
Wrote a script to generate batch files based on array and then submit | Python Code:
from repeatedmistakes.strategies import SuspiciousTitForTat, TitForTat
from repeatedmistakes.repeatedgame import RepeatedGame
my_game = RepeatedGame(SuspiciousTitForTat, TitForTat)
simulation_results = my_game.simulate(10)
print("STFT: " + str(simulation_results[SuspiciousTitForTat]))
print("TFT: " + str(simulation_results[TitForTat]))
Explanation: Repeated Games With Mistakes
Nikolas Skoufis, 23/10/15
Supervisor: Julian Garcia
Prisoner's Dilemma
Game between two prisoners who can either cooperate or defect
Can encode outcomes in a payoff matrix
Nash equilibrium is for both players to defect
Iterated prisoner's dilemma is multiple rounds of the prisoner's dilemma
Best strategy is to always defect, unless the number of rounds is variable
If rounds are variable TFT is the best strategy (cf. Axelrod's tournaments)
Expected payoff
Can compute the expected value of the payoff between two strategies
$$\sum_{i=0}^{\infty} \delta^i \pi_i$$
Closed forms for simple pairs of strategies
Quickly becomes difficult for non-deterministic strategies
Mistakes complicate all of this!
Which strategies are fault tolerant?
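As a concrete example of such a closed form, for two strategies that always cooperate the series is geometric. The sketch below uses illustrative payoff values (R = 3 for mutual cooperation, continuation probability delta = 0.9):
```python
# Expected payoff of mutual cooperation: sum over i of delta**i * R = R / (1 - delta)
delta, R = 0.9, 3
closed_form = R / (1 - delta)
truncated_series = sum(delta**i * R for i in range(1000))
print(closed_form, truncated_series)  # both are approximately 30.0
```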
Tools and software
Github for version control
Travis for CI, Coveralls for code coverage
<img src="CoverallsAndTravis.png">
Nose for testing, Hypothesis for property based testing
Consider some simple code that encodes and decodes text to/from some character encoding
```python
from hypothesis import given
from hypothesis.strategies import text
@given(text())
def test_decode_inverts_encode(s):
assert decode(encode(s)) == s
```
Implementation
Need a way to simulate and analyse different strategies
End of explanation
from repeatedmistakes.simulations_multiprocessed import simulate_payoff
from repeatedmistakes.repeatedgame import PrisonersDilemmaPayoff
# Fixed number of trials
fixed_payoff = simulate_payoff(SuspiciousTitForTat, TitForTat, PrisonersDilemmaPayoff(),
continuation_probability=0.9, mistake_probability=0.01, trials=1000)
# With estimator stdev
estimator_payoff = simulate_payoff(SuspiciousTitForTat, TitForTat, PrisonersDilemmaPayoff(),
continuation_probability=0.9, mistake_probability=0.01, estimator_stdev=0.2)
print("Payoff for fixed number of trials: " + str(fixed_payoff))
print("Payoff with estimator stdev: " + str(estimator_payoff))
Explanation: All strategies inherit from a base Strategy class
Arbitrary strategies can be simulated, including non-deterministic ones because history is stored
Calculation strategies
Monte Carlo
Monte Carlo methods, single processor and multiprocessor (using multiprocessing module)
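The principle behind the Monte Carlo estimate, as a self-contained sketch (this is not the repeatedmistakes implementation; it just illustrates the idea: play many games whose lengths are geometric in the continuation probability, flip moves with the mistake probability, and average the payoffs):
```python
import random

def mc_payoff_tft_vs_tft(delta=0.9, mistake=0.01, trials=20000):
    """Rough Monte Carlo estimate of player one's payoff for TFT vs TFT with mistakes."""
    payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}  # illustrative PD payoffs
    total = 0.0
    for _ in range(trials):
        last1, last2 = 'C', 'C'            # Tit For Tat opens with cooperation
        game_payoff = 0.0
        while True:
            # Each player copies the opponent's last move, flipped with the mistake probability
            move1 = last2 if random.random() > mistake else ('D' if last2 == 'C' else 'C')
            move2 = last1 if random.random() > mistake else ('D' if last1 == 'C' else 'C')
            game_payoff += payoff[(move1, move2)]
            last1, last2 = move1, move2
            if random.random() > delta:    # the game ends with probability 1 - delta
                break
        total += game_payoff
    return total / trials

print(mc_payoff_tft_vs_tft())
```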
End of explanation
from repeatedmistakes.calculations_multiprocessed import calculate_payoff_with_mistakes
results = calculate_payoff_with_mistakes(SuspiciousTitForTat, TitForTat, PrisonersDilemmaPayoff(),
continuation_probability=0.9, mistake_probability=0.01,
epsilon=1e-6)
print("Smart brute force results: " + str(results))
Explanation: Smart brute force
Computational method using a queue and bounding of terms
Amenable to multiprocessing, but large overhead
```python
# Set up a variable for the expected payoff
expected_payoff = 0
# Set up a queue to hold the partial histories
q = Queue()
# Initialize the queue with an empty history, with probability 1
q.put((1, '', ''))
while not q.empty():
# Get an item from the front of the queue
item = q.get()
# Set up Strategy objects with the given histories
player_one = Strategy(item.history1)
player_two = Strategy(item.history2)
# Compute the moves that the strategies produce with the given histories, passing the opponent's history as well
move_one = player_one.next_move(player_two.history)
move_two = player_two.next_move(player_one.history)
# Compute the probability of no mistakes occurring
probability = item.probability * no_mistake_probability * continuation_probability
# If this maximum possible term size is larger than the threshold
if probability * max_payoff > epsilon:
# Multiply this by the payoff from the outcome of a no-mistake round to find the term
term = probability * payoff(move_one, move_two)
# Add the term to the expected payoff
expected_payoff += term
# Add the probability along with the histories (including the new moves) back onto the queue
q.put((probability,
item.history1 + move_one,
item.history2 + move_two))
else:
# The probability was too small, so don't add it back to the queue
# Repeat this for each of the two one mistake cases and the two mistake case
```
End of explanation
from repeatedmistakes.expected_only import expected_only
results = expected_only(SuspiciousTitForTat, TitForTat, PrisonersDilemmaPayoff(),
continuation_probability=0.9, mistake_probability=0.01,
epsilon=1e-6)
print("Expected value only results: " + str(results))
Explanation: Expected value only
Similar to the last method, but only consider games with length = expected length
Fast but not really that accurate
End of explanation
#!/bin/sh
#$ -S /bin/sh
#$ -m bea
#$ -M nmsko2@student.monash.edu
#$ -l h_vmem=8G
#$ -pe smp 8
module load python/3.4.3
python3 simulation.py 0.9 0.001
Explanation: Results
Simulations were run on the Monash Campus Cluster
Simulations take a long time to get accurate results, even with multiprocessing
MCC
Heterogeneous cluster (low cores + high ram, high cores + low ram, gpus)
Submit jobs using job files to Sun Grid Engine with qsub
Wrote a script to generate batch files based on array and then submit
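A sketch of what such a helper could look like (the parameter grid and file names here are made up for illustration; the real script is not shown in these slides):
```python
import itertools
import subprocess

continuation_probabilities = [0.5, 0.7, 0.9]      # hypothetical grid
mistake_probabilities = [0.001, 0.01, 0.1]

template = """#!/bin/sh
#$ -S /bin/sh
#$ -l h_vmem=8G
#$ -pe smp 8
module load python/3.4.3
python3 simulation.py {cont} {mistake}
"""

for cont, mistake in itertools.product(continuation_probabilities, mistake_probabilities):
    job_file = 'job_{}_{}.sh'.format(cont, mistake)
    with open(job_file, 'w') as f:
        f.write(template.format(cont=cont, mistake=mistake))
    subprocess.call(['qsub', job_file])           # submit to Sun Grid Engine
```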
End of explanation |
15,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color=Teal>ATOMIC and ASTRING FUNCTIONS (Python Code)</font>
By Sergei Yu. Eremenko, PhD, Dr.Eng., Professor, Honorary Professor
https
Step1: <font color=teal>2. Atomic String Function (AString) is an Integral and Composing Branch of Atomic Function up(x) (introduced in 2017 by S. Yu. Eremenko)</font>
AString function is solitary kink function which simultaneously is integral and composing branch of atomic function up(x)
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Step2: Atomic String, Atomic Function (AF) and AF Derivative plotted together
Step3: <font color=teal>3. Properties of Atomic Function Up(x)</font>
3.1. Atomic Function Derivative expressed via Atomic Function itself
The Atomic Function derivative can be expressed via the Atomic Function itself - up'(x)= 2up(2x+1)-2up(2x-1) - meaning the shape of the derivative's pulses can be represented by a shifted and stretched Atomic Function itself - a remarkable property
<font color=maroon>up'(x)= 2up(2x+1)-2up(2x-1)</font>
Atomic Function and its Derivative plotted together
Step4: 3.2. Partition of Unity
The Atomic Function pulses superposition set at points -2, -1, 0, +1, +2... can exactly represent a Unity (number 1)
Step5: 3.3. Atomic Function (AF) is a 'finite', 'compactly supported', or 'solitary' function
Like a spline, the Atomic Function (AF) is 'compactly supported': it is non-zero only on the interval |x|<=1
Step6: 3.4 Atomic Function is a non-analytical function (can not be represented by Taylor's series), but with known Fourier Transformation allowing to exactly calculate AF in certain points, with tabular representation provided in script above.
<font color=teal>4. Properties of Atomic String Function</font>
4.1. AString is not only Integral but also Composing Branch of Atomic Function
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Astring is a swing-like function - Integral of Atomic Function (AF) which can be expressed via AF itself
Step7: 4.3. AStrings and Atomic Solitons
Solitonic mathematical properties of AString and Atomic Functions have been explored in author's paper [3] (Eremenko, S.Yu. Atomic solitons as a new class of solitons; 2018; https
Step8: 4.6. Partition of Line from Atomic String functions
Combination/summation of Atomic Strings can exactly represent a straight line
Step9: Partition based on AString with certain width and height depending on a size of 'quanta'
Step10: 5. Representing curved shapes via AStrings and Atomic Functions
Shifts and stretches of Atomic and AString functions allow reproducing curved surfaces (e.g. curved spacetime). Details are in author's papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons".
Step11: <font color=teal>6. 'Soliton Nature' book</font>
6.1. AStrings and Atomic functions are also described in the book 'Soliton Nature'
Soliton Nature book is easy-to-read, pictorial, interactive book which uses beautiful photography, video channel, and computer scripts in R and Python to demonstrate existing and explore new solitons – the magnificent and versatile energy concentration phenomenon of nature. New class of atomic solitons can be used to describe Higgs boson (‘the god particle’) fields, spacetime quanta and other fundamental building blocks of nature. | Python Code:
import numpy as np
import pylab as pl
pl.rcParams["figure.figsize"] = 9,6
###################################################################
##This script calculates the values of Atomic Function up(x) (1971)
###################################################################
################### One Pulse of atomic function
def up1(x: float) -> float:
#Atomic function table
up_y = [0.5, 0.48, 0.460000017,0.440000421,0.420003478,0.400016184, 0.380053256, 0.360139056,
0.340308139, 0.320605107,0.301083436, 0.281802850, 0.262826445, 0.244218000, 0.226041554,
0.208361009, 0.191239338, 0.174736305, 0.158905389, 0.143991189, 0.129427260, 0.115840866,
0.103044024, 0.9110444278e-01, 0.798444445e-01, 0.694444445e-01, 0.598444445e-01,
0.510444877e-01, 0.430440239e-01, 0.358409663e-01, 0.294282603e-01, 0.237911889e-01,
0.189053889e-01, 0.147363055e-01, 0.112393379e-01, 0.836100883e-02, 0.604155412e-02,
0.421800000e-02, 0.282644445e-02, 0.180999032e-02, 0.108343562e-02, 0.605106267e-03,
0.308138660e-03, 0.139055523e-03, 0.532555251e-04, 0.161841328e-04, 0.347816874e-05,
0.420576116e-05, 0.167693347e-07, 0.354008603e-10, 0]
up_x = np.arange(0.5, 1.01, 0.01)
res = 0.
if ((x>=0.5) and (x<=1)):
for i in range(len(up_x) - 1):
if (up_x[i] >= x) and (x < up_x[i+1]):
N1 = 1 - (x - up_x[i])/0.01
res = N1 * up_y[i] + (1 - N1) * up_y[i+1]
return res
return res
############### Atomic Function Pulse with width, shift and scale #############
def upulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
res = 0.
if (x >= 0.5) and (x <= 1):
res = up1(x)
elif (x >= 0.0) and (x < 0.5):
res = 1 - up1(1 - x)
elif (x >= -1 and x <= -0.5):
res = up1(-x)
elif (x > -0.5) and (x < 0):
res = 1 - up1(1 + x)
res = d + res * c
return res
############### Atomic Function Applied to list with width, shift and scale #############
def up(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(upulse(x[i], a, b, c, d))
return res
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic Function up(x)')
pl.plot(x, up(x), label='Atomic Function')
pl.grid(True)
pl.show()
Explanation: <font color=Teal>ATOMIC and ASTRING FUNCTIONS (Python Code)</font>
By Sergei Yu. Eremenko, PhD, Dr.Eng., Professor, Honorary Professor
https://www.researchgate.net/profile/Sergei_Eremenko
https://www.amazon.com/Sergei-Eremenko/e/B082F3MQ4L
https://www.linkedin.com/in/sergei-eremenko-3862079
https://www.facebook.com/SergeiEremenko.Author
Atomic functions (AF) described in many books and hundreds of papers have been discovered in 1970s by Academician NAS of Ukraine Rvachev V.L. (https://ru.wikipedia.org/w/index.php?oldid=83948367) (author's teacher) and professor Rvachev V.A. and advanced by many followers, notably professor Kravchenko V.F. (https://ru.wikipedia.org/w/index.php?oldid=84521570), H. Gotovac (https://www.researchgate.net/profile/Hrvoje_Gotovac), V.M. Kolodyazhni (https://www.researchgate.net/profile/Volodymyr_Kolodyazhny), O.V. Kravchenko (https://www.researchgate.net/profile/Oleg_Kravchenko) as well as the author S.Yu. Eremenko (https://www.researchgate.net/profile/Sergei_Eremenko) [1-4] for a wide range of applications in mathematical physics, boundary value problems, statistics, radio-electronics, telecommunications, signal processing, and others.
As per historical survey (https://www.researchgate.net/publication/308749839), some elements, analogs, subsets or Fourier transformations of AFs sometimes named differently (Fabius function, hat function, compactly supported smooth function) have been probably known since 1930s and rediscovered many times by scientists from different countries, including Fabius, W.Hilberg and others. However, the most comprehensive 50+ years’ theory development supported by many books, dissertations, hundreds of papers, lecture courses and multiple online resources have been performed by the schools of V.L. Rvachev, V.A. Rvachev and V.F. Kravchenko.
In 2017-2020, Sergei Yu. Eremenko, in papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons", "Atomic Machine Learning" and book "Soliton Nature" [1-8], has introduced <b>AString</b> atomic function as an integral and 'composing branch' of Atomic Function up(x): <font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
AString function, is a smooth solitonic kink function by joining of which on a periodic lattice it is possible to compose a straight-line resembling flat spacetime as well as to build 'solitonic atoms' composing different fields. It may lead to novel models of spacetime and quantized gravity where AString may describe Spacetime Quantum, or Spacetime Metriant. Also, representing of different fields via shift and stretches of AStrings and Atomic Functions may lead to unified theory where AString may describe some fundamental building block of quantum fields, like a string, elementary spacetime distortion or metriant.
So, apart from traditional areas of AF applications in mathematical physics, radio-electronics and signal processing, AStrings and Atomic Functions may be expanded to Spacetime Physics, String theory, General and Special Relativity, Theory of Solitons, Lattice Physics, Quantized Gravity, Cosmology, Dark matter and Multiverse theories as well as Finite Element Methods, Nonarchimedean Computers, Atomic regression analysis, Atomic Kernels, Machine Learning and Artificial Intelligence.
<font color=teal>1. Atomic Function up(x) (introduced in 1971 by V.L.Rvachev and V.A.Rvachev)</font>
End of explanation
############### Atomic String #############
def AString1(x: float) -> float:
res = 1 * (upulse(x/2.0 - 0.5) - 0.5)
return res
############### Atomic String Pulse with width, shift and scale #############
def AStringPulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = AString1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def AString(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(AStringPulse(x[i], a, b, c, d))
#res[i] = AStringPulse(x[i], a, b, c)
return res
###### Summation of two lists #############
def Sum(x1: list, x2: list) -> list:
res = []
for i in range(len(x1)):
res.append(x1[i] + x2[i])
return res
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic String Function')
pl.plot(x, AString(x, 1.0, 0, 1, 0), label='Atomic String')
pl.grid(True)
pl.show()
Explanation: <font color=teal>2. Atomic String Function (AString) is an Integral and Composing Branch of Atomic Function up(x) (introduced in 2017 by S. Yu. Eremenko)</font>
AString function is solitary kink function which simultaneously is integral and composing branch of atomic function up(x)
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
End of explanation
x = np.arange(-2.0, 2.0, 0.01)
#This Calculates Derivative
dx = x[1] - x[0]
dydx = np.gradient(up(x), dx)
pl.plot(x, up(x), label='Atomic Function')
pl.plot(x, AString(x, 1.0, 0, 1, 0), linewidth=2, label='Atomic String Function')
pl.plot(x, dydx, '--', label='A-Function Derivative')
pl.title('Atomic and AString Functions')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Atomic String, Atomic Function (AF) and AF Derivative plotted together
End of explanation
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, up(x), label='Atomic Function', linewidth=2)
pl.plot(x, dydx, '--', label='Atomic Function Derivative', linewidth=1, color="Green")
pl.title('Atomic Function and Its Derivative')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: <font color=teal>3. Properties of Atomic Function Up(x)</font>
3.1. Atomic Function Derivative expressed via Atomic Function itself
The Atomic Function derivative can be expressed via the Atomic Function itself - up'(x)= 2up(2x+1)-2up(2x-1) - meaning the shape of the derivative's pulses can be represented by a shifted and stretched Atomic Function itself - a remarkable property
<font color=maroon>up'(x)= 2up(2x+1)-2up(2x-1)</font>
Atomic Function and its Derivative plotted together
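A quick numerical check of this identity using the up() helper defined above (the agreement is approximate because up1() is tabulated and the derivative is taken by finite differences):
```python
x = np.arange(-2.0, 2.0, 0.01)
dx = x[1] - x[0]
numeric_derivative = np.gradient(up(x), dx)
# 2*up(2x+1) - 2*up(2x-1), expressed through the width/shift/scale arguments of up()
closed_form = np.array(up(x, 0.5, -0.5, 2.0)) - np.array(up(x, 0.5, 0.5, 2.0))
print('max |difference|:', np.max(np.abs(numeric_derivative - closed_form)))
```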
End of explanation
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, up(x, 1, -1), '--', linewidth=1, label='Atomic Function at x=-1')
pl.plot(x, up(x, 1, +0), '--', linewidth=1, label='Atomic Function at x=0')
pl.plot(x, up(x, 1, -1), '--', linewidth=1, label='Atomic Function at x=-1')
pl.plot(x, Sum(up(x, 1, -1), Sum(up(x), up(x, 1, 1))), linewidth=2, label='Atomic Function Compounding')
pl.title('Atomic Function Compounding represent 1')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.2. Partition of Unity
The Atomic Function pulses superposition set at points -2, -1, 0, +1, +2... can exactly represent a Unity (number 1):
1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...
<font color=maroon>1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...</font>
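This can be verified numerically with the up() helper defined above (a small residual remains because up1() is tabulated and linearly interpolated):
```python
x = np.arange(-1.0, 1.0, 0.01)
partition = np.zeros(len(x))
for shift in range(-3, 4):                 # up(x+3) + ... + up(x) + ... + up(x-3)
    partition += np.array(up(x, 1.0, shift))
print('max deviation from 1:', np.max(np.abs(partition - 1.0)))
```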
End of explanation
x = np.arange(-5.0, 5.0, 0.01)
pl.plot(x, up(x), label='Atomic Function', linewidth=2)
#pl.plot(x, dydx, '--', label='Atomic Function Derivative', linewidth=1, color="Green")
pl.title('Atomic Function is compactly supported')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.3. Atomic Function (AF) is a 'finite', 'compactly supported', or 'solitary' function
Like a spline, the Atomic Function (AF) is 'compactly supported': it is non-zero only on the interval |x|<=1
End of explanation
######### Presentation of Atomic Function via Atomic Strings ##########
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, AString(x, 1, 0, 1, 0), '--', linewidth=1, label='AString(x)')
pl.plot(x, AString(x, 0.5, -0.5, +1, 0), '--', linewidth=2, label='+AString(2x+1)')
pl.plot(x, AString(x, 0.5, +0.5, -1, 0), '--', linewidth=2, label='-AString(2x-1)')
#pl.plot(x, up(x, 1.0, 0, 1, 0), '--', linewidth=1, label='Atomic Function')
AS2 = Sum(AString(x, 0.5, -0.5, +1, 0), AString(x, 0.5, +0.5, -1, 0))
pl.plot(x, AS2, linewidth=3, label='Up(x) via Strings')
pl.title('Atomic Function as a Combination of AStrings')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.4 Atomic Function is a non-analytical function (can not be represented by Taylor's series), but with known Fourier Transformation allowing to exactly calculate AF in certain points, with tabular representation provided in script above.
<font color=teal>4. Properties of Atomic String Function</font>
4.1. AString is not only Integral but also Composing Branch of Atomic Function
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Astring is a swing-like function - Integral of Atomic Function (AF) which can be expressed via AF itself:
AString(x) = Integral(0,x)(Up(x)) = Up(x/2 - 1/2) - 1/2
<font color=maroon>AString(x) = Integral(0,x)(Up(x)) = Up(x/2 - 1/2) - 1/2</font>
4.2. Atomic Function is a 'solitonic atom' composed from two opposite AStrings
The concept of 'Solitonic Atoms' (bions) composed from opposite kinks is known in soliton theory [3,5].
<font color=maroon>up(x) = AString(2x + 1) - AString(2x - 1)</font>
End of explanation
x = np.arange(-2, 2.0, 0.01)
pl.title('AString and Fabius Functions')
pl.plot(x, AString(x, 0.5, 0.5, 1, 0.5), label='Fabius Function')
pl.plot(x, AString(x, 1, 0, 1, 0), label='AString Function')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 4.3. AStrings and Atomic Solitons
Solitonic mathematical properties of AString and Atomic Functions have been explored in author's paper [3] (Eremenko, S.Yu. Atomic solitons as a new class of solitons; 2018; https://www.researchgate.net/publication/329465767). They both satisfy differential equations with shifted arguments which introduce special kind of <b>nonlinearity</b> typical for all mathematical solitons.
AStrings belong to the class of <b>solitonic kinks</b>, similar to sine-Gordon, Frenkel-Kontorova, tanh and others. Unlike other kinks, AStrings are truly solitary (compactly supported) and also have the unique property of composing both a straight line and solitonic atoms on a lattice, resembling the particle-like properties of solitons.
Atomic Function up(x) is not actually a mathematical soliton, but a complex object composed from summation of two opposite AString kinks, and in solitonic terminology, is called 'solitonic atoms' (like bions).
4.4. All derivatives of AString can be represented via AString itself
<font color=maroon>AString'(x) = AString(2x + 1) - AString(2x - 1)</font>
It means AString is a smooth (infinitely divisible) function, with fractalic properties.
4.5. AString and Fabius Function
The Fabius function https://en.wikipedia.org/wiki/Fabius_function, with the unique property f'(x) = 2f(2x), published in 1966 but probably known since 1935, is a shifted and stretched AString function. The Fabius function is not directly an integral of the atomic function up(x).
<font color=maroon>Fabius(x) = AString(2x - 1) + 0.5</font>
End of explanation
x = np.arange(-3, 3, 0.01)
pl.plot(x, AString(x, 1, -1.0, 1, 0), '--', linewidth=1, label='AString 1')
pl.plot(x, AString(x, 1, +0.0, 1, 0), '--', linewidth=1, label='AString 2')
pl.plot(x, AString(x, 1, +1.0, 1, 0), '--', linewidth=1, label='AString 3')
AS2 = Sum(AString(x, 1, -1.0, 1, 0), AString(x, 1, +0.0, 1, 0))
AS3 = Sum(AS2, AString(x, 1, +1.0, 1, 0))
pl.plot(x, AS3, label='AStrings Sum', linewidth=2)
pl.title('Atomic Strings compose Line')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 4.6. Partition of Line from Atomic String functions
Combination/summation of Atomic Strings can exactly represent a straight line:
x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)...
<font color=maroon>x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)...</font>
Partition based on AString function with width 1 and height 1
End of explanation
x = np.arange(-40.0, 40.0, 0.01)
width = 10.0
height = 10.0
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, AString(x, width, -3*width/2, height, -3*width/2), '--', linewidth=1, label='AString 1')
pl.plot(x, AString(x, width, -1*width/2, height, -1*width/2), '--', linewidth=1, label='AString 2')
pl.plot(x, AString(x, width, +1*width/2, height, +1*width/2), '--', linewidth=1, label='AString 3')
pl.plot(x, AString(x, width, +3*width/2, height, +3*width/2), '--', linewidth=1, label='AString 4')
AS2 = Sum(AString(x, width, -3*width/2, height, -3*width/2), AString(x, width, -1*width/2, height, -1*width/2))
AS3 = Sum(AS2, AString(x, width,+1*width/2, height, +1*width/2))
AS4 = Sum(AS3, AString(x, width,+3*width/2, height, +3*width/2))
pl.plot(x, AS4, label='AStrings Joins', linewidth=2)
pl.title('Atomic Strings Combinations')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Partition based on AString with certain width and height depending on a size of 'quanta'
End of explanation
x = np.arange(-50.0, 50.0, 0.1)
dx = x[1] - x[0]
CS6 = Sum(up(x, 5, -30, 5, 5), up(x, 15, 0, 15, 5))
CS6 = Sum(CS6, up(x, 10, +30, 10, 5))
pl.plot(x, CS6, label='Spacetime Density distribution')
IntC6 = np.cumsum(CS6)*dx/50
pl.plot(x, IntC6, label='Spacetime Shape (Geodesics)')
DerC6 = np.gradient(CS6, dx)
pl.plot(x, DerC6, label='Spacetime Curvature')
LightTrajectory = -10 -IntC6/5
pl.plot(x, LightTrajectory, label='Light Trajectory')
pl.title('Shape of Curved Spacetime model')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 5. Representing curved shapes via AStrings and Atomic Functions
Shifts and stretches of Atomic and AString functions allow reproducing curved surfaces (e.g. curved spacetime). Details are in author's papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons".
End of explanation
#pl.rcParams["figure.figsize"] = 16,12
book = pl.imread('BookSpread_small.png')
pl.imshow(book)
Explanation: <font color=teal>6. 'Soliton Nature' book</font>
6.1. AStrings and Atomic functions are also described in the book 'Soliton Nature'
Soliton Nature book is easy-to-read, pictorial, interactive book which uses beautiful photography, video channel, and computer scripts in R and Python to demonstrate existing and explore new solitons – the magnificent and versatile energy concentration phenomenon of nature. New class of atomic solitons can be used to describe Higgs boson (‘the god particle’) fields, spacetime quanta and other fundamental building blocks of nature.
End of explanation |
15,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis
In this assignment you will get acquainted with an approach that has been rediscovered in many different fields, has many different interpretations, and also several interesting generalisations
Step1: Theory
Any data set can be represented as a matrix $X$.
Principal component analysis sequentially finds the following linear combinations of the features (components) of $X$
Step2: By diagonalising the true covariance matrix $C$, we can find a transformation of the original data set whose components describe the variance best of all, subject to being orthogonal to each other
Step3: Now let us compare these directions with the directions chosen by principal component analysis
Step4: It is clear that even with a small amount of data they differ only slightly. Let us increase the sample size
Step5: In this case the principal components approximate much more accurately the true directions along which the largest variance of the data is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalised? With a probabilistic model!
The problem behind any dimensionality-reduction method
Step6: A variational view of the model
We know that each principal component corresponds to the variance of the data it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the variances of the data along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components that have the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced data, you can run an iterative process
Step7: Interpreting the principal components
Since the principal components are linear combinations of the original features, the question of their interpretation naturally arises.
There are several approaches to this; we will consider two
Step8: Interpreting the principal components using the data
Let us now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component
Step9: Analysing the main drawbacks of principal component analysis
The problems considered above are, of course, toy problems, because the data for them were generated in accordance with the assumptions of principal component analysis. In practice these assumptions are, naturally, far from always satisfied. Let us look at typical PCA failure modes that should be kept in mind before applying it.
Directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions
Step10: What is the problem here, why does PCA work poorly? The answer is simple | Python Code:
import numpy as np
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
matplotlib.style.use('ggplot')
import seaborn as sns
%matplotlib inline
Explanation: Principal Component Analysis
In this assignment you will get acquainted with an approach that has been rediscovered in many different fields, has many different interpretations, and also several interesting generalisations: principal component analysis (PCA).
Programming assignment
The assignment is split into two parts:
- working with synthetic (model) data,
- working with real data.
At the end of each part you are asked to obtain an answer and upload it to the corresponding form as a set of text files.
End of explanation
from sklearn.decomposition import PCA
mu = np.zeros(2)
C = np.array([[3,1],[1,2]])
data = np.random.multivariate_normal(mu, C, size=50)
plt.scatter(data[:,0], data[:,1])
plt.show()
Explanation: Theory
Any data set can be represented as a matrix $X$.
Principal component analysis sequentially finds the following linear combinations of the features (components) of $X$:
- each component is orthogonal to all the others and normalised: $<w_i, w_j> = 0, \quad ||w_i||=1$,
- each component describes the largest possible variance of the data (subject to the previous constraint).
Assumptions under which this approach works well:
- linearity of the components: we assume the data can be analysed by linear methods,
- large variances matter: the most important directions in the data are assumed to be those along which it has the largest variance,
- all components are orthogonal: this assumption allows principal components to be found with linear-algebra techniques (for example, the singular value decomposition of the matrix $X$ or the spectral decomposition of the matrix $X^TX$).
What does this look like mathematically?
Denote the sample covariance matrix of the data as follows: $\hat{C} \propto Q = X^TX$ ($Q$ differs from $\hat{C}$ only by the normalisation by the number of objects).
The spectral decomposition of the matrix $Q$ looks like this:
$$Q = X^TX = W \Lambda W^T$$
It can be shown rigorously that the columns of the matrix $W$ are the principal components of the matrix $X$, i.e. the combinations of features satisfying the two conditions stated at the beginning. Moreover, the variance of the data along the direction given by each component equals the corresponding diagonal entry of the matrix $\Lambda$.
How can this transformation be used for dimensionality reduction? We can rank the components using the variances of the data along them.
Let us do that: $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$.
Then, if we select the components corresponding to the first $d$ variances in this list, we obtain a set of $d$ new features that describe the variance of the original data set best among all other possible linear combinations of the original features of $X$.
- If $d=D$, we lose no information at all.
- If $d<D$, we lose an amount of information which, when the assumptions above hold, is proportional to the sum of the variances of the discarded components.
So principal component analysis lets us rank the obtained components by "importance" and run a selection procedure on them.
Example
Consider a data set sampled from a multivariate normal distribution with covariance matrix $C = \begin{pmatrix} 3 & 1 \\ 1 & 2 \end{pmatrix}$.
End of explanation
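Before reproducing the true directions below, here is a quick numerical illustration of the decomposition above (a sketch that reuses the data sample generated earlier; the data are centred before forming Q, and the eigenvectors agree with the sklearn components up to sign):
```python
Xc = data - data.mean(axis=0)              # centre the features
Q = Xc.T.dot(Xc)                           # Q = X^T X, proportional to the sample covariance
eigvals, W = np.linalg.eigh(Q)
order = np.argsort(eigvals)[::-1]          # rank components by the variance they explain
print('eigenvalues:', eigvals[order])
print('eigenvectors (columns):\n', W[:, order])
print('sklearn components (rows):\n', PCA(n_components=2).fit(data).components_)
```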
v, W_true = np.linalg.eig(C)
plt.scatter(data[:,0], data[:,1])
# plot the true directions along which the data variance is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
g_patch = mpatches.Patch(color='g', label='True components')
plt.legend(handles=[g_patch])
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1])),
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
Explanation: By diagonalising the true covariance matrix $C$, we can find a transformation of the original data set whose components describe the variance best of all, subject to being orthogonal to each other:
End of explanation
def plot_principal_components(data, model, scatter=True, legend=True):
W_pca = model.components_
if scatter:
plt.scatter(data[:,0], data[:,1])
plt.plot(data[:,0], -(W_pca[0,0]/W_pca[0,1])*data[:,0], color="c")
plt.plot(data[:,0], -(W_pca[1,0]/W_pca[1,1])*data[:,0], color="c")
if legend:
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[c_patch], loc='lower right')
# make the plots look nice:
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1]))-0.5,
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))+0.5]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
model = PCA(n_components=2)
model.fit(data)
plt.scatter(data[:,0], data[:,1])
# plot the true directions along which the data variance is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
# plot the components found by PCA:
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Now let us compare these directions with the directions chosen by principal component analysis:
End of explanation
data_large = np.random.multivariate_normal(mu, C, size=5000)
model = PCA(n_components=2)
model.fit(data_large)
plt.scatter(data_large[:,0], data_large[:,1], alpha=0.1)
# plot the true directions along which the data variance is maximal
plt.plot(data_large[:,0], (W_true[0,0]/W_true[0,1])*data_large[:,0], color="g")
plt.plot(data_large[:,0], (W_true[1,0]/W_true[1,1])*data_large[:,0], color="g")
# plot the components found by PCA:
plot_principal_components(data_large, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: It is clear that even with a small amount of data they differ only slightly. Let us increase the sample size:
End of explanation
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score as cv_score
def plot_scores(d_scores):
n_components = np.arange(1,d_scores.size+1)
plt.plot(n_components, d_scores, 'b', label='PCA scores')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('cv scores')
plt.legend(loc='lower right')
plt.show()
def write_answer_1(optimal_d):
with open("pca_answer1.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task1.csv')
print('Data shape: %s' % str(data.shape))
d_scores = np.empty(data.shape[1])
d_values = np.arange(1, data.shape[1]+1)
for index, d_num in enumerate(d_values):
model_PCA = PCA(n_components=d_num, svd_solver='full')
model_PCA.fit(data)
d_scores[index] = np.mean(cv_score(model_PCA, data))
plot_scores(d_scores)
write_answer_1(d_values[np.argmax(d_scores)])
Explanation: In this case the principal components approximate much more accurately the true directions along which the largest variance of the data is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalised? With a probabilistic model!
The problem behind any dimensionality-reduction method is to recover, from a set of noisy features $X$, the true values $Y$ that actually determine the data set (i.e. to reduce a data set with many features to data having a so-called "effective dimensionality").
In the case of principal component analysis we want to find the directions along which the variance is maximal, given the assumptions about the structure of the data and the components described above.
The material below in this section is optional for completing the next task, as it requires some knowledge of statistics.
For those who plan to skip it: at the end of the section we obtain a quality metric that measures how well the data are described by the model built with a given number of components. Feature selection then amounts to choosing the number of components at which this metric (the log-likelihood) is maximal.
With these assumptions, the principal component model looks as follows:
$$ x = Wy + \mu + \epsilon$$
where:
- $x$ -- the observed data
- $W$ -- the matrix of principal components (each column is one component)
- $y$ -- their projection onto the principal components
- $\mu$ -- the mean of the observed data
- $\epsilon \sim \mathcal{N}(0, \sigma^2I)$ -- Gaussian noise
From the noise distribution, we can write down the distribution of $x$:
$$p(x \mid y) = \mathcal{N}(Wy + \mu, \sigma^2I) $$
Introduce a prior distribution on $y$:
$$p(y) = \mathcal{N}(0, I)$$
Using Bayes' rule we can derive from this the marginal distribution $p(x)$:
$$p(x) = \mathcal{N}(\mu, \sigma^2I + WW^T)$$
Then the likelihood of the data set under the model looks as follows:
$$\mathcal{L} = \sum_{i=1}^N \log p(x_i) = -N/2 \Big( d\log(2\pi) + \log |C| + \text{tr}(C^{-1}S) \Big)$$
where:
- $C = \sigma^2I + WW^T$ -- the covariance matrix of the marginal model
- $S = \frac{1}{N} \sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T$ -- the sample covariance
The value of $\mathcal{L}$ is the log-probability of observing the data set $X$ under the assumptions of the PCA model. The larger it is, the better the model describes the observed data.
Task 1. Automatic dimensionality reduction using the log-likelihood $\mathcal{L}$
Consider a data set of dimensionality $D$ whose real dimensionality is much smaller than the observed one (call it $d$). You are asked to:
Build a PCA model with $\hat{d}$ principal components for every value of $\hat{d}$ in the interval [1, D].
Estimate the mean log-likelihood of the data for each model on the general population, using 3-fold cross-validation (the final estimate of the log-likelihood is averaged over the folds).
Find the model for which it is maximal and write the number of components in that model, i.e. the value of $\hat{d}_{opt}$, to the answer file.
To estimate the log-likelihood of the model for a given number of principal components via cross-validation, use the following functions:
model = PCA(n_components=n)
scores = cv_score(model, data)
Note that scores is a vector whose length equals the number of folds. To obtain the estimate of the model likelihood its values have to be averaged.
To visualise the estimates you can use the following function:
plot_scores(d_scores)
which takes the vector of log-likelihood estimates obtained for each $\hat{d}$.
For the curious: the data for tasks 1 and 2 were generated according to the assumed PCA model. That is, data $Y$ of effective dimensionality $d$, drawn from independent uniform distributions, were linearly transformed by a random matrix $W$ into a space of dimensionality $D$, after which independent Gaussian noise with variance $\sigma^2$ was added to all features.
End of explanation
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score as cv_score
def plot_variances(d_variances):
n_components = np.arange(1,d_variances.size+1)
plt.plot(n_components, d_variances, 'b', label='Component variances')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('variance')
plt.legend(loc='upper right')
plt.show()
def write_answer_2(optimal_d):
with open("pca_answer2.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task2.csv')
print('Data shape: %s' % str(data.shape))
model_PCA = PCA(svd_solver='full')
model_PCA.fit_transform(data)
d_variances = model_PCA.explained_variance_
plot_variances(d_variances)
diff_d_variances = d_variances[:-1] - d_variances[1:]
eff_d_num = np.argmax(diff_d_variances) + 1
print('Effective number of dimensions: %d' % eff_d_num)
write_answer_2(eff_d_num)
Explanation: A variational view of the model
We know that each principal component corresponds to the variance of the data it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the variances of the data along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components with the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced data, you can run an iterative process: drop the components with the smallest variance one by one until the quality of the final model becomes noticeably worse.
A more general way to select features is to look at the differences of consecutive variances in the sorted sequence $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$: $\lambda_{(1)}-\lambda_{(2)}, \dots, \lambda_{(D-1)} - \lambda_{(D)}$, and discard the components after the largest gap. This is exactly the method you are asked to use for the test data set.
Task 2. Manual dimensionality reduction by analysing the variance of the data along the principal components
Consider another data set of dimensionality $D$ whose real dimensionality is much smaller than the observed one (again call it $d$). You are asked to:
Build a PCA model with $D$ principal components on these data.
Project the data onto the principal components.
Estimate their variance along the principal components.
Sort the variances in decreasing order and compute their pairwise differences: $\lambda_{(i-1)} - \lambda_{(i)}$.
Find the difference with the largest value and use it to obtain an estimate of the effective dimensionality of the data $\hat{d}$.
Plot the variances and make sure that the obtained estimate of $\hat{d}_{opt}$ really makes sense, then write the obtained value of $\hat{d}_{opt}$ to the answer file.
To build the PCA model use the function:
model.fit(data)
To transform the data use the method:
model.transform(data)
You are required to implement the variance estimation on the transformed data yourself. For plotting you can use the function
plot_variances(d_variances)
which takes the vector of variances along the components sorted in decreasing order.
End of explanation
from sklearn import datasets
def plot_iris(transformed_data, target, target_names):
plt.figure()
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(transformed_data[target == i, 0],
transformed_data[target == i, 1], c=c, label=target_name)
plt.legend()
plt.show()
def write_answer_3(list_pc1, list_pc2):
with open("pca_answer3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc1]))
fout.write(" ")
fout.write(" ".join([str(num) for num in list_pc2]))
# load the iris dataset
iris = datasets.load_iris()
data = iris.data
print(iris.DESCR)
target = iris.target
target_names = iris.target_names
from scipy.stats import pearsonr
model_PCA = PCA(svd_solver='full')
data_transformed = model_PCA.fit_transform(data)
print('Explained variance ratio by feature: %s' % str(model_PCA.explained_variance_ratio_))
plot_iris(data_transformed, target, target_names)
pearsonr_dim = []
for d_num in np.arange(data.shape[1]):
pearsr_1, _ = pearsonr(data[:,d_num], data_transformed[:,0])
pearsr_2, _ = pearsonr(data[:,d_num], data_transformed[:,1])
pearsonr_dim.append([pearsr_1, pearsr_2])
list_pc1, list_pc2 = [], []
for i, corr in enumerate(pearsonr_dim):
print('Correlation of feature #{} with PCA: {}'.format(i, corr))
if np.argmax(np.abs(corr)) == 0:
list_pc1.append(i+1)
else:
list_pc2.append(i+1)
write_answer_3(list_pc1, list_pc2)
Explanation: Interpreting the principal components
The principal components we obtain are linear combinations of the original features, so the question of how to interpret them naturally arises.
There are several approaches to this; we will look at two:
- compute the relationships between the principal components and the original features
- compute the contribution of each individual observation to the principal components
The first approach is suitable when the individual objects in the dataset carry no semantic information for us beyond what is already captured by the feature set.
The second approach suits data with a richer structure. For example, a face carries more semantic meaning for a person than the vector of pixel values that PCA analyses.
Let us look at approach 1 in more detail: it consists of computing correlation coefficients between the original features and the set of principal components.
Since principal component analysis is a linear method, the Pearson correlation is the natural choice; its sample version has the following formula:
$$r_{jk} = \frac{\sum_{i=1}^N (x_{ij} - \bar{x}_j)(y_{ik} - \bar{y}_k)}{\sqrt{\sum_{i=1}^N (x_{ij} - \bar{x}_j)^2 \sum_{i=1}^N (y_{ik} - \bar{y}_k)^2}}$$
where:
- $\bar{x}_j$ -- the mean of the j-th feature,
- $\bar{y}_k$ -- the mean of the projection onto the k-th principal component.
The Pearson correlation measures linear dependence. It equals 0 when the variables are independent and $\pm 1$ when they are linearly dependent. Based on how strongly a new component correlates with the original features, we can build a semantic interpretation for it, since we know what the original features mean.
Task 3. Analysing the principal components via correlations with the original features.
Fit PCA on the iris dataset and obtain the transformed data.
Compute the correlations of the original features with their projections onto the first two principal components.
For each feature, find the component (of the two that were built) with which it correlates most strongly.
Based on step 3, group the features by component. Build two lists: the numbers of the features that correlate more strongly with the first component, and the same list for the second. Numbering starts from one. Pass both lists to the write_answer_3 function.
The dataset consists of 4 features measured on 150 irises, each belonging to one of three species. A visualization of the dataset projected onto the two components that explain the most variance can be obtained with the function
plot_iris(transformed_data, target, target_names)
which takes the PCA-transformed data together with the class information; the point colours correspond to the three iris species.
To get the names of the original features, use the following list:
iris.feature_names
When computing the correlations, remember to centre both the features and the projections onto the principal components (subtract their means).
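As a quick check of that note (a minimal sketch reusing the data and data_transformed arrays computed above), centring by hand reproduces what scipy.stats.pearsonr returns for, say, feature 0 and the first component:
xc = data[:, 0] - data[:, 0].mean()
yc = data_transformed[:, 0] - data_transformed[:, 0].mean()
r_manual = np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2))
print(r_manual)  # same value as pearsonr(data[:, 0], data_transformed[:, 0])[0]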
End of explanation
from sklearn.datasets import fetch_olivetti_faces
def write_answer_4(list_pc):
with open("pca_answer4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc]))
olivetti_faces = fetch_olivetti_faces(shuffle=True, random_state=0)
data = olivetti_faces.data
image_shape = (64, 64)
print(olivetti_faces.DESCR)
model_RPCA = PCA(n_components=10, svd_solver='randomized')
data_transformed = model_RPCA.fit_transform(data)
def cosine_metric(x, i):
return x[i]**2 / np.sum(x**2)
cos_matrix = []
for i, item in enumerate(data_transformed):
cos_matrix_obj = []
for j in np.arange(len(item)):
cos_matrix_obj.append(cosine_metric(item, j))
cos_matrix.append(cos_matrix_obj)
list_pc = np.argmax(cos_matrix, axis=0)
print(list_pc)
write_answer_4(list_pc)
Explanation: Interpreting the principal components using the data themselves
Now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component:
$$ \cos^2_{ik} = \frac{f_{ik}^2}{\sum_{\ell=1}^d f_{i\ell}^2} $$
where
- i -- the index of the object
- k -- the index of the principal component
- $f_{ik}$ -- the absolute value of the centred projection of the object onto the component
Clearly,
$$ \sum_{k=1}^d \cos^2_{ik} = 1 $$
This means that for each object these quantities act as weights proportional to the contribution the object makes to the variance of each component. The larger the contribution, the more important the object is for describing that particular principal component.
Task 4. Analysing the principal components via the contributions of individual objects to their variance
Load the Olivetti Faces dataset and fit a RandomizedPCA model on it (it is intended for a large number of features and runs faster than plain PCA). Obtain the projections onto the first 10 principal components.
For each object, compute its relative contribution to the variance of each of the 10 components, using the formula from the previous section (d = 10).
For each component, find and visualize the face that makes the largest relative contribution to it. For the visualization use the function
plt.imshow(image.reshape(image_shape))
Pass to the write_answer_4 function the list of the indices of the faces with the largest relative contribution to the variance of each component; indexing starts at 0.
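The double loop in the code above can also be written in vectorised form (a small sketch; fit_transform already returns centred projections, so no extra centring is needed):
cos2 = data_transformed**2 / np.sum(data_transformed**2, axis=1, keepdims=True)
print(np.argmax(cos2, axis=0))  # should reproduce list_pc from the loop version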
End of explanation
C1 = np.array([[10,0],[0,0.5]])
phi = np.pi/3
C2 = np.dot(C1, np.array([[np.cos(phi), np.sin(phi)],
[-np.sin(phi),np.cos(phi)]]))
data = np.vstack([np.random.multivariate_normal(mu, C1, size=50),
np.random.multivariate_normal(mu, C2, size=50)])
plt.scatter(data[:,0], data[:,1])
# plot the true directions we are interested in
plt.plot(data[:,0], np.zeros(data[:,0].size), color="g")
plt.plot(data[:,0], 3**0.5*data[:,0], color="g")
# fit the pca model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Analysing the main drawbacks of principal component analysis
The problems considered above are, of course, toy examples, because their data were generated to match the assumptions of PCA. In practice these assumptions are far from always satisfied. Let us look at the typical failure modes of PCA that are worth keeping in mind before applying it.
The directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions:
End of explanation
C = np.array([[0.5,0],[0,10]])
mu1 = np.array([-2,0])
mu2 = np.array([2,0])
data = np.vstack([np.random.multivariate_normal(mu1, C, size=50),
np.random.multivariate_normal(mu2, C, size=50)])
plt.scatter(data[:,0], data[:,1])
# fit the pca model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model)
plt.draw()
Explanation: What is the problem here, and why does PCA perform poorly? The answer is simple: the components of interest in these data are correlated with each other (or non-orthogonal, depending on the terminology you prefer). Finding such transformations requires more sophisticated methods that are beyond the scope of principal component analysis.
For the curious: what can be applied directly to the output of PCA to obtain such non-orthogonal transformations is known as rotation methods. They are usually discussed in connection with another dimensionality-reduction method called Factor Analysis (FA), but nothing prevents applying them to principal components as well.
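A minimal sketch of such a rotation in scikit-learn (an assumption here: scikit-learn 0.24 or later, which exposes the rotation argument on FactorAnalysis; only orthogonal rotations such as varimax are built in, oblique rotations need other packages):
from sklearn.decomposition import FactorAnalysis
fa = FactorAnalysis(n_components=2, rotation='varimax')  # varimax-rotated loadings
fa.fit(data)
print(fa.components_)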
The interesting direction in the data does not coincide with the direction of maximal variance
Consider an example where the variances do not reflect the directions we actually care about:
End of explanation |
15,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
There are many libraries for plotting in Python. The de facto standard is matplotlib. Its examples and gallery are particularly useful references.
Matplotlib is most useful if you have data in numpy arrays. We can then plot standard single graphs straightforwardly
Step1: The above command is only needed if you are plotting in a Jupyter notebook.
We now construct some data
Step2: And then produce a line plot
Step3: We can add labels and titles
Step4: We can change the plotting style, and use LaTeX style notation where needed
Step5: We can plot two lines at once, and add a legend, which we can position
Step6: We would probably prefer to use subplots. At this point we have to leave the simple interface, and start building the plot using its individual components, figures and axes, which are objects to manipulate
Step7: The axes variable contains all of the separate axes that you may want. This makes it easy to construct many subplots using a loop
Step8: Matplotlib will allow you to generate and place axes pretty much wherever you like, to use logarithmic scales, to do different types of plot, and so on. Check the examples and gallery for details.
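For example, switching to a logarithmic $y$-axis is a one-line change (a small sketch reusing the x array constructed earlier):
pyplot.plot(x, numpy.exp(5 * x))
pyplot.yscale('log')   # exponential growth becomes a straight line on a log axis
pyplot.xlabel(r'$x$')
pyplot.show()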
Exercise
The logistic map builds a sequence of numbers ${ x_n }$ using the relation
$$
x_{n+1} = r x_n \left( 1 - x_n \right),
$$
where $0 \le x_0 \le 1$.
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases. What does this suggest about the long-term behaviour of the sequence?
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
Step9: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly. | Python Code:
%matplotlib inline
Explanation: Plotting
There are many libraries for plotting in Python. The de facto standard is matplotlib. Its examples and gallery are particularly useful references.
Matplotlib is most useful if you have data in numpy arrays. We can then plot standard single graphs straightforwardly:
End of explanation
import numpy
x = numpy.linspace(0, 1)
y1 = numpy.sin(numpy.pi * x) + 0.1 * numpy.random.rand(50)
y2 = numpy.cos(3.0 * numpy.pi * x) + 0.2 * numpy.random.rand(50)
Explanation: The above command is only needed if you are plotting in a Jupyter notebook.
We now construct some data:
End of explanation
from matplotlib import pyplot
pyplot.plot(x, y1)
pyplot.show()
Explanation: And then produce a line plot:
End of explanation
pyplot.plot(x, y1)
pyplot.xlabel('x')
pyplot.ylabel('y')
pyplot.title('A single line plot')
pyplot.show()
Explanation: We can add labels and titles:
End of explanation
pyplot.plot(x, y1, linestyle='--', color='black', linewidth=3)
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title(r'A single line plot, roughly $\sin(\pi x)$')
pyplot.show()
Explanation: We can change the plotting style, and use LaTeX style notation where needed:
End of explanation
pyplot.plot(x, y1, label=r'$y_1$')
pyplot.plot(x, y2, label=r'$y_2$')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('Two line plots')
pyplot.legend(loc='lower left')
pyplot.show()
Explanation: We can plot two lines at once, and add a legend, which we can position:
End of explanation
fig, axes = pyplot.subplots(nrows=1, ncols=2, figsize=(10,6))
axis1 = axes[0]
axis1.plot(x, y1)
axis1.set_xlabel(r'$x$')
axis1.set_ylabel(r'$y_1$')
axis2 = axes[1]
axis2.plot(x, y2)
axis2.set_xlabel(r'$x$')
axis2.set_ylabel(r'$y_2$')
fig.tight_layout()
pyplot.show()
Explanation: We would probably prefer to use subplots. At this point we have to leave the simple interface, and start building the plot using its individual components, figures and axes, which are objects to manipulate:
End of explanation
data = []
for nx in range(2,5):
for ny in range(2,5):
data.append(numpy.sin(nx * numpy.pi * x) + numpy.cos(ny * numpy.pi * x))
fig, axes = pyplot.subplots(nrows=3, ncols=3, figsize=(10,10))
for nrow in range(3):
for ncol in range(3):
ndata = ncol + 3 * nrow
axes[nrow, ncol].plot(x, data[ndata])
axes[nrow, ncol].set_xlabel(r'$x$')
axes[nrow, ncol].set_ylabel(r'$\sin({} \pi x) + \cos({} \pi x)$'.format(nrow+2, ncol+2))
fig.tight_layout()
pyplot.show()
Explanation: The axes variable contains all of the separate axes that you may want. This makes it easy to construct many subplots using a loop:
End of explanation
def logistic(x0, r, N = 1000):
sequence = [x0]
xn = x0
for n in range(N):
xnew = r*xn*(1.0-xn)
sequence.append(xnew)
xn = xnew
return sequence
x0 = 0.5
N = 2000
sequence1 = logistic(x0, 1.5, N)
sequence2 = logistic(x0, 3.5, N)
pyplot.plot(sequence1[-100:], 'b-', label = r'$r=1.5$')
pyplot.plot(sequence2[-100:], 'k-', label = r'$r=3.5$')
pyplot.xlabel(r'$n$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: Matplotlib will allow you to generate and place axes pretty much wherever you like, to use logarithmic scales, to do different types of plot, and so on. Check the examples and gallery for details.
Exercise
The logistic map builds a sequence of numbers ${ x_n }$ using the relation
$$
x_{n+1} = r x_n \left( 1 - x_n \right),
$$
where $0 \le x_0 \le 1$.
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases. What does this suggest about the long-term behaviour of the sequence?
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
End of explanation
# This is the "best" way of doing it, but we may not have much numpy yet
# r_values = numpy.arange(1.0, 4.0, 0.01)
# This way only uses lists
r_values = []
for i in range(302):
r_values.append(1.0 + 0.01 * i)
x0 = 0.5
N = 2000
for r in r_values:
sequence = logistic(x0, r, N)
pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.')
pyplot.xlabel(r'$r$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
End of explanation |
15,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is MSE about?
MSE, or Maximum Spacing Estimation, is about maximizing the geometric mean of spacings in the data. Such spacings are the differences between the values of the cumulative distribution function at neighbouring data points. It is also known as Maximum Product of Spacings estimation, or MSP. The idea is to choose the parameter values that make the observed data as uniform as possible, according to a specific quantitative measure of uniformity.
MSE tends to be better than ML at estimating "J"-shaped distributions. So let's try it out with a type I Pareto!
Imports and constants
Step1: We start with the assumption of a variable X with a CDF $F(x;\theta_0)$, where $\theta_0 \in \Theta$ is an unknown parameter to be estimated, and from which we can take iid random samples.
The spacings over which we will estimate the geometric mean ($D_i$) are the differences between $F(x(i);\theta)$ and $F(x(i-1);\theta)$, for i in [1, n+1].
With those assumptions, we do the following
Step2: for this nice plot I took some inspiration from this stack overflow answer
Step3: Moar Plots
Alright, this is how the CDF of the Pareto I distribution looks. And here are some colors with funny names.
import numpy as np
from scipy.stats.mstats import gmean
from scipy.stats import pareto
import matplotlib.pyplot as plt
print(plt.style.available)
plt.style.use('ggplot')
#this is the real shape parameter that we will try to approximate with the estimators
realAlpha=3.
#the left limit of this Pareto distribution
realXm=1.
#pareto distribution CDF (all xi in x are >=1)
parICDF= lambda x,alpha: 1.-(realXm/x)**alpha
#number of samples to estimate from
nSamples=400
#multiplier for alphas in the search
multAlphas=100.
#select some range of alphas from which to do the MSE
low=1.0
high=5.0
alphasForSearch=np.linspace(low,high,num=(high-low)*multAlphas)
#which distribution
distribution=pareto
def sampleSpacings(x):
#calculates the sample spacings of X
D=[xi-x[i] for i,xi in enumerate(x[1:])]
return np.array(D)
Explanation: What is MSE about?
MSE, or Maximum Spacing Estimation, is about maximizing the geometric mean of spacings in the data. Such spacings are the differences between the values of the cumulative distribution function at neighbouring data points. It is also known as Maximum Product of Spacings estimation, or MSP. The idea is to choose the parameter values that make the observed data as uniform as possible, according to a specific quantitative measure of uniformity.
MSE tends to be better than ML at estimating "J"-shaped distributions. So let's try it out with a type I Pareto!
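Concretely, writing $D_i(\theta) = F(x_{(i)};\theta) - F(x_{(i-1)};\theta)$ for the spacings of the ordered sample, with the conventions $F(x_{(0)};\theta)=0$ and $F(x_{(n+1)};\theta)=1$, the estimator picks
$$
\hat{\theta} = \arg\max_{\theta} \frac{1}{n+1} \sum_{i=1}^{n+1} \log D_i(\theta),
$$
which is the same as maximizing the geometric mean of the $D_i$; the scoring loop further down approximates this by taking the log of the geometric mean of the sample spacings for each candidate alpha.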
Imports and constants
End of explanation
#take "repetitions" samples from each sample size
allSamplesizes=[10,50,100,300,500,10000]
results={'ML':{sz:[] for sz in allSamplesizes},
'MSE':{sz:[] for sz in allSamplesizes}}
#score={'ML':{sz:[] for sz in allSamplesizes},
# 'MSE':{sz:[] for sz in allSamplesizes}}
repetitions=400 #take "repetitions" times for that sample size
for samplesize in allSamplesizes: #using the keys for the different sample sizes
for n in range(repetitions):
#Get an iid random sample ${x1, …, xn}$ from a variable.
sample=distribution.rvs(realAlpha,size=samplesize)
#Sort the elements of the samples. This becomes the ordered sample ${x(1), …, x(n)}$.
orderedSample=np.sort(sample)
#logarithm of the geometric mean of the sample spacings from the CDF.
#the CDF comes from an expression that requests samples (X)
#and an alpha paramter
scoresMSE=np.array([np.log(gmean(sampleSpacings(parICDF(orderedSample,alphax)))) for alphax in alphasForSearch])
#print 'MSE scores: {}'.format(scoresMSE)
bestMSE=scoresMSE.argmax()
results['MSE'][samplesize].append(alphasForSearch[bestMSE])
#score['MSE'][samplesize].append(scoresMSE[bestMSE])
#print 'best alpha MSE:{} -> {}'.format(alphasForSearch[bestMSE],scoresMSE[bestMSE])
#now lets look at log scores for ML
scoresML=np.array([np.sum(np.log(distribution.pdf(sample,alphax)))/nSamples for alphax in alphasForSearch])
bestML=scoresML.argmax()
results['ML'][samplesize].append(alphasForSearch[bestML])
#score['ML'][samplesize].append(scoresML[bestML])
#print 'ML scores: {}'.format(scoresML)
#print 'best alpha ML: {} -> {}'.format(alphasForSearch[bestML],scoresML[bestML])
Explanation: We start with the assumption of a variable X with a CDF $F(x;\theta_0)$, where $\theta_0 \in \Theta$ is an unknown parameter to be estimated, and from which we can take iid random samples.
The spacings over which we will estimate the geometric mean ($D_i$) are the differences between $F(x(i);\theta)$ and $F(x(i-1);\theta)$, for i in [1, n+1].
With those assumptions, we do the following:
End of explanation
xx=[]
yy=[]
for indx,samplesize in enumerate(allSamplesizes):
yy.append(results['MSE'][samplesize])
xx.append(np.random.normal(len(yy), 0.04, size=len(results['MSE'][samplesize])))
yy.append(results['ML'][samplesize])
xx.append(np.random.normal(len(yy), 0.04, size=len(results['MSE'][samplesize])))
print(np.shape(yy))
print(np.shape(xx))
import pylab as P
P.figure(figsize=(12,7)) #size in inches
bp = P.boxplot(yy)
P.plot(xx,yy, 'r.', alpha=0.15)
P.xticks([x+1 for x in range(len(yy))], [x.format(sz) for sz in allSamplesizes for x in ('MSE_{}','ML_{}')])
P.xlabel('# of samples given to the estimator')
P.ylabel('Value of estimated shape parameter')
P.title('MSE,ML: variation of estimated shape parameter of a Pareto I\n')
#horiz line with true shape value
P.plot([x+1 for x in range(len(yy))],realAlpha*np.ones(len(yy)),
color='dodgerblue', ls='dashed',linewidth='3')
P.show()
Explanation: for this nice plot I took some inspiration from this stack overflow answer:
http://stackoverflow.com/questions/29779079/adding-a-scatter-of-points-to-a-boxplot-using-matplotlib
Basically, we put all the points roughly at their location in the boxplot, but we "move them around" a bit so that it is easier for the eye to see how dense each region of the boxplot is. It is more of a way to make the plot easier to look at; the boxplot itself already carries pretty much all the information you need in this case.
End of explanation
#lets plot some CDFs of the pareto I CDF, for different alphas
alphasubset={0.1:'black',
1.:'midnightblue',
1.5:'teal',
2.5:'maroon',
2.6:'salmon',
3.:'peru',
5.:'gold'}
x=np.linspace(realXm,5,num=300)
fig = plt.figure()
ax = plt.subplot(111)
plt.title('CDF Pareto I for different shape (alpha) ranges')
for alphax in alphasubset:
if alphax==realAlpha:
ax.plot(x, parICDF(x,alphax), color=alphasubset[alphax], ls='dashed',
label='Target Alpha for this Exercise={}'.format(alphax))
else:
ax.plot(x, parICDF(x,alphax), color=alphasubset[alphax], label='alpha={}'.format(alphax))
boxp = ax.get_position()
ax.set_position([boxp.x0, boxp.y0, boxp.width, boxp.height])
# Put a legend to the right of the current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=4)
plt.grid(True)
plt.show()
Explanation: Moar Plots
Alright, this is how the CDF of the Pareto I distribution looks. And here are some colors with funny names.
End of explanation |
15,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing Lemke-Howson in Python
Daisuke Oyama
Faculty of Economics, University of Tokyo
Step1: To be consistent with the 0-based indexing in Python,
we call the players 0 and 1.
Complementary pivoting
Build the tableau for each player
Step2: Denote the player 0's variables by $x_0, x_1, x_2$ and $s_3, s_4$,
and the player 1's variables by $r_0, r_1, r_2$ and $y_3, y_4$.
The initial basic variables are $s_3, s_4$ and $r_0, r_1, r_2$.
Step3: Let the initial pivot index be 1,
so that $x_1$ is to enter the basis
Step4: Step 1
Determine the basic variable to leave by the minimum ratio test
Step5: $s_4$ is the basic variable that leaves the basis.
Update the tableau
Step6: Update the basic variables and the pivot for the next step
Step7: That is, $x_1$ has become a basic variable,
while $s_4$ becomes a nonbasic variable (i.e., $s_4 = 0$).
If the new pivot is equal to the initial pivot, we are done.
Step8: But this is False, so we continue.
In the next step, the variable $y_4$ which is complementary to $s_4$ (i.e., $y_4 s_4 = 0$)
becomes a basic variable.
Step 2
Repeat the same exercise as above for tableau1.
Step9: That is, $y_4$ has become a basic variable,
while $r_2$ becomes a nonbasic variable.
If the new pivot is equal to the initial pivot, we are done.
Step10: But this is False, so we continue.
In the next step, the variable $x_2$ which is complementary to $r_2$
becomes a basic variable.
Step 3
Step11: Step 4
Step12: Note on the warning
Step13: We can just ignore it,
but we can also suppress it by
np.errstate
with a with clause
Step14: Now we have complete labeling, so we are done.
Obtaining the Nash equilibrium
The basic variables are
Step15: $x_2$ and $x_1$, and
Step16: $r_0$, $y_3$, and $y_4$.
The indices of the basic variables corresponding to $x$
Step17: The indices of the basic variables corresponding to $y$
Step18: The values of the basic variables are stored in the last columns of the tableaux.
The values of $x_2$ and $x_1$ are
Step19: The values of $y_3$ and $y_4$ are
Step20: We need to normalize these values so that $x$ and $y$ are probability distributions.
Step21: The Nash equilibrium we have found is
Step23: Wrapping the procedure in functions
Step24: Note
Step25: Enumerating all equilibria that are reached by Lemke-Howson paths
Step26: There are games in which some of the Nash equilibria
cannot be reached by Lemke-Howson path from the origin,
as in the game in (3.7) in von Stengel (2007).
Step28: Integer pivoting
Step30: Let us use the Rational class
in SymPy
to represent mixed actions in rational numbers.
Step31: Lexico-minimum ratio test
Step32: Caveat
Step33: Note
Step34: Consider the following degenerate game
Step35: lemke_howson_all fails to work properly
Step36: With lexico-minimum test
Step37: Due to the paricular fixed way of introducing the perturbations $(\varepsilon^1, \ldots, \varepsilon^k)$,
the output does depend on the ordering of the actions
Step38: Essentially, the exercise corresponds to considering
$$
\begin{pmatrix}
\dfrac{3}{1+\varepsilon_1} & \dfrac{2}{1+\varepsilon_1} & \dfrac{3}{1+\varepsilon_1} \\
\dfrac{3}{1+\varepsilon_2} & \dfrac{6}{1+\varepsilon_2} & \dfrac{1}{1+\varepsilon_2}
\end{pmatrix},
$$
where $(\varepsilon_1, \varepsilon_2) =(\varepsilon^1, \varepsilon^2)$ or
$(\varepsilon_1, \varepsilon_2) =(\varepsilon^2, \varepsilon^1)$.
In fact | Python Code:
import numpy as np
np.set_printoptions(precision=5) # Reduce the number of digits printed
A = np.array([[3, 3],
[2, 5],
[0 ,6]])
B_T = np.array([[3, 2, 3],
[2, 6, 1]])
m, n = A.shape # Numbers of actions of the players
Explanation: Implementing Lemke-Howson in Python
Daisuke Oyama
Faculty of Economics, University of Tokyo
End of explanation
# Player 0
tableau0 = np.empty((n, m+n+1))
tableau0[:, :m] = B_T
tableau0[:, m:m+n] = np.identity(n)
tableau0[:, -1] = 1
# One-line command
# tableau0 = np.hstack((B_T, np.identity(n), np.ones((n, 1))))
tableau0
# Player 1
tableau1 = np.empty((m, n+m+1))
tableau1[:, :m] = np.identity(m)
tableau1[:, m:m+n] = A
tableau1[:, -1] = 1
# One-line command
# tableau1 = np.hstack((np.identity(m), A, np.ones((m, 1))))
tableau1
Explanation: To be consistent with the 0-based indexing in Python,
we call the players 0 and 1.
Complementary pivoting
Build the tableau for each player:
End of explanation
basic_vars0 = np.arange(m, m+n)
basic_vars0
basic_vars1 = np.arange(0, m)
basic_vars1
Explanation: Denote the player 0's variables by $x_0, x_1, x_2$ and $s_3, s_4$,
and the player 1's variables by $r_0, r_1, r_2$ and $y_3, y_4$.
The initial basic variables are $s_3, s_4$ and $r_0, r_1, r_2$.
End of explanation
init_pivot = 1
# Current pivot
pivot = init_pivot
Explanation: Let the initial pivot index be 1,
so that $x_1$ is to enter the basis:
End of explanation
ratios = tableau0[:, -1] / tableau0[:, pivot]
ratios
row_min = ratios.argmin()
row_min
basic_vars0[row_min]
Explanation: Step 1
Determine the basic variable to leave by the minimum ratio test:
End of explanation
tableau0[row_min, :] /= tableau0[row_min, pivot]
tableau0
for i in range(tableau0.shape[0]):
if i != row_min:
tableau0[i, :] -= tableau0[row_min, :] * tableau0[i, pivot]
# Another approach by a NumPy trick
# ind = np.ones(tableau0.shape[0], dtype=bool)
# ind[row_min] = False
# tableau0[ind, :] -= tableau0[row_min, :] * tableau0[ind, :][:, [pivot]]
tableau0
Explanation: $s_4$ is the basic variable that leaves the basis.
Update the tableau:
End of explanation
basic_vars0[row_min], pivot = pivot, basic_vars0[row_min]
basic_vars0
basic_vars0[row_min]
pivot
Explanation: Update the basic variables and the pivot for the next step:
End of explanation
pivot == init_pivot
Explanation: That is, $x_1$ has become a basic variable,
while $s_4$ becomes a nonbasic variable (i.e., $s_4 = 0$).
If the new pivot is equal to the initial pivot, we are done.
End of explanation
tableau1
ratios = tableau1[:, -1] / tableau1[:, pivot]
row_min = ratios.argmin()
row_min
tableau1[row_min, :] /= tableau1[row_min, pivot]
tableau1
ind = np.ones(tableau1.shape[0], dtype=bool)
ind[row_min] = False
tableau1[ind, :] -= tableau1[row_min, :] * tableau1[ind, :][:, [pivot]]
tableau1
basic_vars1[row_min], pivot = pivot, basic_vars1[row_min]
pivot
basic_vars1
basic_vars1[row_min]
Explanation: But this is False, so we continue.
In the next step, the variable $y_4$ which is complementary to $s_4$ (i.e., $y_4 s_4 = 0$)
becomes a basic variable.
Step 2
Repeat the same exercise as above for tableau1.
End of explanation
pivot == init_pivot
Explanation: That is, $y_4$ has become a basic variable,
while $r_2$ becomes a nonbasic variable.
If the new pivot is equal to the initial pivot, we are done.
End of explanation
tableau0
ratios = tableau0[:, -1] / tableau0[:, pivot]
row_min = ratios.argmin()
row_min
tableau0[row_min, :] /= tableau0[row_min, pivot]
ind = np.ones(tableau0.shape[0], dtype=bool)
ind[row_min] = False
tableau0[ind, :] -= tableau0[row_min, :] * tableau0[ind, :][:, [pivot]]
tableau0
basic_vars0[row_min], pivot = pivot, basic_vars0[row_min]
pivot
pivot == init_pivot
Explanation: But this is False, so we continue.
In the next step, the variable $x_2$ which is complementary to $r_2$
becomes a basic variable.
Step 3
End of explanation
tableau1
ratios = tableau1[:, -1] / tableau1[:, pivot]
row_min = ratios.argmin()
row_min
Explanation: Step 4
End of explanation
tableau1[:, pivot]
Explanation: Note on the warning:
tableau1[:, pivot] has a zero entry, so we get a "divide by zero" warning.
End of explanation
with np.errstate(divide='ignore'):
ratios = tableau1[:, -1] / tableau1[:, pivot]
ratios
tableau1[row_min, :] /= tableau1[row_min, pivot]
ind = np.ones(tableau1.shape[0], dtype=bool)
ind[row_min] = False
tableau1[ind, :] -= tableau1[row_min, :] * tableau1[ind, :][:, [pivot]]
tableau1
basic_vars1[row_min], pivot = pivot, basic_vars1[row_min]
pivot
pivot == init_pivot
Explanation: We can just ignore it,
but we can also suppress it by
np.errstate
with a with clause:
End of explanation
basic_vars0
Explanation: Now we have complete labeling, so we are done.
Obtaining the Nash equilibrium
The basic variables are:
End of explanation
basic_vars1
Explanation: $x_2$ and $x_1$, and
End of explanation
basic_vars0[basic_vars0 < m]
Explanation: $r_0$, $y_3$, and $y_4$.
The indices of the basic variables corresponding to $x$:
End of explanation
basic_vars1[basic_vars1 >= m]
Explanation: The indices of the basic variables corresponding to $y$:
End of explanation
tableau0[basic_vars0 < m, -1]
Explanation: The values of the basic variables are stored in the last columns of the tableaux.
The values of $x_2$ and $x_1$ are:
End of explanation
tableau1[basic_vars1 >= m, -1]
Explanation: The values of $y_3$ and $y_4$ are:
End of explanation
x = np.zeros(m)
x[basic_vars0[basic_vars0 < m]] = tableau0[basic_vars0 < m, -1]
x /= x.sum()
x
y = np.zeros(n)
y[basic_vars1[basic_vars1 >= m] - m] = tableau1[basic_vars1 >= m, -1]
y /= y.sum()
y
Explanation: We need to normalize these values so that $x$ and $y$ are probability distributions.
End of explanation
(x, y)
Explanation: The Nash equilibrium we have found is:
End of explanation
def min_ratio_test(tableau, pivot):
ind_nonpositive = tableau[:, pivot] <= 0
with np.errstate(divide='ignore', invalid='ignore'):
ratios = tableau[:, -1] / tableau[:, pivot]
ratios[ind_nonpositive] = np.inf
row_min = ratios.argmin()
return row_min
def pivoting(tableau, pivot, pivot_row):
    """
    Perform a pivoting step.
    Modify `tableau` in place (and return its view).
    """
# Row indices except pivot_row
ind = np.ones(tableau.shape[0], dtype=bool)
ind[pivot_row] = False
# Store the values in the pivot column, except for row_min
# Made 2-dim by np.newaxis
multipliers = tableau[ind, pivot, np.newaxis]
# Update the tableau
tableau[pivot_row, :] /= tableau[pivot_row, pivot]
tableau[ind, :] -= tableau[pivot_row, :] * multipliers
return tableau
def lemke_howson_tbl(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot):
m, n = tableau1.shape[0], tableau0.shape[0]
tableaux = (tableau0, tableau1)
basic_vars = (basic_vars0, basic_vars1)
init_player = int((basic_vars[0]==init_pivot).any())
players = [init_player, 1 - init_player]
pivot = init_pivot
while True:
for i in players:
# Determine the leaving variable
row_min = min_ratio_test(tableaux[i], pivot)
# Pivoting step: modify tableau in place
pivoting(tableaux[i], pivot, row_min)
# Update the basic variables and the pivot
basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min]
if pivot == init_pivot:
break
else:
continue
break
out_dtype = np.result_type(*tableaux)
out = np.zeros(m+n, dtype=out_dtype)
for i, (start, num) in enumerate(zip((0, m), (m, n))):
ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i]
out[basic_vars[i][ind]] = tableaux[i][ind, -1]
return out
Explanation: Wrapping the procedure in functions
End of explanation
def normalize(unnormalized, m, n):
normalized = np.empty(m+n)
for (start, num) in zip((0, m), (m, n)):
s = unnormalized[start:start+num].sum()
if s != 0:
normalized[start:start+num] = unnormalized[start:start+num] / s
else:
normalized[start:start+num] = 0
return normalized[:m], normalized[m:]
def lemke_howson(A, B_T, init_pivot=0, return_tableaux=False):
m, n = A.shape
tableaux = (np.hstack((B_T, np.identity(n), np.ones((n, 1)))),
np.hstack((np.identity(m), A, np.ones((m, 1)))))
basic_vars = (np.arange(m, m+n), np.arange(0, m))
unnormalized = lemke_howson_tbl(*tableaux, *basic_vars, init_pivot)
normalized = normalize(unnormalized, m, n)
if return_tableaux:
return normalized, tableaux, basic_vars
return normalized
init_pivot = 1
x, y = lemke_howson(A, B_T, init_pivot)
print("Nash equilibrium found\n", (x, y))
lemke_howson(A, B_T, init_pivot, return_tableaux=True)
Explanation: Note: There is no nested break in Python;
see e.g., break two for loops.
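The for ... else idiom used in the functions above is easy to see in isolation (a tiny sketch): the else clause runs only when the inner loop finishes without break, so continue there restarts the outer loop, while a break from the inner loop falls through to the outer break.
# Minimal illustration of the break-out-of-two-loops idiom
for outer in range(3):
    for inner in range(3):
        if outer == 1 and inner == 1:
            break          # leave the inner loop
    else:
        continue           # inner loop ended without break: keep looping
    break                  # inner loop was broken: leave the outer loop too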
End of explanation
def lemke_howson_all(A, B_T):
m, n = A.shape
k = 0
NEs = []
basic_vars_list = []
player = (m <= n)
init_pivot = k
actions, tableaux, basic_vars = \
lemke_howson(A, B_T, init_pivot, return_tableaux=True)
NEs.append(actions)
basic_vars_list.append(np.sort(basic_vars[player]))
for a in range(m+n):
if a == k:
continue
init_pivot = a
actions, tableaux, basic_vars = \
lemke_howson(A, B_T, init_pivot, return_tableaux=True)
basic_vars_sorted = np.sort(basic_vars[player])
for arr in basic_vars_list:
if np.array_equal(basic_vars_sorted, arr):
break
else:
NEs.append(actions)
basic_vars_list.append(basic_vars_sorted)
unnormalized = \
lemke_howson_tbl(*tableaux, *basic_vars, init_pivot=k)
NEs.append(normalize(unnormalized, m, n))
basic_vars_list.append(np.sort(basic_vars[player]))
return NEs
lemke_howson_all(A, B_T)
Explanation: Enumerating all equilibria that are reached by Lemke-Howson paths
End of explanation
payoff_matrix_3_7 = np.array([[3, 3, 0], [4, 0, 1], [0, 4, 5]])
lemke_howson_all(payoff_matrix_3_7, payoff_matrix_3_7)
Explanation: There are games in which some of the Nash equilibria
cannot be reached by Lemke-Howson path from the origin,
as in the game in (3.7) in von Stengel (2007).
End of explanation
def pivoting_int(tableau, pivot, pivot_row, prev_pivot_el):
    """
    Perform a pivoting step with integer input data.
    Modify `tableau` in place (and return its view).
    """
# Row indices except pivot_row
ind = np.ones(tableau.shape[0], dtype=bool)
ind[pivot_row] = False
# Store the values in the pivot column, except for row_min
# Made 2-dim by np.newaxis
multipliers = tableau[ind, pivot, np.newaxis]
# Update the tableau
tableau[ind, :] *= tableau[pivot_row, pivot]
tableau[ind, :] -= tableau[pivot_row, :] * multipliers
tableau[ind, :] //= prev_pivot_el # Floor division: return int
return tableau
def lemke_howson_tbl_int(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot):
m, n = tableau1.shape[0], tableau0.shape[0]
tableaux = (tableau0, tableau1)
basic_vars = (basic_vars0, basic_vars1)
init_player = int((basic_vars[0]==init_pivot).any())
players = [init_player, 1 - init_player]
pivot = init_pivot
prev_pivot_els = np.ones(2, dtype=np.int_)
while True:
for i in players:
# Determine the leaving variable
row_min = min_ratio_test(tableaux[i], pivot)
# Pivoting step: modify tableau in place
pivoting_int(tableaux[i], pivot, row_min, prev_pivot_els[i])
# Backup the pivot element
prev_pivot_els[i] = tableaux[i][row_min, pivot]
# Update the basic variables and the pivot
basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min]
if pivot == init_pivot:
break
else:
continue
break
out = np.zeros(m+n, dtype=np.int_)
for i, (start, num) in enumerate(zip((0, m), (m, n))):
ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i]
out[basic_vars[i][ind]] = tableaux[i][ind, -1]
return out
Explanation: Integer pivoting
End of explanation
import sympy
def normalize_rational(unnormalized, m, n):
    """Normalize the integer array `unnormalized` with rational numbers."""
normalized = np.empty(m+n, np.object_)
for (start, num) in zip((0, m), (m, n)):
s = unnormalized[start:start+num].sum()
if s != 0:
for k in range(start, start+num):
normalized[k] = sympy.Rational(sympy.S(unnormalized[k]), sympy.S(s))
else:
normalized[start:start+num] = sympy.Rational(0)
return normalized[:m], normalized[m:]
def lemke_howson_int(A, B_T, init_pivot=0, rational=False, return_tableaux=False):
m, n = A.shape
tableaux = (np.hstack((B_T, np.identity(n, dtype=np.int_), np.ones((n, 1), dtype=np.int_))),
np.hstack((np.identity(m, dtype=np.int_), A, np.ones((m, 1), dtype=np.int_))))
basic_vars = (np.arange(m, m+n), np.arange(0, m))
unnormalized = lemke_howson_tbl_int(*tableaux, *basic_vars, init_pivot)
if rational:
normalized = normalize_rational(unnormalized, m, n)
else:
normalized = normalize(unnormalized, m, n)
if return_tableaux:
return normalized, tableaux, basic_vars
return normalized
def lemke_howson_all_int(A, B_T, rational=False):
m, n = A.shape
k = 0
NEs = []
basic_vars_list = []
player = (m <= n)
init_pivot = k
actions, tableaux, basic_vars = \
lemke_howson_int(A, B_T, init_pivot, rational=rational, return_tableaux=True)
NEs.append(actions)
basic_vars_list.append(np.sort(basic_vars[player]))
for a in range(m+n):
if a == k:
continue
init_pivot = a
actions, tableaux, basic_vars = \
lemke_howson_int(A, B_T, init_pivot, rational=rational, return_tableaux=True)
basic_vars_sorted = np.sort(basic_vars[player])
for arr in basic_vars_list:
if np.array_equal(basic_vars_sorted, arr):
break
else:
NEs.append(actions)
basic_vars_list.append(basic_vars_sorted)
unnormalized = \
lemke_howson_tbl_int(*tableaux, *basic_vars, init_pivot=k)
if rational:
normalized = normalize_rational(unnormalized, m, n)
else:
normalized = normalize(unnormalized, m, n)
NEs.append(normalized)
basic_vars_list.append(np.sort(basic_vars[player]))
return NEs
lemke_howson_int(A, B_T, init_pivot=1)
lemke_howson_int(A, B_T, init_pivot=1, return_tableaux=True)
lemke_howson_int(A, B_T, init_pivot=1, rational=True)
lemke_howson_int(A, B_T, init_pivot=0, rational=True)
lemke_howson_all_int(A, B_T)
lemke_howson_all_int(A, B_T, rational=True)
Explanation: Let us use the Rational class
in SymPy
to represent mixed actions in rational numbers.
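A quick illustration of the exact arithmetic this gives (a small sketch):
third = sympy.Rational(1, 3)
print(third + sympy.Rational(1, 6))            # 1/2, stored exactly
print(sympy.Rational(sympy.S(2), sympy.S(6)))  # automatically reduced to 1/3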
End of explanation
def min_ratio_test_no_tie_breaking(tableau, pivot, test_col, argmins, num_argmins):
idx = 0
i = argmins[idx]
if tableau[i, pivot] > 0:
min_ratio = tableau[i, test_col] / tableau[i, pivot]
else:
min_ratio = np.inf
for k in range(1, num_argmins):
i = argmins[k]
if tableau[i, pivot] <= 0:
continue
ratio = tableau[i, test_col] / tableau[i, pivot]
if ratio > min_ratio:
continue
elif ratio < min_ratio:
min_ratio = ratio
idx = 0
elif ratio == min_ratio:
idx += 1
argmins[idx] = k
return idx + 1
def lex_min_ratio_test(tableau, pivot, slack_start):
num_rows = tableau.shape[0]
argmins = np.arange(num_rows)
num_argmins = num_rows
num_argmins = min_ratio_test_no_tie_breaking(tableau, pivot, -1, argmins, num_argmins)
if num_argmins == 1:
return argmins[0]
for j in range(slack_start, slack_start+num_rows):
if j == pivot:
continue
num_argmins = min_ratio_test_no_tie_breaking(tableau, pivot, j, argmins, num_argmins)
if num_argmins == 1:
break
return argmins[0]
Explanation: Lexico-minimum ratio test
End of explanation
2/3
1 - 1/3
1 - 1/3 == 2/3
Explanation: Caveat:
Because of rounding errors, one should not rely on equality between floating point numbers.
For example:
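The usual remedy, when exact arithmetic is not available, is a tolerance-based comparison (a small sketch):
import math
print(math.isclose(1 - 1/3, 2/3))  # True
print(np.isclose(1 - 1/3, 2/3))    # True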
End of explanation
def lemke_howson_tbl_int_lex_min(tableau0, tableau1, basic_vars0, basic_vars1, init_pivot):
m, n = tableau1.shape[0], tableau0.shape[0]
tableaux = (tableau0, tableau1)
basic_vars = (basic_vars0, basic_vars1)
init_player = int((basic_vars[0]==init_pivot).any())
players = [init_player, 1 - init_player]
pivot = init_pivot
prev_pivot_els = np.ones(2, dtype=np.int_)
slack_starts = (m, 0)
while True:
for i in players:
# Determine the leaving variable
row_min = lex_min_ratio_test(tableaux[i], pivot, slack_starts[i])
# Pivoting step: modify tableau in place
pivoting_int(tableaux[i], pivot, row_min, prev_pivot_els[i])
# Backup the pivot element
prev_pivot_els[i] = tableaux[i][row_min, pivot]
# Update the basic variables and the pivot
basic_vars[i][row_min], pivot = pivot, basic_vars[i][row_min]
if pivot == init_pivot:
break
else:
continue
break
out = np.zeros(m+n, dtype=np.int_)
for i, (start, num) in enumerate(zip((0, m), (m, n))):
ind = basic_vars[i] < start + num if i == 0 else start <= basic_vars[i]
out[basic_vars[i][ind]] = tableaux[i][ind, -1]
return out
def lemke_howson_int_lex_min(A, B_T, init_pivot=0, rational=False, return_tableaux=False):
m, n = A.shape
tableaux = (np.hstack((B_T, np.identity(n, dtype=np.int_), np.ones((n, 1), dtype=np.int_))),
np.hstack((np.identity(m, dtype=np.int_), A, np.ones((m, 1), dtype=np.int_))))
basic_vars = (np.arange(m, m+n), np.arange(0, m))
unnormalized = lemke_howson_tbl_int_lex_min(*tableaux, *basic_vars, init_pivot)
if rational:
normalized = normalize_rational(unnormalized, m, n)
else:
normalized = normalize(unnormalized, m, n)
if return_tableaux:
return normalized, tableaux, basic_vars
return normalized
def lemke_howson_all_int_lex_min(A, B_T, rational=False):
m, n = A.shape
k = 0
NEs = []
basic_vars_list = []
player = (m <= n)
init_pivot = k
actions, tableaux, basic_vars = \
lemke_howson_int_lex_min(A, B_T, init_pivot, rational=rational, return_tableaux=True)
NEs.append(actions)
basic_vars_list.append(np.sort(basic_vars[player]))
for a in range(m+n):
if a == k:
continue
init_pivot = a
actions, tableaux, basic_vars = \
lemke_howson_int_lex_min(A, B_T, init_pivot, rational=rational, return_tableaux=True)
basic_vars_sorted = np.sort(basic_vars[player])
for arr in basic_vars_list:
if np.array_equal(basic_vars_sorted, arr):
break
else:
NEs.append(actions)
basic_vars_list.append(basic_vars_sorted)
unnormalized = \
lemke_howson_tbl_int_lex_min(*tableaux, *basic_vars, init_pivot=k)
if rational:
normalized = normalize_rational(unnormalized, m, n)
else:
normalized = normalize(unnormalized, m, n)
NEs.append(normalized)
basic_vars_list.append(np.sort(basic_vars[player]))
return NEs
Explanation: Note:
In comparing $\frac{a}{b}$ and $\frac{a'}{b'}$,
one may instead compare $a b'$ and $a' b$:
when these are integers, the latter involves only integers.
(This, of course, does not apply only for lexico-minimum test.)
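A minimal sketch of that idea (a hypothetical helper, not part of the code above; it assumes positive denominators, which holds for the entries used in the ratio tests):
def ratio_less(num1, den1, num2, den2):
    # True if num1/den1 < num2/den2, assuming den1 > 0 and den2 > 0
    return num1 * den2 < num2 * den1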
End of explanation
C = np.array([[3, 3],
[2, 5],
[0 ,6]])
D_T = np.array([[3, 2, 3],
[3, 6, 1]])
Explanation: Consider the following degenerate game:
End of explanation
lemke_howson_all(C, D_T)
Explanation: lemke_howson_all fails to work properly:
End of explanation
lemke_howson_all_int_lex_min(C, D_T)
Explanation: With lexico-minimum test:
End of explanation
# Change the order of the actions of player 1
E = np.array([[3, 3],
[5, 2],
[6 ,0]])
F_T = np.array([[3, 6, 1],
[3, 2, 3]])
lemke_howson_all_int_lex_min(E, F_T)
Explanation: Due to the particular fixed way of introducing the perturbations $(\varepsilon^1, \ldots, \varepsilon^k)$,
the output does depend on the ordering of the actions:
End of explanation
eps = 0.01
D_T_eps = np.array([[3 / (1 + eps), 2 / (1 + eps), 3 / (1 + eps)],
[3 / (1 + eps**2), 6 / (1 + eps**2), 1 / (1 + eps**2)]])
F_T_eps = np.array([[3 / (1 + eps**2), 2 / (1 + eps**2), 3 / (1 + eps**2)],
[3 / (1 + eps), 6 / (1 + eps), 1 / (1 + eps)]])
lemke_howson_all(C, D_T_eps)
lemke_howson_all(C, F_T_eps)
Explanation: Essentially, the exercise corresponds to considering
$$
\begin{pmatrix}
\dfrac{3}{1+\varepsilon_1} & \dfrac{2}{1+\varepsilon_1} & \dfrac{3}{1+\varepsilon_1} \\
\dfrac{3}{1+\varepsilon_2} & \dfrac{6}{1+\varepsilon_2} & \dfrac{1}{1+\varepsilon_2}
\end{pmatrix},
$$
where $(\varepsilon_1, \varepsilon_2) =(\varepsilon^1, \varepsilon^2)$ or
$(\varepsilon_1, \varepsilon_2) =(\varepsilon^2, \varepsilon^1)$.
In fact:
End of explanation |
15,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Genetic Home Reference Data linking
The Genetic Home Reference is an NLM resource and can be found at https
Step2: Update Wikidata with corresponding information
Identify the db identifier that has the fewest number of mapping issues
Use identifiers to pull appropriate WD entry for each topic
Check entry to see if mode of inheritance already added. If not, add it
-For inheritance statements, reference
Step3: It looks like the database that is closest in number to the number of unique urls is Orphanet and MeSH, suggesting that these may have the fewest mapping issues within the data set, as GTR (Genetics Testing Registry) and OMIM may have multiple identifiers mapping to the same topic/url. GeneReviews has fewer suggesting that there are entries either missing GeneReview mappings, or that there are multiple urls mapping to a single GeneReview ID.
Step4: In terms of unique coverage, it looks like Orphanet will be the least problematic to use. Now to check it's coverage in Wikidata
Step5: Adding mode of inheritance data
Prepare the inheritance data for mapping
De-duplicate Orphanet-Wikidata mapping table as needed
Merge inheritance table to mapping table
Step9: Generate the references and write the data to Wikidata
Step11: Importing the urls to separate property for external linking
This portion is awaiting completion of the property creation and approval process | Python Code:
from wikidataintegrator import wdi_core, wdi_login, wdi_helpers
from wikidataintegrator.ref_handlers import update_retrieved_if_new_multiple_refs
import pandas as pd
from pandas import read_csv
import requests
from tqdm.notebook import trange, tqdm
import ipywidgets
import widgetsnbextension
import xml.etree.ElementTree as et
import time
datasrc = 'https://ghr.nlm.nih.gov/download/TopicIndex.xml'
## Login for Scheduled bot
print("Logging in...")
try:
from scheduled_bots.local import WDUSER, WDPASS
except ImportError:
if "WDUSER" in os.environ and "WDPASS" in os.environ:
WDUSER = os.environ['WDUSER']
WDPASS = os.environ['WDPASS']
else:
raise ValueError("WDUSER and WDPASS must be specified in local.py or as environment variables")
print("Logging in...")
import wdi_user_config ## Credentials stored in a wdi_user_config file
login_dict = wdi_user_config.get_credentials()
login = wdi_login.WDLogin(login_dict['WDUSER'], login_dict['WDPASS'])
r = requests.get(datasrc)
xml = r.text
xtree = et.fromstring(xml)
topic_of_interest = 'Conditions'
for eachtopic in xtree.findall('topic'):
if eachtopic.attrib['id'] == topic_of_interest:
new_tree = eachtopic.find('topics')
conditions = new_tree
conditions_list = []
for condition in conditions.findall('topic'):
title = condition.find('title').text
url = condition.find('url').text
try:
synonyms = condition.find('other_names')
for synonym in synonyms:
tmpdict = {'title': title,'url':url,'aka':synonym.text}
conditions_list.append(tmpdict)
except:
tmpdict = {'title': title,'url':url,'aka':'None'}
conditions_list.append(tmpdict)
conditions_df = pd.DataFrame(conditions_list)
print(len(conditions_df))
print(conditions_df.head(n=2))
conditions_url_list = conditions_df['url'].unique().tolist()
condition_url_list_test = conditions_url_list[0:3]
inher_list = []
inher_fail = []
syn_fail = []
synonyms_df = pd.DataFrame(columns = ['topic','synonym'])
xref_list = []
xref_fail = []
u=0
for u in tqdm(range(len(conditions_url_list))):
eachurl = conditions_url_list[u]
tmpurl = eachurl+'?report=json'
tmpresponse = requests.get(tmpurl)
data = tmpresponse.json()
## save the inheritance pattern data
try:
pattern_nos = data['inheritance-pattern-list']
i=0
while i < len(pattern_nos):
inher_dict = pattern_nos[i]['inheritance-pattern']
inher_dict['topic']=data['name']
inher_dict['url'] = eachurl
inher_list.append(inher_dict)
i=i+1
except:
inher_fail.append({'topic':data['name'],'url':eachurl})
## save the synonym list
try:
synlist = data['synonym-list']
syndf = pd.DataFrame(synlist)
syndf['topic']=data['name']
synonyms_df = pd.concat((synonyms_df,syndf),ignore_index=True)
except:
syn_fail.append({'topic':data['name'],'url':eachurl})
## save the xrefs
try:
xreflist = data['db-key-list']
k=0
while k < len(xreflist):
tmpdict = xreflist[k]['db-key']
tmpdict['topic'] = data['name']
tmpdict['url'] = eachurl
xref_list.append(tmpdict)
k=k+1
except:
xref_fail.append({'topic':data['name'],'url':eachurl})
u=u+1
inheritance_df = pd.DataFrame(inher_list)
inher_fail_df = pd.DataFrame(inher_fail)
syn_fail_df = pd.DataFrame(syn_fail)
xref_list_df = pd.DataFrame(xref_list)
xref_fail_df = pd.DataFrame(xref_fail)
print(inheritance_df.head(n=2))
print(xref_list_df.head(n=2))
print(inher_fail_df.head(n=2))
print(syn_fail_df.head(n=2))
print(xref_fail_df.head(n=2))
print(syn_fail_df['topic'])
print(xref_list_df['db'].unique().tolist())
## Corresponding Wikidata properties:
wdprop_dict = {'MeSH':'P486','OMIM':'P492', 'Orphanet':'P1550', 'SNOMED CT':'P5806', 'GeneReviews':'P668', 'ICD-10-CM':'P4229'}
Explanation: Genetic Home Reference Data linking
The Genetic Home Reference is an NLM resource and can be found at https://ghr.nlm.nih.gov/condition.
The topics index can be accessed at: https://ghr.nlm.nih.gov/download/TopicIndex.xml
An API call can be used to visit each topic and pull the corresponding json document. The json files will have various database identifiers which may be used to xref a condition to existing WD entities.
The topic includes 'conditions', 'genes', 'chromosomes', and the 'handbook' itself. For the initial import, we're only interested in topics that are children of 'conditions'
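As a minimal illustration of the per-topic call performed in the loop above (a sketch reusing conditions_url_list; it assumes the ?report=json view keeps behaving as it does there):
example_topic = requests.get(conditions_url_list[0] + '?report=json').json()
print(example_topic['name'])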
End of explanation
## Drop topics that map to the same url (assuming they're synonyms)
xref_no_dups = xref_list_df.drop_duplicates()
print("original df size: ",len(xref_list_df),"de-duplicated url df size: ",len(xref_no_dups))
## Check coverage of identifiers for the unique urls
xref_dups = xref_list_df.groupby(['db','key']).size().reset_index(name='count')
print("Number of unique urls: ",len(xref_no_dups['url'].unique().tolist()))
print("Entries of each db: ",xref_list_df.groupby('db').size())
## Verify coverage
print('GTR: ',len(xref_list_df.loc[xref_list_df['db']=='GTR'].groupby(['db','url']).size()))
print('GeneReviews: ',len(xref_list_df.loc[xref_list_df['db']=='GeneReviews'].groupby(['db','url']).size()))
print('ICD-10-CM: ',len(xref_list_df.loc[xref_list_df['db']=='ICD-10-CM'].groupby(['db','url']).size()))
print('MeSH: ',len(xref_list_df.loc[xref_list_df['db']=='MeSH'].groupby(['db','url']).size()))
print('OMIM: ',len(xref_list_df.loc[xref_list_df['db']=='OMIM'].groupby(['db','url']).size()))
print('Orphanet: ',len(xref_list_df.loc[xref_list_df['db']=='Orphanet'].groupby(['db','url']).size()))
print('SNOMED CT: ',len(xref_list_df.loc[xref_list_df['db']=='SNOMED CT'].groupby(['db','url']).size()))
Explanation: Update Wikidata with corresponding information
Identify the db identifier that has the fewest number of mapping issues
Use identifiers to pull appropriate WD entry for each topic
Check entry to see if mode of inheritance already added. If not, add it
-For inheritance statements, reference: Genetics Home Reference (Q62606821)
Add url for GHR (need to create new property)
Determining identifier with fewest mapping issues
End of explanation
#Investigate duplicate mappings more closely.
dups = xref_dups.loc[xref_dups['count']>1]
print("number of duplicated identifiers by type: ")
print(dups.groupby('db').size().reset_index(name='dup_counts'))
print("Number of entries affected by duplicated identfiers: ")
print(dups.groupby('db')['count'].sum().reset_index(name='entry_counts'))
Explanation: It looks like the databases whose counts are closest to the number of unique urls are Orphanet and MeSH, suggesting that these may have the fewest mapping issues within the data set, as GTR (Genetics Testing Registry) and OMIM may have multiple identifiers mapping to the same topic/url. GeneReviews has fewer, suggesting that there are entries either missing GeneReviews mappings, or that there are multiple urls mapping to a single GeneReviews ID.
End of explanation
## Generate list of unique Orphanet IDs
orphanet_ghr = xref_no_dups.loc[xref_no_dups['db']=='Orphanet']
no_orphanet_dups = orphanet_ghr.drop_duplicates('url')
print("Original Orphanet Xref list: ", len(orphanet_ghr), "Orphanet Xref list less dups: ",len(no_orphanet_dups))
orphanet_id_list = no_orphanet_dups['key'].tolist()
# Retrieve the QIDs for each Orphanet ID (The property for Orphanet IDs is P1550)
i=0
wdmap = []
wdmapfail = []
for i in tqdm(range(len(orphanet_id_list))):
orph_id = orphanet_id_list[i]
try:
sparqlQuery = "SELECT * WHERE {?topic wdt:P1550 \""+orph_id+"\"}"
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)
orpha_qid = result["results"]["bindings"][0]["topic"]["value"].replace("http://www.wikidata.org/entity/", "")
wdmap.append({'Orphanet':orph_id,'WDID':orpha_qid})
except:
wdmapfail.append(orph_id)
i=i+1
## Inspect the results for mapping or coverage issues
wdid_orpha_df = pd.DataFrame(wdmap)
print("resulting mapping table has: ",len(wdid_orpha_df)," rows.")
Explanation: In terms of unique coverage, it looks like Orphanet will be the least problematic to use. Now to check its coverage in Wikidata
End of explanation
## De-duplicate to remove anything with mapping issues
wd_orpha_no_dups = wdid_orpha_df.drop_duplicates('Orphanet').copy()
wd_orpha_no_dups = wd_orpha_no_dups.drop_duplicates('WDID')
print('de-duplicated table: ',len(wd_orpha_no_dups))
## Merge with Inheritance table
no_orphanet_dups.rename(columns={'key':'Orphanet'}, inplace=True)
inher_wd_db = inheritance_df.merge(wd_orpha_no_dups.merge(no_orphanet_dups,on='Orphanet',how='inner'), on=['url','topic'], how='inner')
print("resulting mapped table: ",len(inher_wd_db))
Explanation: Adding mode of inheritance data
Prepare the inheritance data for mapping
De-duplicate Orphanet-Wikidata mapping table as needed
Merge inheritance table to mapping table
End of explanation
print(inheritance_df.groupby(['code','memo']).size())
## Mode of inheritance = P1199
GHR_WD_codes = {'ac': 'Q13169788', ##wd:Q13169788 (codominant)
'ad': 'Q116406', ##wd:Q116406 (autosomal dominant)
'ar': 'Q15729064', ##wd:Q15729064 (autosomal recessive)
'm': 'Q15729075', ##wd:Q15729075 (mitochondrial)
'x': 'Q70899378', #wd:Q2597344 (X-linked inheritance)
'xd': 'Q3731276', ##wd:Q3731276 (X-linked dominant)
'xr': 'Q1988987', ##wd:Q1988987 (X-linked recessive)
'y': 'Q2598585'} ##wd:Q2598585 (Y linkage)
GHR_codes_no_WD = {'n': 'not inherited', 'u': 'unknown pattern'}
from datetime import datetime
import copy
def create_reference(ghr_url):
refStatedIn = wdi_core.WDItemID(value="Q62606821", prop_nr="P248", is_reference=True)
timeStringNow = datetime.now().strftime("+%Y-%m-%dT00:00:00Z")
refRetrieved = wdi_core.WDTime(timeStringNow, prop_nr="P813", is_reference=True)
refURL = wdi_core.WDUrl(value=ghr_url, prop_nr="P854", is_reference=True)
return [refStatedIn, refRetrieved, refURL]
## Limit adding mode of inheritance statements to diseases with known modes of inheritance
inheritance_avail = inher_wd_db.loc[(inher_wd_db['code']!='n')&(inher_wd_db['code']!='u')]
print(len(inheritance_avail))
#### Unit test-- write a single statement
disease_qid = inheritance_avail.iloc[0]['WDID']
inheritance_method = GHR_WD_codes[inheritance_avail.iloc[0]['code']]
ghr_url = inheritance_avail.iloc[0]['url']
reference = create_reference(ghr_url)
statement = [wdi_core.WDItemID(value=inheritance_method, prop_nr="P1199", references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value="P1199",
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
print(disease_qid)
print(item)
item.write(login)
#### test run -- write 10 statements
i=0
for i in tqdm(range(10)):
disease_qid = inheritance_avail.iloc[i]['WDID']
inheritance_method = GHR_WD_codes[inheritance_avail.iloc[i]['code']]
ghr_url = inheritance_avail.iloc[i]['url']
reference = create_reference(ghr_url)
statement = [wdi_core.WDItemID(value=inheritance_method, prop_nr="P1199", references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value="P1199",
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login)
time.sleep(2)
i=i+1
i=0
for i in tqdm(range(len(inheritance_avail))):
disease_qid = inheritance_avail.iloc[i]['WDID']
inheritance_method = GHR_WD_codes[inheritance_avail.iloc[i]['code']]
ghr_url = inheritance_avail.iloc[i]['url']
reference = create_reference(ghr_url)
statement = [wdi_core.WDItemID(value=inheritance_method, prop_nr="P1199", references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value="P1199",
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login)
i=i+1
Explanation: Generate the references and write the data to Wikidata
End of explanation
## Load successfully mapped GHR disease urls
mapped_orpha_urls = wd_orpha_no_dups.merge(no_orphanet_dups,on='Orphanet',how='inner')
print(len(mapped_orpha_urls))
print(mapped_orpha_urls.head(n=5))
## Unit test -- write a statement
disease_qid = mapped_orpha_urls.iloc[1]['WDID']
ghr_url = mapped_orpha_urls.iloc[1]['url']
ghr_id = mapped_orpha_urls.iloc[1]['url'].replace("https://ghr.nlm.nih.gov/condition/","")
reference = create_reference(ghr_url)
url_prop = "P7464"
statement = [wdi_core.WDString(value=ghr_id, prop_nr=url_prop, references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value=url_prop,
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login)
print(ghr_id, disease_qid, ghr_url)
i=0
for i in tqdm(range(len(mapped_orpha_urls))):
disease_qid = mapped_orpha_urls.iloc[i]['WDID']
ghr_url = mapped_orpha_urls.iloc[i]['url']
    ghr_id = mapped_orpha_urls.iloc[i]['url'].replace("https://ghr.nlm.nih.gov/condition/","")
reference = create_reference(ghr_url)
url_prop = "P7464"
statement = [wdi_core.WDString(value=ghr_id, prop_nr=url_prop, references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=disease_qid, data=statement, append_value=url_prop,
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login)
i=i+1
Explanation: Importing the urls to separate property for external linking
This portion is awaiting completion of the property creation and approval process
End of explanation |
15,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dual CRISPR Screen Analysis
Step 2
Step1: Automated Set-Up
Step2: Construct Filtering Functions | Python Code:
g_num_processors = 3
g_trimmed_fastqs_dir = '~/dual_crispr/test_data/test_set_2'
g_filtered_fastqs_dir = '~/dual_crispr/test_outputs/test_set_2'
g_min_trimmed_grna_len = 19
g_max_trimmed_grna_len = 21
g_len_of_seq_to_match = 19
Explanation: Dual CRISPR Screen Analysis
Step 2: Construct Filter
Amanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)
Instructions
To run this notebook reproducibly, follow these steps:
1. Click Kernel > Restart & Clear Output
2. When prompted, click the red Restart & clear all outputs button
3. Fill in the values for your analysis for each of the variables in the Input Parameters section
4. Click Cell > Run All
Input Parameters
End of explanation
import inspect
import ccbb_pyutils.analysis_run_prefixes as ns_runs
import ccbb_pyutils.files_and_paths as ns_files
import ccbb_pyutils.notebook_logging as ns_logs
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
ns_logs.set_stdout_info_logger()
g_trimmed_fastqs_dir = ns_files.expand_path(g_trimmed_fastqs_dir)
g_filtered_fastqs_dir = ns_files.expand_path(ns_runs.check_or_set(g_filtered_fastqs_dir, g_trimmed_fastqs_dir))
print(describe_var_list(['g_trimmed_fastqs_dir', 'g_filtered_fastqs_dir']))
ns_files.verify_or_make_dir(g_filtered_fastqs_dir)
Explanation: Automated Set-Up
End of explanation
import dual_crispr.scaffold_trim as trim
print(inspect.getsource(trim))
import dual_crispr.count_filterer as fltr
print(inspect.getsource(fltr))
import ccbb_pyutils.parallel_process_fastqs as ns_parallel
g_parallel_results = ns_parallel.parallel_process_paired_reads(g_trimmed_fastqs_dir,
trim.get_trimmed_suffix(trim.TrimType.FIVE_THREE), g_num_processors,
fltr.filter_pair_by_len, [g_min_trimmed_grna_len, g_max_trimmed_grna_len,
g_len_of_seq_to_match, g_filtered_fastqs_dir])
print(ns_parallel.concatenate_parallel_results(g_parallel_results))
print(ns_files.check_file_presence(g_trimmed_fastqs_dir, "", trim.get_trimmed_suffix(trim.TrimType.FIVE_THREE),
check_failure_msg="Construct filtering failed to produce filtered file(s)."))
Explanation: Construct Filtering Functions
End of explanation |
15,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 1 - Getting Started
Step1: Python Summary
Further information
More information is usually available with the help function. Using ? brings up the same information in ipython.
Using the dir function lists all the options available from a variable.
help(np)
np?
dir(np)
Variables
A variable is simply a name for something. One of the simplest tasks is printing the value of a variable.
Printing can be customized using the format method on strings.
Step2: Types
A number of different types are available as part of the standard library. The following links to the documentation provide a summary.
https
Step3: Conditionals
https
Step4: Loops
https
Step5: Functions
https
Step6: Numpy
http
Step7: Exercises
Step8: Print the variable a in all uppercase
Print the variable a with every other letter in uppercase
Print the variable a in reverse, i.e. god yzal ...
Print the variable a with the words reversed, i.e. ehT kciuq ...
Print the variable b in scientific notation with 4 decimal places
Step9: Print the items in people as comma separated values
Sort people so that they are ordered by age, and print
Sort people so that they are ordered by age first, and then their names, i.e. Bob and Charlie should be next to each other due to their ages with Bob first due to his name.
Write a function that returns the first n prime numbers
Given a list of coordinates calculate the distance using the (Euclidean distance)[https
Step10: Print the standard deviation of each row in a numpy array
Print only the values greater than 90 in a numpy array
From a numpy array display the values in each row in a separate plot (the subplots method may be useful) | Python Code:
import numpy as np
print("Numpy:", np.__version__)
Explanation: Week 1 - Getting Started
End of explanation
location = 'Bethesda'
zip_code = 20892
elevation = 71.9
print("We're in", location, "zip code", zip_code, ", ", elevation, "m above sea level")
print("We're in " + location + " zip code " + str(zip_code) + ", " + str(elevation) + "m above sea level")
print("We're in {0} zip code {1}, {2}m above sea level".format(location, zip_code, elevation))
print("We're in {0} zip code {1}, {2:.2e}m above sea level".format(location, zip_code, elevation))
Explanation: Python Summary
Further information
More information is usually available with the help function. Using ? brings up the same information in ipython.
Using the dir function lists all the options available from a variable.
help(np)
np?
dir(np)
Variables
A variable is simply a name for something. One of the simplest tasks is printing the value of a variable.
Printing can be customized using the format method on strings.
End of explanation
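For comparison, on Python 3.6+ the same output can also be produced with an f-string:
print(f"We're in {location} zip code {zip_code}, {elevation:.2e}m above sea level")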
# Sequences
# Lists
l = [1,2,3,4,4]
print("List:", l, len(l), 1 in l)
# Tuples
t = (1,2,3,4,4)
print("Tuple:", t, len(t), 1 in t)
# Sets
s = set([1,2,3,4,4])
print("Set:", s, len(s), 1 in s)
# Dictionaries
# Dictionaries map hashable keys to arbitrary objects
d = {'a': 1, 'b': 2, 3: 's', 2.5: 't'}
print("Dictionary:", d, len(d), 'a' in d)
Explanation: Types
A number of different types are available as part of the standard library. The following links to the documentation provide a summary.
https://docs.python.org/3.5/library/stdtypes.html
https://docs.python.org/3.5/tutorial/datastructures.html
Other types are available from other packages and can be created to support special situations.
A variety of different methods are available depending on the type.
End of explanation
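A few of the many per-type methods, for illustration (these operate on the objects defined above):
print(l.count(4), t.index(3), s.union({5, 6}), d.get('missing', 'default'))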
import random
if random.random() < 0.5:
print("Should be printed 50% of the time")
elif random.random() < 0.5:
print("Should be primted 25% of the time")
else:
print("Should be printed 25% of the time")
Explanation: Conditionals
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
for i in ['a', 'b', 'c', 'd']:
print(i)
else:
print('Else')
for i in ['a', 'b', 'c', 'd']:
if i == 'b':
continue
elif i == 'd':
break
print(i)
else:
print('Else')
Explanation: Loops
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
def is_even(n):
return not n % 2
print(is_even(1), is_even(2))
def first_n_squared_numbers(n=5):
return [i**2 for i in range(1,n+1)]
print(first_n_squared_numbers())
def next_fibonacci(status=[]):
if len(status) < 2:
status.append(1)
return 1
status.append(status[-2] + status[-1])
return status[-1]
print(next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci())
def accepts_anything(*args, **kwargs):
for a in args:
print(a)
for k in kwargs:
print(k, kwargs[k])
accepts_anything(1,2,3,4, a=1, b=2, c=3)
# For quick and simple functions a lambda expression can be a useful approach.
# Standard functions are always a valid alternative and often make code clearer.
f = lambda x: x**2
print(f(5))
people = [{'name': 'Alice', 'age': 30},
{'name': 'Bob', 'age': 35},
{'name': 'Charlie', 'age': 35},
{'name': 'Dennis', 'age': 25}]
print(people)
people.sort(key=lambda x: x['age'])
print(people)
Explanation: Functions
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
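A side note from the editor: next_fibonacci works because default argument values are evaluated once at definition time, so the same status list is shared across calls. A tiny demonstration of that behaviour:
def appender(item, acc=[]):
    acc.append(item)
    return acc
print(appender(1), appender(2))  # [1, 2] [1, 2] -- both calls return the same shared list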
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
print(a)
print(a[1:,1:])
a = a + 2
print(a)
a = a + np.array([1,2,3])
print(a)
a = a + np.array([[10],[20],[30]])
print(a)
print(a.mean(), a.mean(axis=0), a.mean(axis=1))
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 3*2*np.pi, 500)
plt.plot(x, np.sin(x))
plt.show()
Explanation: Numpy
http://docs.scipy.org/doc/numpy/reference/
End of explanation
a = "The quick brown fox jumps over the lazy dog"
b = 1234567890.0
## Print the variable `a` in all uppercase
print(a.upper())
## Print the variable `a` with every other letter in uppercase
def capEveryOtherLetter(str):
ans = ""
i = True # capitalize
for char in str:
if i == True:
ans += char.upper()
else:
ans += char.lower()
if char != ' ': # if character is not a space
i = not i # toggle i between False/True
return ans
print(capEveryOtherLetter(a))
## Print the variable `a` in reverse, i.e. god yzal ...
def reverse(str):
rev = ""
for char in str:
rev = char + rev
return rev
print(reverse(a))
## Print the variable `a` with the words reversed, i.e. ehT kciuq ...
def reverseWords(str):
words = str.split()
for i in range(len(words)):
words[i] = reverse(words[i])
rev = " ".join(words)
return rev
print(reverseWords(a))
## Print the variable `b` in scientific notation with 4 decimal places
## In python, you have floats and decimals that can be rounded.
## If you care about the accuracy of rounding, use decimal type.
## If you use floats, you will have issues with accuracy.
## Why does ans output E+09 and ans2 E+9?
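## (Editor's note) The '%' operator converts the Decimal to a float first, and float
## formatting pads the exponent to at least two digits (E+09), while Decimal's own
## __format__ does not pad the exponent (E+9).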
from decimal import Decimal
ans = '%.4E' % Decimal(b)
ans2 = "{:.4E}".format(Decimal(b))
print(ans, ans2)
Explanation: Exercises
End of explanation
people = [{'name': 'Bob', 'age': 35},
{'name': 'Alice', 'age': 30},
{'name': 'Eve', 'age': 20},
{'name': 'Gail', 'age': 30},
{'name': 'Dennis', 'age': 25},
{'name': 'Charlie', 'age': 35},
{'name': 'Fred', 'age': 25},]
## Print the items in people as comma-separated values.
## uses map with str conversion function, as join() expects str, not dict
peopleCommaSeparated = ",".join(map(str, people))
print(peopleCommaSeparated)
## Sort people so that they are ordered by age, and print.
## sort() only works with lists, whereas sorted() accepts any iterable
peopleSortedByAge = sorted(people, key = lambda person: person['age'])
print(peopleSortedByAge)
## Sort people so that they are ordered by age first, and then their names,
## i.e., Bob and Charlie should be next to each other due to their ages
## with Bob first due to his name.
peopleSortedByAgeAndName = sorted(people, key = lambda person: (person['age'], person['name']))
print(peopleSortedByAgeAndName)
Explanation: Print the variable a in all uppercase
Print the variable a with every other letter in uppercase
Print the variable a in reverse, i.e. god yzal ...
Print the variable a with the words reversed, i.e. ehT kciuq ...
Print the variable b in scientific notation with 4 decimal places
End of explanation
coords = [(0,0), (10,5), (10,10), (5,10), (3,3), (3,7), (12,3), (10,11)]
Explanation: Print the items in people as comma separated values
Sort people so that they are ordered by age, and print
Sort people so that they are ordered by age first, and then their names, i.e. Bob and Charlie should be next to each other due to their ages with Bob first due to his name.
Write a function that returns the first n prime numbers
Given a list of coordinates calculate the distance using the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance)
Given a list of coordinates arrange them in such a way that the distance traveled is minimized (the itertools module may be useful).
End of explanation
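Below is one possible sketch for the primes, distance, and ordering exercises listed above (math.dist assumes Python 3.8+, and the brute-force permutation search is only practical for a handful of points):
import itertools
import math

def first_n_primes(n):
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def total_distance(path):
    # sum of straight-line distances between consecutive points, in list order
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

print(first_n_primes(10))
print(total_distance(coords))
best_order = min(itertools.permutations(coords), key=total_distance)
print(best_order, total_distance(best_order))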
import numpy as np
np.random.seed(0)
a = np.random.randint(0, 100, size=(10,20))
print(a, "\n")
## Print the standard deviation (σ) of each row in a numpy array
## in a 2D array, columns are axis = 0, and rows are axis = 1
row_std = np.std(a, axis=1)
print(row_std, "\n")
## Print only the values greater than 90 in a numpy array
truncated = a[a > 90]
print(truncated, "\n")
## From a numpy array, display the values in each row in a separate plot
## (the subplots method) may be useful
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=a.shape[0], ncols=1, figsize=(8, 18), sharex=True)
for row, ax in zip(a, axes):
    ax.plot(row)
plt.show()
Explanation: Print the standard deviation of each row in a numpy array
Print only the values greater than 90 in a numpy array
From a numpy array display the values in each row in a separate plot (the subplots method may be useful)
End of explanation |
15,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
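As a purely hypothetical illustration (no actual SANDBOX-3 metadata is implied), a free-text STRING property like this one would be filled in before publishing:
# DOC.set_value("<short overview text for this model component>")  # hypothetical placeholder value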
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
15,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steps to use the TF Experiment APIs
Define dataset metadata
Define data input function to read the data from .tfrecord files + feature processing
Create TF feature columns based on metadata + extended feature columns
Define a model function with the required feature columns, EstimatorSpecs, & parameters
Run an Experiment with learn_runner to train, evaluate, and export the model
Evaluate the model using test data
Perform predictions & serving the exported model (using CSV/JSON input)
Step1: 1. Define Dataset Metadata
tf.example feature names and defaults
Numeric and categorical feature names
Target feature name
Target feature labels
Unused features
Step2: 2. Define Data Input Function
Input csv files name pattern
Use TF Dataset APIs to read and process the data
Parse CSV lines to feature tensors
Apply feature processing
Return (features, target) tensors
a. Parsing and preprocessing logic
Step3: b. Data pipeline input function
Step4: 3. Define Feature Columns
Step5: 4. Define Model Function
Step6: 6. Run Experiment
a. Define experiment function
Step7: b. Set HParam and RunConfig
Step8: c. Define JSON serving function
Step9: d. Run the Experiment via learn_runner
Step10: 6. Evaluate the Model
Step11: 7. Prediction
Step12: Serving Exported Model | Python Code:
MODEL_NAME = 'class-model-02'
TRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'
VALID_DATA_FILES_PATTERN = 'data/valid-*.csv'
TEST_DATA_FILES_PATTERN = 'data/test-*.csv'
RESUME_TRAINING = False
PROCESS_FEATURES = True
EXTEND_FEATURE_COLUMNS = True
MULTI_THREADING = True
Explanation: Steps to use the TF Experiment APIs
Define dataset metadata
Define data input function to read the data from .tfrecord files + feature processing
Create TF feature columns based on metadata + extended feature columns
Define a model function with the required feature columns, EstimatorSpecs, & parameters
Run an Experiment with learn_runner to train, evaluate, and export the model
Evaluate the model using test data
Perform predictions & serving the exported model (using CSV/JSON input)
End of explanation
HEADER = ['key','x','y','alpha','beta','target']
HEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], ['NA']]
NUMERIC_FEATURE_NAMES = ['x', 'y']
CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'target'
TARGET_LABELS = ['positive', 'negative']
UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})
print("Header: {}".format(HEADER))
print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES))
print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES))
print("Target: {} - labels: {}".format(TARGET_NAME, TARGET_LABELS))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
Explanation: 1. Define Dataset Metadata
tf.example feature names and defaults
Numeric and categorical feature names
Target feature name
Target feature labels
Unused features
End of explanation
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(TARGET_NAME)
return features, target
def process_features(features):
features["x_2"] = tf.square(features['x'])
features["y_2"] = tf.square(features['y'])
features["xy"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']
features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))
return features
Explanation: 2. Define Data Input Function
Input csv files name pattern
Use TF Dataset APIs to read and process the data
Parse CSV lines to feature tensors
Apply feature processing
Return (features, target) tensors
a. Parsing and preprocessing logic
End of explanation
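A quick, optional check of the parsing logic on one hard-coded row (editor's sketch; assumes the TF 1.x session-based setup already imported for this notebook):
sample_row = tf.constant('1,0.5,-1.2,ax01,bx02,positive')
sample_features, sample_target = parse_csv_row(sample_row)
with tf.Session() as sess:
    print(sess.run([sample_features, sample_target]))  # unused 'key' is dropped, target comes back as a string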
def parse_label_column(label_string_tensor):
table = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS))
return table.lookup(label_string_tensor)
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))
if PROCESS_FEATURES:
dataset = dataset.map(lambda features, target: (process_features(features), target))
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, parse_label_column(target)
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
Explanation: b. Data pipeline input function
End of explanation
def extend_feature_columns(feature_columns, hparams):
num_buckets = hparams.num_buckets
embedding_size = hparams.embedding_size
buckets = np.linspace(-3, 3, num_buckets).tolist()
alpha_X_beta = tf.feature_column.crossed_column(
[feature_columns['alpha'], feature_columns['beta']], 4)
x_bucketized = tf.feature_column.bucketized_column(
feature_columns['x'], boundaries=buckets)
y_bucketized = tf.feature_column.bucketized_column(
feature_columns['y'], boundaries=buckets)
x_bucketized_X_y_bucketized = tf.feature_column.crossed_column(
[x_bucketized, y_bucketized], num_buckets**2)
x_bucketized_X_y_bucketized_embedded = tf.feature_column.embedding_column(
x_bucketized_X_y_bucketized, dimension=embedding_size)
feature_columns['alpha_X_beta'] = alpha_X_beta
feature_columns['x_bucketized_X_y_bucketized'] = x_bucketized_X_y_bucketized
feature_columns['x_bucketized_X_y_bucketized_embedded'] = x_bucketized_X_y_bucketized_embedded
return feature_columns
def get_feature_columns(hparams):
CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']
all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy()
if PROCESS_FEATURES:
all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES
numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
for feature_name in all_numeric_feature_names}
categorical_column_with_vocabulary = \
{item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])
for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}
feature_columns = {}
if numeric_columns is not None:
feature_columns.update(numeric_columns)
if categorical_column_with_vocabulary is not None:
feature_columns.update(categorical_column_with_vocabulary)
if EXTEND_FEATURE_COLUMNS:
feature_columns = extend_feature_columns(feature_columns, hparams)
return feature_columns
feature_columns = get_feature_columns(tf.contrib.training.HParams(num_buckets=5,embedding_size=3))
print("Feature Columns: {}".format(feature_columns))
Explanation: 3. Define Feature Columns
End of explanation
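For intuition (assuming numpy is available as np from the setup imports), with num_buckets=5 the bucket boundaries used by the bucketized/crossed columns span [-3, 3] evenly:
print(np.linspace(-3, 3, 5).tolist())  # [-3.0, -1.5, 0.0, 1.5, 3.0]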
def get_input_layer_feature_columns(hparams):
feature_columns = list(get_feature_columns(hparams).values())
dense_columns = list(
filter(lambda column: isinstance(column, feature_column._NumericColumn) |
isinstance(column, feature_column._EmbeddingColumn),
feature_columns
)
)
categorical_columns = list(
filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |
isinstance(column, feature_column._BucketizedColumn),
feature_columns)
)
indicator_columns = list(
map(lambda column: tf.feature_column.indicator_column(column),
categorical_columns)
)
return dense_columns+indicator_columns
def classification_model_fn(features, labels, mode, params):
hidden_units = params.hidden_units
output_layer_size = len(TARGET_LABELS)
feature_columns = get_input_layer_feature_columns(params)  # use the params passed to the model_fn rather than the global hparams
# Create the input layers from the feature columns
input_layer = tf.feature_column.input_layer(features= features,
feature_columns=feature_columns)
# Create a fully-connected layer-stack based on the hidden_units in the params
hidden_layers = tf.contrib.layers.stack(inputs= input_layer,
layer= tf.contrib.layers.fully_connected,
stack_args= hidden_units)
# Connect the output layer (logits) to the hidden layer (no activation fn)
logits = tf.layers.dense(inputs=hidden_layers,
units=output_layer_size)
# Reshape output layer to 1-dim Tensor to return predictions
output = tf.squeeze(logits)
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
probabilities = tf.nn.softmax(logits)
predicted_indices = tf.argmax(probabilities, 1)
# Convert predicted_indices back into strings
predictions = {
'class': tf.gather(TARGET_LABELS, predicted_indices),
'probabilities': probabilities
}
export_outputs = {
'prediction': tf.estimator.export.PredictOutput(predictions)
}
# Provide an estimator spec for `ModeKeys.PREDICT` modes.
return tf.estimator.EstimatorSpec(mode,
predictions=predictions,
export_outputs=export_outputs)
# Calculate loss using softmax cross entropy
loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels))
tf.summary.scalar('loss', loss)
if mode == tf.estimator.ModeKeys.TRAIN:
# Create Optimiser
optimizer = tf.train.AdamOptimizer()
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Provide an estimator spec for `ModeKeys.TRAIN` modes.
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
if mode == tf.estimator.ModeKeys.EVAL:
probabilities = tf.nn.softmax(logits)
predicted_indices = tf.argmax(probabilities, 1)
# Return accuracy and area under ROC curve metrics
labels_one_hot = tf.one_hot(
labels,
depth=len(TARGET_LABELS),
on_value=True,
off_value=False,
dtype=tf.bool
)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(labels, predicted_indices),
'auroc': tf.metrics.auc(labels_one_hot, probabilities)
}
# Provide an estimator spec for `ModeKeys.EVAL` modes.
return tf.estimator.EstimatorSpec(mode,
loss=loss,
eval_metric_ops=eval_metric_ops)
def create_estimator(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=classification_model_fn,
params=hparams,
config=run_config)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
Explanation: 4. Define Model Function
End of explanation
def generate_experiment_fn(**experiment_args):
def _experiment_fn(run_config, hparams):
train_input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode = tf.estimator.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
)
eval_input_fn = lambda: csv_input_fn(
VALID_DATA_FILES_PATTERN,
mode=tf.estimator.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
)
estimator = create_estimator(run_config, hparams)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fn,
eval_steps=None,
**experiment_args
)
return _experiment_fn
Explanation: 6. Run Experiment
a. Define experiment function
End of explanation
TRAIN_SIZE = 12000
NUM_EPOCHS = 1 #1000
BATCH_SIZE = 500
NUM_EVAL = 1 #10
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
hidden_units=[16, 12, 8],
num_buckets = 6,
embedding_size = 3,
dropout_prob = 0.001)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
Explanation: b. Set HParam and RunConfig
End of explanation
def json_serving_input_fn():
receiver_tensor = {}
for feature_name in FEATURE_NAMES:
dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string
receiver_tensor[feature_name] = tf.placeholder(shape=[None], dtype=dtype)
if PROCESS_FEATURES:
features = process_features(receiver_tensor)
return tf.estimator.export.ServingInputReceiver(
features, receiver_tensor)
Explanation: c. Define JSON serving function
End of explanation
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
learn_runner.run(
experiment_fn=generate_experiment_fn(
export_strategies=[
make_export_strategy(
json_serving_input_fn,
exports_to_keep=1,
as_text=True
)
]
),
run_config=run_config,
schedule="train_and_evaluate",
hparams=hparams
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
Explanation: d. Run the Experiment via learn_runner
End of explanation
TRAIN_SIZE = 12000
VALID_SIZE = 3000
TEST_SIZE = 5000
train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE)
valid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= VALID_SIZE)
test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TEST_SIZE)
estimator = create_estimator(run_config, hparams)
train_results = estimator.evaluate(input_fn=train_input_fn, steps=1)
print()
print("######################################################################################")
print("# Train Measures: {}".format(train_results))
print("######################################################################################")
valid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)
print()
print("######################################################################################")
print("# Valid Measures: {}".format(valid_results))
print("######################################################################################")
test_results = estimator.evaluate(input_fn=test_input_fn, steps=1)
print()
print("######################################################################################")
print("# Test Measures: {}".format(test_results))
print("######################################################################################")
Explanation: 6. Evaluate the Model
End of explanation
import itertools
predict_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.PREDICT,
batch_size= 5)
predictions = list(itertools.islice(estimator.predict(input_fn=predict_input_fn),5))
print("")
print("* Predicted Classes: {}".format(list(map(lambda item: item["class"]
,predictions))))
print("* Predicted Probabilities: {}".format(list(map(lambda item: list(item["probabilities"])
,predictions))))
Explanation: 7. Prediction
End of explanation
import os
export_dir = model_dir +"/export/Servo/"
saved_model_dir = export_dir + "/" + os.listdir(path=export_dir)[-1]
print(saved_model_dir)
print("")
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="prediction"
)
output = predictor_fn(
{
'x': [0.5, -1],
'y': [1, 0.5],
'alpha': ['ax01', 'ax01'],
'beta': ['bx02', 'bx01']
}
)
print(output)
Explanation: Serving Exported Model
End of explanation |
15,880 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Having a pandas data frame as follows: | Problem:
import pandas as pd
df = pd.DataFrame({'a':[1,1,1,2,2,2,3,3,3], 'b':[12,13,23,22,23,24,30,35,55]})
import numpy as np
def g(df):
softmax = []
min_max = []
for i in range(len(df)):
Min = np.inf
Max = -np.inf
exp_Sum = 0
for j in range(len(df)):
if df.loc[i, 'a'] == df.loc[j, 'a']:
Min = min(Min, df.loc[j, 'b'])
Max = max(Max, df.loc[j, 'b'])
exp_Sum += np.exp(df.loc[j, 'b'])
softmax.append(np.exp(df.loc[i, 'b']) / exp_Sum)
min_max.append((df.loc[i, 'b'] - Min) / (Max - Min))
df['softmax'] = softmax
df['min-max'] = min_max
return df
df = g(df.copy()) |
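For reference, the same per-group softmax and min-max columns can be computed without the nested loops. This is only a vectorized sketch, assuming the column names 'a' and 'b' used above:
import numpy as np
import pandas as pd

def g_vectorized(df):
    # softmax of 'b' within each 'a' group: exp(b) divided by the group-wise sum of exp(b)
    exp_b = np.exp(df['b'])
    df['softmax'] = exp_b / exp_b.groupby(df['a']).transform('sum')
    # min-max scaling of 'b' within each 'a' group
    grp = df.groupby('a')['b']
    df['min-max'] = (df['b'] - grp.transform('min')) / (grp.transform('max') - grp.transform('min'))
    return df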
15,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="center"><h1>Vector Add on GPU</h1></div>
Vector Add
In the world of computing, the addition of two vectors is the standard "Hello World".
Given two sets of scalar data, such as the image above, we want to compute the sum, element by element.
We start by implementing the algorithm in plain C#.
Edit the file 01-vector-add.cs and implement this algorithm in plain C# until it displays OK
If you get stuck, you can refer to the solution.
Step1: Introduce Parallelism
As we can see in the solution, a plain scalar iterative approach only uses one thread, while modern CPUs have typically 4 cores and 8 threads.
Fortunately, .Net and C# provide an intuitive construct to leverage parallelism
Step2: Run Code on the GPU
Using Hybridizer to run the above code on a GPU is quite straightforward. We need to
- Decorate methods we want to run on the GPU
This is done by adding [EntryPoint] attribute on methods of interest.
- "Wrap" current object into a dynamic object able to dispatch code on the GPU
This is done by the following boilerplate code | Python Code:
!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/vectoradd.exe -run
Explanation: <div align="center"><h1>Vector Add on GPU</h1></div>
Vector Add
In the world of computing, the addition of two vectors is the standard "Hello World".
Given two sets of scalar data, such as the image above, we want to compute the sum, element by element.
We start by implementing the algorithm in plain C#.
Edit the file 01-vector-add.cs and implement this algorithm in plain C# until it displays OK
If you get stuck, you can refer to the solution.
End of explanation
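For reference only, the element-by-element sum the exercise asks for can be sketched in a few lines of NumPy; the array names and sizes below are made up for illustration and are not part of the C# lab:
import numpy as np

a = np.arange(1024, dtype=np.float32)   # hypothetical first input vector
b = np.ones(1024, dtype=np.float32)     # hypothetical second input vector
expected = a + b                        # element-by-element sum the kernel should reproduce
print(expected[:5])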
!hybridizer-cuda ./01-vector-add/01-vector-add.cs -o ./01-vector-add/parallel-vectoradd.exe -run
Explanation: Introduce Parallelism
As we can see in the solution, a plain scalar iterative approach only uses one thread, while modern CPUs have typically 4 cores and 8 threads.
Fortunately, .Net and C# provide an intuitive construct to leverage parallelism : Parallel.For.
Modify 01-vector-add.cs to distribute the work among multiple threads.
If you get stuck, you can refer to the solution.
End of explanation
!hybridizer-cuda ./02-gpu-vector-add/02-gpu-vector-add.cs -o ./02-gpu-vector-add/gpu-vectoradd.exe -run
Explanation: Run Code on the GPU
Using Hybridizer to run the above code on a GPU is quite straightforward. We need to
- Decorate methods we want to run on the GPU
This is done by adding [EntryPoint] attribute on methods of interest.
- "Wrap" current object into a dynamic object able to dispatch code on the GPU
This is done by the following boilerplate code:
dynamic wrapped = HybRunner.Cuda().Wrap(new Program());
wrapped.mymethod(...)
wrapped object has the same methods signatures (static or instance) as the current object, but dispatches calls to GPU.
Modify the 02-vector-add.cs so the Add method runs on a GPU.
If you get stuck, you can refer to the solution.
End of explanation |
15,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
x_prime = map(lambda x1: 0.1 + ((x1*(0.9-0.1))/(255)), x)
return np.array(list(x_prime))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
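Note that the solution above rescales pixel values into [0.1, 0.9]. A simpler sketch that maps the 0-255 values onto the full 0-to-1 range described in the text would be:
import numpy as np

def normalize_01(x):
    # x holds 8-bit pixel values in 0..255; dividing by 255 keeps the shape and lands in [0, 1]
    return np.array(x) / 255.0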
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
# one_hot_encoded_labels = np.zeros((len(x), max(x)+1))
# one_hot_encoded_labels[np.arange(len(x)),x] = 1
# return one_hot_encoded_labels
return np.eye(10)[x]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
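An alternative sketch that keeps the encoding map outside the function, as the hint suggests; it assumes scikit-learn is available in your environment:
from sklearn import preprocessing

# fit once on the full label range so every call uses the same encoding
label_binarizer = preprocessing.LabelBinarizer()
label_binarizer.fit(range(10))

def one_hot_encode_lb(x):
    return label_binarizer.transform(x)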
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
# print(x_tensor.shape)
# print(conv_ksize)
# print(conv_num_outputs)
color_channels = x_tensor.shape[3].value
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], color_channels, conv_num_outputs], mean=0, stddev=0.1))
biases = tf.Variable(tf.zeros(conv_num_outputs))
layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
layer = tf.add(layer, biases)
layer = tf.nn.relu(layer)
layer = tf.nn.max_pool(layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
# print(x_tensor)
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#Step 1: create the weights and bias
size = x_tensor.shape[1].value
weights = tf.Variable(tf.truncated_normal([size, num_outputs], mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
#Step 2: apply matmul
layer = tf.matmul(x_tensor, weights)
#Step 3: add bias
layer = tf.nn.bias_add(layer, bias)
#Step 4: apply relu
layer = tf.nn.relu(layer)
return layer
# return tf.layers.dense(flatten(x_tensor), num_outputs, activation=tf.nn.relu)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
#Step 1: create the weights and bias
size = x_tensor.shape[1].value
weights = tf.Variable(tf.truncated_normal([size, num_outputs], mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
#Step 2: apply matmul
layer = tf.matmul(x_tensor, weights)
#Step 3: add bias
layer = tf.nn.bias_add(layer, bias)
return layer
# return tf.layers.dense(flatten(x_tensor), num_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv_ksize = [5, 5]
conv_strides = [1, 1]
pool_ksize = [2, 2]
pool_strides = [1, 1]
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_1 = conv2d_maxpool(x, 16, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_2 = conv2d_maxpool(layer_1, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_3 = conv2d_maxpool(layer_2, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat_layer = flatten(layer_3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_connected = fully_conn(flat_layer, 64)
fully_connected = tf.nn.dropout(fully_connected, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_connected, 10)
# TODO: return output
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, validation_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 64
keep_probability = 0.8
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
15,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GitHub - Data Extraction
The file ../data/RPackage-Repositories-150101-150601.csv contains a list of GitHub repositories that are candidates to store a package related to R. Those candidates were collected from the activity on GitHub between 15-01 and 15-06. Those candidates all contain a DESCRIPTION file at the root of the repository.
We git clone-ed each of those repositories. This notebook will parse those git repositories and extract the DESCRIPTION file of each commit.
Step1: We will make use of the following commands
Step2: We will retrieve a lot of data, we can benefit from IPython's parallel computation tool.
To use this notebook, you need either to configure your IPController or to start a cluster of IPython nodes, using ipcluster start -n 4 for example. See https | Python Code:
import pandas
from datetime import date
Explanation: GitHub - Data Extraction
The file ../data/RPackage-Repositories-150101-150601.csv contains a list of GitHub repositories that are candidates to store a package related to R. Those candidates were collected from the activity on GitHub between 15-01 and 15-06. Those candidates all contain a DESCRIPTION file at the root of the repository.
We git clone-ed each of those repositories. This notebook will parse those git repositories and extract the DESCRIPTION file of each commit.
End of explanation
github = pandas.DataFrame.from_csv('../data/RPackage-Repositories-150101-150601.csv')
repositories = github[['owner.login', 'name']].rename(columns={'owner.login': 'owner', 'name': 'repositories'})
FILENAME = '../data/github-raw-150601.csv'
# Root of the directory where the repositories were collected
GIT_DIR = '/data/github/'
Explanation: We will make use of the following commands:
- git clone <url> <path> where <url> is the url of the repository and <path> is the location to store the repository.
- git log --follow --format="%H/%ci" <path> where <path> will be DESCRIPTION. The output of this command is a list of <commit> / <date> for this file.
- git show <commit>:<path> where <commit> is the considered commit, and <path> will be DESCRIPTION. This command outputs the content of the file at the given commit.
End of explanation
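As a small illustration of how that %H/%ci output is consumed by the code below, here is a sketch with a made-up sample string:
sample_log = (
    "a1b2c3d4/2015-05-30 10:21:03 +0200\n"
    "e5f6a7b8/2015-04-02 08:10:55 +0200\n"
)
pairs = [tuple(part.strip() for part in line.split('/', 1))
         for line in sample_log.split('\n') if line.strip()]
print(pairs)  # -> list of (commit_sha, date) tuples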
from IPython import parallel
clients = parallel.Client()
clients.block = False # asynchronous computations
print 'Clients:', str(clients.ids)
def get_data_from((owner, repository)):
# Move to target directory
try:
os.chdir(os.path.join(GIT_DIR, owner, repository))
except OSError as e:
# Should happen when directory does not exist
return []
data_list = []
# Get commits for DESCRIPTION
try:
commits = subprocess.check_output(['git', 'log', '--format=%H/%ci', '--', 'DESCRIPTION'])
except subprocess.CalledProcessError as e:
# Should not happen!?
raise Exception(owner + ' ' + repository + '/ log : ' + e.output)
for commit in [x for x in commits.split('\n') if len(x.strip())!=0]:
commit_sha, date = map(lambda x: x.strip(), commit.split('/'))
# Get file content
try:
content = subprocess.check_output(['git', 'show', '{id}:{path}'.format(id=commit_sha, path='DESCRIPTION')])
except subprocess.CalledProcessError as e:
# Could happen when DESCRIPTION was added in this commit. Silently ignore
continue
try:
metadata = deb822.Deb822(content.split('\n'))
except Exception as e:
# I don't know which exceptions Deb822 may throw!
continue # Go further
data = {}
for md in ['Package', 'Version', 'License', 'Imports', 'Suggests', 'Depends', 'Author', 'Authors', 'Maintainer']:
data[md] = metadata.get(md, '')
data['CommitDate'] = date
data['Owner'] = owner
data['Repository'] = repository
data_list.append(data)
# Return to root directory
os.chdir(GIT_DIR)
return data_list
data = []
clients[:].execute('import subprocess, os')
clients[:].execute('from debian import deb822')
clients[:]['GIT_DIR'] = GIT_DIR
balanced = clients.load_balanced_view()
items = [(owner, repo) for idx, (owner, repo) in repositories.iterrows()]
print len(items), 'items'
res = balanced.map(get_data_from, items, ordered=False, timeout=15)
import time
while not res.ready():
time.sleep(5)
print res.progress, ' ',
for result in res.result:
data.extend(result)
df = pandas.DataFrame.from_records(data)
df.to_csv(FILENAME, encoding='utf-8')
print len(df), 'items'
print len(df.drop_duplicates(['Package'])), 'packages'
print len(df.drop_duplicates(['Owner', 'Repository'])), 'repositories'
print len(df.drop_duplicates(['Package', 'Version'])), 'pairs (package, version)'
df
Explanation: We will retrieve a lot of data, we can benefit from IPython's parallel computation tool.
To use this notebook, you need either to configure your IPController or to start a cluster of IPython nodes, using ipcluster start -n 4 for example. See https://ipython.org/ipython-doc/dev/parallel/parallel_process.html for more information.
It seems that most recent versions of IPython Notebook can directly start cluster from the web interface, under the Cluster tab.
End of explanation |
15,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Once you've trained a model, you might like to get more information about how it performs on the various targets you asked it to predict.
To run this tutorial, you'll need to either download the pre-trained model from https
Step1: As long as your HDF5 file has test data set aside, run
Step2: In the output directory, you'll find a table specifying the AUC for each target.
Step3: We can also make receiver operating characteristic curves for each target with the following command. | Python Code:
model_file = '../data/models/pretrained_model.th'
seqs_file = '../data/encode_roadmap.h5'
Explanation: Once you've trained a model, you might like to get more information about how it performs on the various targets you asked it to predict.
To run this tutorial, you'll need to either download the pre-trained model from https://www.dropbox.com/s/rguytuztemctkf8/pretrained_model.th.gz and preprocess the consortium data, or just substitute your own files here:
End of explanation
import subprocess
cmd = 'basset_test.lua %s %s test_out' % (model_file, seqs_file)
subprocess.call(cmd, shell=True)
Explanation: As long as your HDF5 file has test data set aside, run:
End of explanation
!head test_eg/aucs.txt
Explanation: In the output directory, you'll find a table specifying the AUC for each target.
End of explanation
targets_file = '../data/sample_beds.txt'
cmd = 'plot_roc.py -t %s test_out' % (targets_file)
subprocess.call(cmd, shell=True)
# actual file is test_out/roc1.pdf
from IPython.display import Image
Image(filename='test_eg/roc1.png')
Explanation: We can also make receiver operating characteristic curves for each target with the following command.
End of explanation |
15,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dual CRISPR Screen Analysis
Construct Scaffold Trimming
Amanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)
Instructions
To run this notebook reproducibly, follow these steps
Step1: CCBB Library Imports
Step2: Automated Set-Up
Step3: Info Logging Pass-Through
Step4: Scaffold Trimming Functions
Step5: Gzipped FASTQ Filenames
Step6: FASTQ Gunzip Execution
Step7: FASTQ Filenames
Step8: Scaffold Trim Execution
Step9: Trimmed FASTQ Filenames | Python Code:
g_num_processors = 3
g_fastqs_dir = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/raw/20160504_D00611_0275_AHMM2JBCXX'
g_trimmed_fastqs_dir = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/interim/20160504_D00611_0275_AHMM2JBCXX'
g_full_5p_r1 = 'TATATATCTTGTGGAAAGGACGAAACACCG'
g_full_5p_r2 = 'CCTTATTTTAACTTGCTATTTCTAGCTCTAAAAC'
g_full_3p_r1 = 'GTTTCAGAGCTATGCTGGAAACTGCATAGCAAGTTGAAATAAGGCTAGTCCGTTATCAACTTGAAAAAGTGGCACCGAGTCGGTGCTTTTTTGTACTGAG'
g_full_3p_r2 = 'CAAACAAGGCTTTTCTCCAAGGGATATTTATAGTCTCAAAACACACAATTACTTTACAGTTAGGGTGAGTTTCCTTTTGTGCTGTTTTTTAAAATA'
g_code_location = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python'
Explanation: Dual CRISPR Screen Analysis
Construct Scaffold Trimming
Amanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)
Instructions
To run this notebook reproducibly, follow these steps:
1. Click Kernel > Restart & Clear Output
2. When prompted, click the red Restart & clear all outputs button
3. Fill in the values for your analysis for each of the variables in the Input Parameters section
4. Click Cell > Run All
<a name = "input-parameters"></a>
Input Parameters
End of explanation
import sys
sys.path.append(g_code_location)
Explanation: CCBB Library Imports
End of explanation
# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
from ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp
g_trimmed_fastqs_dir = check_or_set(g_trimmed_fastqs_dir, g_fastqs_dir)
print(describe_var_list(['g_trimmed_fastqs_dir']))
from ccbbucsd.utilities.files_and_paths import verify_or_make_dir
verify_or_make_dir(g_trimmed_fastqs_dir)
Explanation: Automated Set-Up
End of explanation
from ccbbucsd.utilities.notebook_logging import set_stdout_info_logger
set_stdout_info_logger()
Explanation: Info Logging Pass-Through
End of explanation
# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/scaffold_trim.py
# standard libraries
import enum
# third-party libraries
import cutadapt.scripts.cutadapt
# ccbb libraries
from ccbbucsd.utilities.files_and_paths import get_file_name_pieces, make_file_path
__author__ = 'Amanda Birmingham'
__maintainer__ = "Amanda Birmingham"
__email__ = "abirmingham@ucsd.edu"
__status__ = "prototype"
class TrimType(enum.Enum):
FIVE = "5"
THREE = "3"
FIVE_THREE = "53"
def get_trimmed_suffix(trimtype):
return "_trimmed{0}.fastq".format(trimtype.value)
def trim_linked_scaffold(output_dir, fastq_fp, scaffold_seq_5p, scaffold_seq_3p, quiet=True):
args = ["-a", "{0}...{1}".format(scaffold_seq_5p,scaffold_seq_3p)]
return _run_cutadapt(output_dir, fastq_fp, TrimType.FIVE_THREE, args, quiet)
def trim_global_scaffold(output_dir, fastq_fp, scaffold_seq_5p=None, scaffold_seq_3p=None, quiet=True):
curr_fastq_fp = fastq_fp
if scaffold_seq_5p is not None:
curr_fastq_fp = _run_cutadapt_global(output_dir, curr_fastq_fp, scaffold_seq_5p, True, quiet)
if scaffold_seq_3p is not None:
curr_fastq_fp = _run_cutadapt_global(output_dir, curr_fastq_fp, scaffold_seq_3p, False, quiet)
return curr_fastq_fp
def _run_cutadapt_global(output_dir, input_fastq_fp, seq_to_trim, is_5p, quiet):
end_switch = "-g"
end_name = TrimType.FIVE
if not is_5p:
end_switch = "-a"
end_name = TrimType.THREE
args = [end_switch, seq_to_trim]
return _run_cutadapt(output_dir, input_fastq_fp, end_name, args, quiet)
def _run_cutadapt(output_dir, input_fastq_fp, trim_name, partial_args, quiet):
_, input_base, _ = get_file_name_pieces(input_fastq_fp)
output_fastq_fp = make_file_path(output_dir, input_base, get_trimmed_suffix(trim_name))
args = [x for x in partial_args]
if quiet:
args.append("--quiet")
args.extend(["-o", output_fastq_fp, input_fastq_fp])
cutadapt.scripts.cutadapt.main(args)
return output_fastq_fp
def trim_fw_and_rv_reads(output_dir, full_5p_r1, full_3p_r1, full_5p_r2, full_3p_r2, fw_fastq_fp, rv_fastq_fp):
trim_linked_scaffold(output_dir, fw_fastq_fp, full_5p_r1, full_3p_r1)
trim_linked_scaffold(output_dir, rv_fastq_fp, full_5p_r2, full_3p_r2)
Explanation: Scaffold Trimming Functions
End of explanation
g_seq_file_ext_name = ".fastq"
g_gzip_ext_name = ".gz"
from ccbbucsd.utilities.files_and_paths import summarize_filenames_for_prefix_and_suffix
print(summarize_filenames_for_prefix_and_suffix(g_fastqs_dir, "",
"{0}{1}".format(g_seq_file_ext_name, g_gzip_ext_name),
all_subdirs=True))
Explanation: Gzipped FASTQ Filenames
End of explanation
from ccbbucsd.utilities.files_and_paths import gunzip_wildpath, move_to_dir_and_flatten
def unzip_and_flatten_seq_files(top_fastqs_dir, ext_name, gzip_ext_name, keep_gzs):
# first, recursively unzip all fastq.gz files anywhere under the input dir
gunzip_wildpath(top_fastqs_dir, ext_name + gzip_ext_name, keep_gzs, True) # True = do recursive
# now move all fastqs to top-level directory so don't have to work recursively in future
move_to_dir_and_flatten(top_fastqs_dir, top_fastqs_dir, ext_name)
# False = don't keep gzs as well as expanding, True = do keep them (True only works for gzip 1.6+)
unzip_and_flatten_seq_files(g_fastqs_dir, g_seq_file_ext_name, g_gzip_ext_name, False)
Explanation: FASTQ Gunzip Execution
End of explanation
print(summarize_filenames_for_prefix_and_suffix(g_fastqs_dir, "", g_seq_file_ext_name))
Explanation: FASTQ Filenames
End of explanation
from ccbbucsd.utilities.parallel_process_fastqs import parallel_process_paired_reads, concatenate_parallel_results
g_parallel_results = parallel_process_paired_reads(g_fastqs_dir, g_seq_file_ext_name, g_num_processors,
trim_fw_and_rv_reads, [g_trimmed_fastqs_dir, g_full_5p_r1,
g_full_3p_r1, g_full_5p_r2, g_full_3p_r2])
print(concatenate_parallel_results(g_parallel_results))
Explanation: Scaffold Trim Execution
End of explanation
print(summarize_filenames_for_prefix_and_suffix(g_trimmed_fastqs_dir, "", get_trimmed_suffix(TrimType.FIVE_THREE)))
Explanation: Trimmed FASTQ Filenames
End of explanation |
15,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Joint Tour Participation
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
Step4: Utility specification
Step5: Chooser data
Step6: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
Step7: Estimated coefficients
Step8: Output Estimation Results
Step9: Write the model estimation report, including coefficient t-statistic and log likelihood
Step10: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. | Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
Explanation: Estimating Joint Tour Participation
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
os.chdir('test')
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
modelname = "joint_tour_participation"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
Explanation: Load data and prep model for estimation
End of explanation
data.coefficients
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
data.spec
Explanation: Utility specification
End of explanation
data.chooser_data
Explanation: Chooser data
End of explanation
model.estimate()
model.dataframes.choice_avail_summary()
Explanation: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
model.parameter_summary()
Explanation: Estimated coefficients
End of explanation
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
Explanation: Output Estimation Results
End of explanation
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation |
15,887 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a set of objects and their positions over time. I would like to get the distance between each car and their nearest neighbour, and calculate an average of this for each time point. An example dataframe is as follows: | Problem:
import pandas as pd
time = [0, 0, 0, 1, 1, 2, 2]
x = [216, 218, 217, 280, 290, 130, 132]
y = [13, 12, 12, 110, 109, 3, 56]
car = [1, 2, 3, 1, 3, 4, 5]
df = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
import numpy as np
def g(df):
time = df.time.tolist()
car = df.car.tolist()
nearest_neighbour = []
euclidean_distance = []
for i in range(len(df)):
n = 0
d = np.inf
for j in range(len(df)):
if df.loc[i, 'time'] == df.loc[j, 'time'] and df.loc[i, 'car'] != df.loc[j, 'car']:
t = np.sqrt(((df.loc[i, 'x'] - df.loc[j, 'x'])**2) + ((df.loc[i, 'y'] - df.loc[j, 'y'])**2))
if t < d:
d = t
n = df.loc[j, 'car']
nearest_neighbour.append(n)
euclidean_distance.append(d)
return pd.DataFrame({'time': time, 'car': car, 'nearest_neighbour': nearest_neighbour, 'euclidean_distance': euclidean_distance})
df = g(df.copy()) |
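An optional vectorized sketch of the same computation (assuming scipy is available), rebuilt from the original time/x/y/car lists defined above; it assumes at least two cars per time point:
from scipy.spatial.distance import cdist
raw = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
def nearest_per_time(group):
    pts = group[['x', 'y']].to_numpy(dtype=float)
    d = cdist(pts, pts)                 # pairwise distances within one time point
    np.fill_diagonal(d, np.inf)         # a car is not its own neighbour
    idx = d.argmin(axis=1)
    return pd.DataFrame({'time': group['time'].to_numpy(),
                         'car': group['car'].to_numpy(),
                         'nearest_neighbour': group['car'].to_numpy()[idx],
                         'euclidean_distance': d.min(axis=1)})
df_vectorized = pd.concat([nearest_per_time(g) for _, g in raw.groupby('time')], ignore_index=True)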
15,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alright in this section we're going to continue with the running data set but we're going to dive a bit deeper into ways of analyzing the data including filtering, dropping rows, doing some groupings and that sort of thing.
So what we'll do is read in our csv file.
Step1: Now let's talk about getting some summary statistics. I would encourage you to try these on your own. We've learned pretty much everything we need in order to do these without guidance, and as always, if you need clarification just ask on the side.
What was the longest run in miles and minutes that I ran?
Step2: What about the shortest in miles and minutes that I ran?
Step3: We forgot to ignore our null values, so how would we do it by ignoring those?
Step4: What was the most common running distance I did excluding times when I didn't run at all.
Step5: Plot a graph of the cumulative running distance in this dataset.
Step6: Plot a graph of the cumulative running hours in this data set.
Step7: Another interesting question we could ask is what days of the week do I commonly go for runs. Am I faster on certain days or does my speed improve over time relative to the distance that I'm running.
So let's get our days of the week
Step8: We will do that by mapping our date column to the time format we need
Step9: then we just set that to a new column.
Step10: and we can make a bar plot of it, but let's see if we can distinguish anything unique about certain days of the week.
Step11: We will do that by creating groups of data frames
We can see that in this sample I run a lot more on Friday, Saturday, and Monday. Some interesting patterns. Why don't we try looking at the means and that sort of thing.
But before we get there, at this point our data frame is getting pretty messy, and I think it's worth explaining how to add and remove rows and columns.
First let's remove the Time column - seeing as we already have minutes and seconds
Step12: del will delete it in place
Step13: Finally we can use drop to drop a column. Now we have to specify the axis (we can also use this to drop rows); note that this does not happen in place.
Step14: we can also use drop to drop a specific row by specifying the 0 axis
Step15: We already saw how to create a new column; we can also create a new row using the append method. This takes in a data frame or Series and appends it to the end of the data frame.
Step16: We can also pop out a column which will remove it from a data frame and return the Series. You'll see that it happens in place.
Step17: Now we've made our dataset a bit more manageable. We've kind of just got the basics of what we need to perform some groupwise analysis.
Now at this point we're going to do some groupings. This is an extremely powerful part of pandas and one that you'll use all the time.
pandas follows the Split-Apply-Combine style of data analysis.
Many data analysis problems involve the application of a split-apply-combine strategy, where you break up a big problem into manageable pieces, operate on each piece independently and then put all the pieces back together.
Hadley Wickham from Rice University
Step18: This is clearly an ugly way to do this, and pandas provides a much simpler way of approaching this problem by creating a groupby object.
But first I'm going to filter out our zero values because they'll throw off our analysis.
Step19: We can get the size of each one by using the size command. This basically tells us how many items are in each category.
Step20: Now we have our groups and we can start doing groupwise analysis, now what does that mean?
It means we can start answering questions like what is the average speed per weekday or what is the total miles run per weekday?
Step21: It might be interesting to see the total sum of the amount of runs to try and see any outliers simply because Thursday, Friday, Saturday are close in distances, relatively, but not so much in speed.
We also get access to a lot of summary statistics from here that we can get from the groups.
Step22: iterating through the groups is also very straightforward
Step23: you can get specific groups by using the get_group method.
Step24: We can use an aggregation command to perform an operation to get all the counts for each data frame.
Step25: another way to do this would be to add a count column to our data frame, then sum up each column | Python Code:
pd.read_csv?
list(range(1,7))
df = pd.read_csv('../data/date_fixed_running_data_with_time.csv', parse_dates=['Date'], usecols=list(range(0,6)))
df.dtypes
df.sort(inplace=True)
df.head()
Explanation: Alright in this section we're going to continue with the running data set but we're going to dive a bit deeper into ways of analyzing the data including filtering, dropping rows, doing some groupings and that sort of thing.
So what we'll do is read in our csv file.
End of explanation
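Before diving into the summary statistics, a quick check for missing values (a small aside) helps explain some of the results below:
df.isnull().sum()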
df.Minutes.max()
df.Miles.max()
Explanation: Now let's talk about getting some summary statistics. I would encourage you to try these on your own. We've learned pretty much everything we need in order to do these without guidance, and as always, if you need clarification just ask on the side.
What was the longest run in miles and minutes that I ran?
End of explanation
df.Minutes.min()
df.Miles.min()
Explanation: What about the shortest in miles and minutes that I ran?
End of explanation
df.Miles[df.Miles > 0].min()
Explanation: We forgot to ignore our null values, so how would we do it by ignoring those?
End of explanation
df.Miles[df.Miles > 0].value_counts().index[0]
Explanation: What was the most common running distance I did excluding times when I didn't run at all.
End of explanation
df.Miles.cumsum().plot()
plt.xlabel("Day Number")
plt.ylabel("Distance")
Explanation: Plot a graph of the cumulative running distance in this dataset.
End of explanation
(df.Minutes.fillna(0).cumsum() / 60).plot()
Explanation: Plot a graph of the cumulative running hours in this data set.
End of explanation
df.Date[0].strftime("%A")
Explanation: Another interesting question we could ask is what days of the week do I commonly go for runs. Am I faster on certain days or does my speed improve over time relative to the distance that I'm running.
So let's get our days of the week
End of explanation
df.Date.map(lambda x: x.strftime("%A")).head()
Explanation: We will do that by mapping our date column to the time format we need
End of explanation
df['Day_of_week'] = df.Date.map(lambda x: x.strftime("%A"))
df.head(10)
Explanation: then we just set that to a new column.
End of explanation
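As an aside, on pandas 0.23 or newer the same labels are available straight from the datetime accessor, which avoids the row-by-row strftime call:
df.Date.dt.day_name().head()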
df[df.Miles > 0].Day_of_week.value_counts().plot(kind='bar')
Explanation: and we can make a bar plot of it, but let's see if we can distinguish anything unique about certain days of the week.
End of explanation
del(df['Time'])
Explanation: We will do that by creating groups of data frames
We can see that in this sample I run a lot more on Friday, Saturday, and Monday. Some interesting patterns. Why don't we try looking at the means and that sort of thing.
But before we get there, at this point our data frame is getting pretty messy, and I think it's worth explaining how to add and remove rows and columns.
First let's remove the Time column - seeing as we already have minutes and seconds
End of explanation
df.head()
Explanation: del will delete it in place
End of explanation
df.drop('Seconds',axis=1)
Explanation: Finally we can use drop to drop a column. Now we have to specify the axis (we can also use this to drop rows); note that this does not happen in place.
End of explanation
tempdf = pd.DataFrame(np.arange(4).reshape(2,2))
tempdf
tempdf.drop(1,axis=0)
Explanation: we can also use drop to drop a specific row by specifying the 0 axis
End of explanation
tempdf.append(pd.Series([4,5]), ignore_index=True)
df.head()
Explanation: We already saw how to create a new column; we can also create a new row using the append method. This takes in a data frame or Series and appends it to the end of the data frame.
End of explanation
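Worth noting: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on newer versions the same row append is written with pd.concat (a sketch):
pd.concat([tempdf, pd.DataFrame([[4, 5]], columns=tempdf.columns)], ignore_index=True)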
df.pop('Seconds')
df.head()
Explanation: We can also pop out a column which will remove it from a data frame and return the Series. You'll see that it happens in place.
End of explanation
for dow in df.Day_of_week.unique():
print(dow)
print(df[df.Day_of_week == dow])
break
Explanation: Now we've made our dataset a bit more manageable. We've kind of just got the basics of what we need to perform some groupwise analysis.
Now at this point we're going to do some groupings. This is an extremely powerful part of pandas and one that you'll use all the time.
pandas follows the Split-Apply-Combine style of data analysis.
Many data analysis problems involve the application of a split-apply-combine strategy, where you break up a big problem into manageable pieces, operate on each piece independently and then put all the pieces back together.
Hadley Wickham from Rice University:
http://www.jstatsoft.org/v40/i01/paper
Since we're going to want to check things in groups, what I'm going to do is analyze each day of the week to see if there are any differences in the types of running that I do on those days.
We'll start by grouping our data set on those weekdays, basically creating a dictionary of the data where the key is the weekday and the value is the dataframe of all those values.
First let's do this the hard way....
End of explanation
df['Miles'] = df.Miles[df.Miles > 0]
dows = df.groupby('Day_of_week')
print(dows)
Explanation: This is clearly an ugly way to do this, and pandas provides a much simpler way of approaching this problem by creating a groupby object.
But first I'm going to filter out our zero values because they'll throw off our analysis.
End of explanation
dows.size()
dows.count()
Explanation: We can get the size of each one by using the size command. This basically tells us how many items are in each category.
End of explanation
dows.mean()
dows.sum()
Explanation: Now we have our groups and we can start doing groupwise analysis, now what does that mean?
It means we can start answering questions like what is the average speed per weekday or what is the total miles run per weekday?
End of explanation
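To actually answer the speed question, one sketch is to derive a pace (minutes per mile, using the existing columns) and average it per weekday, without adding a new column to the frame:
(df.Minutes / df.Miles).groupby(df.Day_of_week).mean()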
dows.describe()
df.groupby('Day_of_week').mean()
df.groupby('Day_of_week').std()
Explanation: It might be interesting to see the total sum of the amount of runs to try and see any outliers simply because Thursday, Friday, Saturday are close in distances, relatively, but not so much in speed.
We also get access to a lot of summary statistics from here that we can get from the groups.
End of explanation
for name, group in dows:
print(name)
print(group)
Explanation: iterating through the groups is also very straightforward
End of explanation
dows.get_group('Friday')
Explanation: you can get specific groups by using the get_group method.
End of explanation
dows.agg(lambda x: len(x))['Miles']
Explanation: We can use an aggregation command to perform an operation to get all the counts for each data frame.
End of explanation
df['Count'] = 1
df.head(10)
df.groupby('Day_of_week').sum()
Explanation: another way to do this would be to add a count column to our data frame, then sum up each column
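Yet another sketch of the same summary, without the helper column, is a pivot table with several aggregations at once:
df.pivot_table(index='Day_of_week', values='Miles', aggfunc=['sum', 'mean', 'count'])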
End of explanation |
15,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
=======================================
Receiver Operating Characteristic (ROC)
=======================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate
classifier output quality.
ROC curves typically feature true positive rate on the Y axis, and false
positive rate on the X axis. This means that the top left corner of the plot is
the "ideal" point - a false positive rate of zero, and a true positive rate of
one. This is not very realistic, but it does mean that a larger area under the
curve (AUC) is usually better.
The "steepness" of ROC curves is also important, since it is ideal to maximize
the true positive rate while minimizing the false positive rate.
Multiclass settings
ROC curves are typically used in binary classification to study the output of
a classifier. In order to extend ROC curve and ROC area to multi-class
or multi-label classification, it is necessary to binarize the output. One ROC
curve can be drawn per label, but one can also draw a ROC curve by considering
each element of the label indicator matrix as a binary prediction
(micro-averaging).
Another evaluation measure for multi-class classification is
macro-averaging, which gives equal weight to the classification of each
label.
.. note:: See also :func:`sklearn.metrics.roc_auc_score`, :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py`.
Step1: Plot of a ROC curve for a specific class
Step2: Plot ROC curves for the multiclass problem | Python Code:
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
Explanation: =======================================
Receiver Operating Characteristic (ROC)
=======================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate
classifier output quality.
ROC curves typically feature true positive rate on the Y axis, and false
positive rate on the X axis. This means that the top left corner of the plot is
the "ideal" point - a false positive rate of zero, and a true positive rate of
one. This is not very realistic, but it does mean that a larger area under the
curve (AUC) is usually better.
The "steepness" of ROC curves is also important, since it is ideal to maximize
the true positive rate while minimizing the false positive rate.
Multiclass settings
ROC curves are typically used in binary classification to study the output of
a classifier. In order to extend ROC curve and ROC area to multi-class
or multi-label classification, it is necessary to binarize the output. One ROC
curve can be drawn per label, but one can also draw a ROC curve by considering
each element of the label indicator matrix as a binary prediction
(micro-averaging).
Another evaluation measure for multi-class classification is
macro-averaging, which gives equal weight to the classification of each
label.
.. note::
See also :func:`sklearn.metrics.roc_auc_score`,
:ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py`.
End of explanation
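As a quick cross-check (a sketch), the micro- and macro-averaged areas can also be computed directly with roc_auc_score; the macro value may differ slightly from the interpolated macro curve constructed below.
from sklearn.metrics import roc_auc_score
print("micro AUC:", roc_auc_score(y_test, y_score, average="micro"))
print("macro AUC:", roc_auc_score(y_test, y_score, average="macro"))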
plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
Explanation: Plot of a ROC curve for a specific class
End of explanation
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
Explanation: Plot ROC curves for the multiclass problem
End of explanation |
15,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HMM with Poisson observations for detecting changepoints in the rate of a signal
This notebook is based on the
Multiple Changepoint Detection and Bayesian Model Selection Notebook of TensorFlow
Step1: Data
The synthetic data corresponds to a single time series of counts, where the rate of the underlying generative process changes at certain points in time.
Step2: Model with fixed $K$
To model the changing Poisson rate, we use an HMM. We initially assume the number of states is known to be $K=4$. Later we will try comparing HMMs with different $K$.
We fix the initial state distribution to be uniform, and fix the transition matrix to be the following, where we set $p=0.05$
Step4: Now we create an HMM where the observation distribution is a Poisson with learnable parameters. We specify the parameters in log space and initialize them to random values around the log of the overall mean count (to set the scal
Step5: Model fitting using Gradient Descent
We compute a MAP estimate of the Poisson rates $\lambda$ using batch gradient descent, using the Adam optimizer applied to the log likelihood (from the HMM) plus the log prior for $p(\lambda)$.
Step6: We see that the method learned a good approximation to the true (generating) parameters, up to a permutation of the states (since the labels are unidentifiable). However, results can vary with different random seeds. We may find that the rates are the same for some states, which means those states are being treated as identical, and are therefore redundant.
Plotting the posterior over states
Step7: Model with unknown $K$
In general we don't know the true number of states. One way to select the 'best' model is to compute the one with the maximum marginal likelihood. Rather than summing over both discrete latent states and integrating over the unknown parameters $\lambda$, we just maximize over the parameters (empirical Bayes approximation).
$$p(x_{1:T}|K) \approx \max_\lambda \int p(x_{1:T}, z_{1:T} | \lambda, K) dz$$
Step8: Model fitting with gradient descent
Step9: Plot marginal likelihood of each model
Step10: Plot posteriors | Python Code:
from IPython.utils import io
with io.capture_output() as captured:
!pip install distrax
!pip install flax
import logging
logging.getLogger("absl").setLevel(logging.CRITICAL)
import numpy as np
import jax
from jax.random import split, PRNGKey
import jax.numpy as jnp
from jax import jit, lax, vmap
from jax.experimental import optimizers
import tensorflow_probability as tfp
from matplotlib import pylab as plt
%matplotlib inline
import scipy.stats
import distrax
from distrax import HMM
Explanation: HMM with Poisson observations for detecting changepoints in the rate of a signal
This notebook is based on the
Multiple Changepoint Detection and Bayesian Model Selection Notebook of TensorFlow
End of explanation
true_rates = [40, 3, 20, 50]
true_durations = [10, 20, 5, 35]
random_state = 0
observed_counts = jnp.concatenate(
[
scipy.stats.poisson(rate).rvs(num_steps, random_state=random_state)
for (rate, num_steps) in zip(true_rates, true_durations)
]
).astype(jnp.float32)
plt.plot(observed_counts);
Explanation: Data
The synthetic data corresponds to a single time series of counts, where the rate of the underlying generative process changes at certain points in time.
End of explanation
def build_latent_state(num_states, max_num_states, daily_change_prob):
# Give probability 0 to states outside of the current model.
def prob(s):
return jnp.where(s < num_states + 1, 1 / num_states, 0.0)
states = jnp.arange(1, max_num_states + 1)
initial_state_probs = vmap(prob)(states)
# Build a transition matrix that transitions only within the current
# `num_states` states.
def transition_prob(i, s):
return jnp.where(
(s <= num_states) & (i <= num_states) & (1 < num_states),
jnp.where(s == i, 1 - daily_change_prob, daily_change_prob / (num_states - 1)),
jnp.where(s == i, 1, 0),
)
transition_probs = vmap(transition_prob, in_axes=(None, 0))(states, states)
return initial_state_probs, transition_probs
num_states = 4
daily_change_prob = 0.05
initial_state_probs, transition_probs = build_latent_state(num_states, num_states, daily_change_prob)
print("Initial state probs:\n{}".format(initial_state_probs))
print("Transition matrix:\n{}".format(transition_probs))
Explanation: Model with fixed $K$
To model the changing Poisson rate, we use an HMM. We initially assume the number of states is known to be $K=4$. Later we will try comparing HMMs with different $K$.
We fix the initial state distribution to be uniform, and fix the transition matrix to be the following, where we set $p=0.05$:
$$ \begin{align} z_1 &\sim \text{Categorical}\left(\left{\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}\right}\right)\ z_t | z_{t-1} &\sim \text{Categorical}\left(\left{\begin{array}{cc}p & \text{if } z_t = z_{t-1} \ \frac{1-p}{4-1} & \text{otherwise}\end{array}\right}\right) \end{align}$$
End of explanation
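A small sanity check (a sketch) that the padded initial distribution and each row of the transition matrix are proper probability distributions:
assert jnp.allclose(initial_state_probs.sum(), 1.0)
assert jnp.allclose(transition_probs.sum(axis=-1), 1.0)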
def make_hmm(log_rates, transition_probs, initial_state_probs):
    """Make a Hidden Markov Model with Poisson observation distribution."""
return HMM(
obs_dist=tfp.substrates.jax.distributions.Poisson(log_rate=log_rates),
trans_dist=distrax.Categorical(probs=transition_probs),
init_dist=distrax.Categorical(probs=initial_state_probs),
)
rng_key = PRNGKey(0)
rng_key, rng_normal, rng_poisson = split(rng_key, 3)
# Define variable to represent the unknown log rates.
trainable_log_rates = jnp.log(jnp.mean(observed_counts)) + jax.random.normal(rng_normal, (num_states,))
hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
Explanation: Now we create an HMM where the observation distribution is a Poisson with learnable parameters. We specify the parameters in log space and initialize them to random values around the log of the overall mean count (to set the scale).
End of explanation
def loss_fn(trainable_log_rates, transition_probs, initial_state_probs):
cur_hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
return -(jnp.sum(rate_prior.log_prob(jnp.exp(trainable_log_rates))) + cur_hmm.forward(observed_counts)[0])
def update(i, opt_state, transition_probs, initial_state_probs):
params = get_params(opt_state)
loss, grads = jax.value_and_grad(loss_fn)(params, transition_probs, initial_state_probs)
return opt_update(i, grads, opt_state), loss
def fit(trainable_log_rates, transition_probs, initial_state_probs, n_steps):
opt_state = opt_init(trainable_log_rates)
def train_step(opt_state, step):
opt_state, loss = update(step, opt_state, transition_probs, initial_state_probs)
return opt_state, loss
steps = jnp.arange(n_steps)
opt_state, losses = lax.scan(train_step, opt_state, steps)
return get_params(opt_state), losses
rate_prior = distrax.LogStddevNormal(5, 5)
opt_init, opt_update, get_params = optimizers.adam(1e-1)
n_steps = 201
params, losses = fit(trainable_log_rates, transition_probs, initial_state_probs, n_steps)
rates = jnp.exp(params)
hmm = make_hmm(params, transition_probs, initial_state_probs)
print("Inferred rates: {}".format(rates))
print("True rates: {}".format(true_rates))
plt.plot(losses)
plt.ylabel("Negative log marginal likelihood");
Explanation: Model fitting using Gradient Descent
We compute a MAP estimate of the Poisson rates $\lambda$ using batch gradient descent, using the Adam optimizer applied to the log likelihood (from the HMM) plus the log prior for $p(\lambda)$.
End of explanation
_, _, posterior_probs, _ = hmm.forward_backward(observed_counts)
def plot_state_posterior(ax, state_posterior_probs, title):
ln1 = ax.plot(state_posterior_probs, c="tab:blue", lw=3, label="p(state | counts)")
ax.set_ylim(0.0, 1.1)
ax.set_ylabel("posterior probability")
ax2 = ax.twinx()
ln2 = ax2.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax2.set_title(title)
ax2.set_xlabel("time")
lns = ln1 + ln2
labs = [l.get_label() for l in lns]
ax.legend(lns, labs, loc=4)
ax.grid(True, color="white")
ax2.grid(False)
fig = plt.figure(figsize=(10, 10))
plot_state_posterior(fig.add_subplot(2, 2, 1), posterior_probs[:, 0], title="state 0 (rate {:.2f})".format(rates[0]))
plot_state_posterior(fig.add_subplot(2, 2, 2), posterior_probs[:, 1], title="state 1 (rate {:.2f})".format(rates[1]))
plot_state_posterior(fig.add_subplot(2, 2, 3), posterior_probs[:, 2], title="state 2 (rate {:.2f})".format(rates[2]))
plot_state_posterior(fig.add_subplot(2, 2, 4), posterior_probs[:, 3], title="state 3 (rate {:.2f})".format(rates[3]))
plt.tight_layout()
print(rates)
# max marginals
most_probable_states = jnp.argmax(posterior_probs, axis=-1)
most_probable_rates = rates[most_probable_states]
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(1, 1, 1)
ax.plot(most_probable_rates, c="tab:green", lw=3, label="inferred rate")
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("Inferred latent rate over time")
ax.legend(loc=4);
# max probaility trajectory (Viterbi)
most_probable_states = hmm.viterbi(observed_counts)
most_probable_rates = rates[most_probable_states]
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(1, 1, 1)
color_list = np.array(["tab:red", "tab:green", "tab:blue", "k"])
colors = color_list[most_probable_states]
for i in range(len(colors)):
ax.plot(i, most_probable_rates[i], "-o", c=colors[i], lw=3, alpha=0.75)
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("Inferred latent rate over time");
Explanation: We see that the method learned a good approximation to the true (generating) parameters, up to a permutation of the states (since the labels are unidentifiable). However, results can vary with different random seeds. We may find that the rates are the same for some states, which means those states are being treated as identical, and are therefore redundant.
Plotting the posterior over states
End of explanation
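Because the state labels are unidentifiable, a quick way to compare against the generating process (a sketch) is to sort both rate vectors before comparing:
print("sorted inferred rates:", jnp.sort(rates))
print("sorted true rates:    ", jnp.sort(jnp.array(true_rates, dtype=jnp.float32)))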
max_num_states = 6
states = jnp.arange(1, max_num_states + 1)
# For each candidate model, build the initial state prior and transition matrix.
batch_initial_state_probs, batch_transition_probs = vmap(build_latent_state, in_axes=(0, None, None))(
states, max_num_states, daily_change_prob
)
print("Shape of initial_state_probs: {}".format(batch_initial_state_probs.shape))
print("Shape of transition probs: {}".format(batch_transition_probs.shape))
print("Example initial state probs for num_states==3:\n{}".format(batch_initial_state_probs[2, :]))
print("Example transition_probs for num_states==3:\n{}".format(batch_transition_probs[2, :, :]))
rng_key, rng_normal = split(rng_key)
# Define variable to represent the unknown log rates.
trainable_log_rates = jnp.log(jnp.mean(observed_counts)) + jax.random.normal(rng_normal, (max_num_states,))
Explanation: Model with unknown $K$
In general we don't know the true number of states. One way to select the 'best' model is to compute the one with the maximum marginal likelihood. Rather than summing over both discrete latent states and integrating over the unknown parameters $\lambda$, we just maximize over the parameters (empirical Bayes approximation).
$$p(x_{1:T}|K) \approx \max_\lambda \int p(x_{1:T}, z_{1:T} | \lambda, K) dz$$
We can do this by fitting a bank of separate HMMs in parallel, one for each value of $K$. We need to make them all the same size so we can batch them efficiently. To do this, we pad the transition matrices (and other paraemeter vectors) so they all have the same shape, and then use masking.
End of explanation
n_steps = 201
params, losses = vmap(fit, in_axes=(None, 0, 0, None))(
trainable_log_rates, batch_transition_probs, batch_initial_state_probs, n_steps
)
rates = jnp.exp(params)
plt.plot(losses.T)
plt.ylabel("Negative log marginal likelihood");
Explanation: Model fitting with gradient descent
End of explanation
plt.plot(-losses[:, -1])
plt.ylim([-400, -200])
plt.ylabel("marginal likelihood $\\tilde{p}(x)$")
plt.xlabel("number of latent states")
plt.title("Model selection on latent states");
Explanation: Plot marginal likelihood of each model
End of explanation
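A sketch of picking the winning model from the curve above: take the candidate with the largest final (approximate) marginal likelihood.
best_num_states = int(jnp.argmax(-losses[:, -1])) + 1  # models are indexed from one state upward
print("selected number of states:", best_num_states)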
for i, learned_model_rates in enumerate(rates):
print("rates for {}-state model: {}".format(i + 1, learned_model_rates[: i + 1]))
def posterior_marginals(trainable_log_rates, initial_state_probs, transition_probs):
hmm = make_hmm(trainable_log_rates, transition_probs, initial_state_probs)
_, _, marginals, _ = hmm.forward_backward(observed_counts)
return marginals
posterior_probs = vmap(posterior_marginals, in_axes=(0, 0, 0))(
params, batch_initial_state_probs, batch_transition_probs
)
most_probable_states = jnp.argmax(posterior_probs, axis=-1)
fig = plt.figure(figsize=(14, 12))
for i, learned_model_rates in enumerate(rates):
ax = fig.add_subplot(4, 3, i + 1)
ax.plot(learned_model_rates[most_probable_states[i]], c="green", lw=3, label="inferred rate")
ax.plot(observed_counts, c="black", alpha=0.3, label="observed counts")
ax.set_ylabel("latent rate")
ax.set_xlabel("time")
ax.set_title("{}-state model".format(i + 1))
ax.legend(loc=4)
plt.tight_layout()
Explanation: Plot posteriors
End of explanation |
15,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) filter from single trial
activity to estimate source power for two frequencies of interest.
The original reference for DICS is
Step1: Read raw data | Python Code:
# Author: Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.time_frequency import csd_epochs
from mne.beamformer import dics_source_power
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
Explanation: Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) filter from single trial
activity to estimate source power for two frequencies of interest.
The original reference for DICS is:
Gross et al. Dynamic imaging of coherent sources: Studying neural interactions
in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
End of explanation
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Computing the data and noise cross-spectral density matrices
# The time-frequency window was chosen on the basis of spectrograms from
# example time_frequency/plot_time_frequency.py
# As fsum is False csd_epochs returns a list of CrossSpectralDensity
# instances than can then be passed to dics_source_power
data_csds = csd_epochs(epochs, mode='multitaper', tmin=0.04, tmax=0.15,
fmin=15, fmax=30, fsum=False)
noise_csds = csd_epochs(epochs, mode='multitaper', tmin=-0.11,
tmax=-0.001, fmin=15, fmax=30, fsum=False)
# Compute DICS spatial filter and estimate source power
stc = dics_source_power(epochs.info, forward, noise_csds, data_csds)
clim = dict(kind='value', lims=[1.6, 1.9, 2.2])
for i, csd in enumerate(data_csds):
message = 'DICS source power at %0.1f Hz' % csd.frequencies[0]
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message, figure=i, clim=clim)
brain.set_data_time_index(i)
brain.show_view('lateral')
# Uncomment line below to save images
# brain.save_image('DICS_source_power_freq_%d.png' % csd.frequencies[0])
Explanation: Read raw data
End of explanation |
15,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-hr5', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-HR5
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
15,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting only closed loans
Step1: Investigating closed loans
features summary
Total loans
Step2: TODO
Step3: Investigate whether the two weird 'does not meet' categories should stay in there, are they really closed?
Next payment day is not NAN in the 'does not meet' categories.
Outstanding principal is all 0 (so not active anymore)
Indeed seems like older loans
--> seems they are in fact closed, so leave them in
Step4: something weird with policy 2?
http | Python Code:
# 887,379 loans in total
loans = pd.read_csv('../data/loan.csv')
loans['grade'] = loans['grade'].astype('category', ordered=True)
loans['last_pymnt_d'] = pd.to_datetime(loans['last_pymnt_d'])#.dt.strftime("%Y-%m-%d")
loans.shape
loans['loan_status'].unique()
# most loans are current
sns.countplot(loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.savefig('../figures/barplot_loan_statusses.jpg', bbox_inches='tight')
# exclude current loans leaves 256,939 (about 30%)
closed_status = ['Fully Paid', 'Charged Off',
'Does not meet the credit policy. Status:Fully Paid',
'Does not meet the credit policy. Status:Charged Off']
closed_loans = loans[loans['loan_status'].isin(closed_status)]
closed_loans.shape
sns.countplot(closed_loans['loan_status'], color='turquoise')
plt.xticks(rotation=90)
plt.savefig('../figures/barplot_loan_statusses_closed.jpg', bbox_inches='tight')
# two categories: paid/unpaid
paid_status = ['Fully Paid', 'Does not meet the credit policy. Status:Fully Paid']
closed_loans['paid'] = [True if loan in paid_status else False for loan in closed_loans['loan_status']]
sns.countplot(closed_loans['paid'])
plt.xticks(rotation=90)
Explanation: Selecting only closed loans
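A small side note: the boolean 'paid' column built above with a list comprehension can also be constructed with pandas' vectorized .isin(), which avoids the Python-level loop. A minimal equivalent sketch:
# equivalent, vectorized construction of the boolean 'paid' column
closed_loans['paid'] = closed_loans['loan_status'].isin(paid_status)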
End of explanation
# for 1,914 loans the loan amount is bigger than the funded amount
sum(closed_loans['loan_amnt'] != closed_loans['funded_amnt'])
# nr of null values per feature
nr_nulls = closed_loans.isnull().apply(sum, 0)
nr_nulls = nr_nulls[nr_nulls != 0]
ratio_missing = nr_nulls.sort_values(ascending=False) / 255720
ratio_missing.to_csv('../data/missing_ratio.txt', sep='\t')
ratio_missing
sns.distplot(closed_loans['funded_amnt'], kde=False, bins=50)
plt.savefig('../figures/funded_amount.jpg')
# closed loans about 20% are 60 months
# all loans lot of missing data, rest 30% are 60 months
sns.countplot(closed_loans['term'], color='darkblue')
plt.title('closed')
plt.savefig('../figures/term_closed.jpg')
plt.show()
sns.countplot(loans['term'])
plt.title('all')
Explanation: Investigating closed loans
features summary
Total loans: 256,939
Total features: 74
Loan
- id: loan
- loan_amnt: in 1,914 cases the loan amount is bigger than the funded amount
- funded_amnt
- funded_amnt_inv
- term: 36 or 60 months
- int_rate: interest rates
- installment: height monthly pay
- grade: A-G, A low risk, G high risk
- sub_grade
- issue_d: month-year loan was funded
- loan_status
- pymnt_plan: n/y
- url
- desc: description provided by borrower
- purpose: 'credit_card', 'car', 'small_business', 'other', 'wedding', 'debt_consolidation', 'home_improvement', 'major_purchase', 'medical', 'moving', 'vacation', 'house', 'renewable_energy','educational'
- title: provided by borrower
- initial_list_status: w/f (what is this?)
- out_prncp: outstanding principal --> still >0 in fully paid?!
- out_prncp_inv
- total_pymnt
- total_pymnt_inv
- total_rec_prncp
- total_rec_int: total received interest
- total_rec_late_fee
- recoveries: post charged off gross recovery
- collection_recovery_fee: post charged off collection fee
- last_pymnt_d
- last_pymnt_amnt
- next_pymnt_d
- collections_12_mths_ex_med: almost all 0
- policy_code: 1 publicly available, 2 not
- application_type (only 1 JOINT, rest INDIVIDUAL)
Borrower
- emp_title
- emp_length: 0-10 (10 stands for >=10)
- home_ownership: 'RENT', 'OWN', 'MORTGAGE', 'OTHER', 'NONE', 'ANY'
- member_id: person
- annual_inc (stated by borrower)
- verification_status: 'Verified', 'Source Verified', 'Not Verified' (income verified by LC?)
- zip_code
- addr_state
- dti: debt to income (without mortgage)
- delinq_2yrs: The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years
- mths_since_last_delinq
- mths_since_last_record
- pub_rec
- earliest_cr_line
- inq_last_6mths
- open_acc (nr of open credit lines)
- total_acc (nr of total credit lines in credit file)
- revol_bal
- last_credit_pull_d
- mths_since_last_major_derog: Months since most recent 90-day or worse rating
- acc_now_delinq: The number of accounts on which the borrower is now delinquent.
- tot_coll_amt: Total collection amounts ever owed
- tot_cur_bal: Total current balance of all accounts
- open_acc_6m: Number of open trades in last 6 months
- open_il_6m: Number of currently active installment trades
- open_il_12m: Number of installment accounts opened in past 12 months
- open_il_24m
- mths_since_rcnt_il: Months since most recent installment accounts opened
- total_bal_il: Total current balance of all installment accounts
- il_util: Ratio of total current balance to high credit/credit limit on all install acct
- open_rv_12m: Number of revolving trades opened in past 12 months
- open_rv_24m
- max_bal_bc: Maximum current balance owed on all revolving accounts
- all_util: Balance to credit limit on all trades
- total_rev_hi_lim: Total revolving high credit/credit limit
- inq_fi: Number of personal finance inquiries
- total_cu_tl: Number of finance trades
- inq_last_12m: Number of credit inquiries in past 12 months
Two borrowers (only in 1 case)
- annual_inc_joint
- dti_joint
- verification_status_joint
Difference between default and charged off
In general, a note goes into Default status when it is 121 or more days past due. When a note is in Default status, Charge Off occurs no later than 150 days past due (i.e. No later than 30 days after the Default status is reached) when there is no reasonable expectation of sufficient payment to prevent the charge off. However, bankruptcies may be charged off earlier based on date of bankruptcy notification.
--> so default is not closed yet (so threw that one out).
End of explanation
# higher interest rate more interesting for lenders
# higher grade gets higher interest rate (more risk)
# does it default more often?
# do you get richer from investing in grade A-C (less default?) or from D-G (more interest)?
fig = sns.distplot(closed_loans['int_rate'], kde=False, bins=50)
fig.set(xlim=(0, None))
plt.savefig('../figures/int_rates.jpg')
sns.boxplot(data=closed_loans, x='grade', y='int_rate', color='turquoise')
plt.savefig('../figures/boxplots_intrate_grade.jpg')
sns.stripplot(data=closed_loans, x='grade', y='int_rate', color='gray')
# closed_loans['collection_recovery_fee']
closed_loans['profit'] = (closed_loans['total_rec_int'] + closed_loans['total_rec_prncp']
+ closed_loans['total_rec_late_fee'] + closed_loans['recoveries']) - closed_loans['funded_amnt']
profits = closed_loans.groupby('grade')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='grade', y='profit', color='gray')
plt.savefig('../figures/profit_grades.jpg')
plt.show()
profits = closed_loans.groupby('paid')['profit'].sum()
sns.barplot(data=profits.reset_index(), x='paid', y='profit')
plt.show()
profits = closed_loans.groupby(['grade', 'paid'])['profit'].sum()
sns.barplot(data=profits.reset_index(), x='profit', y='grade', hue='paid', orient='h')
plt.savefig('../figures/profit_grades_paid.jpg')
plt.show()
# Sort of normally distributed --> statistically test whether the means are different?
sns.distplot(closed_loans[closed_loans['paid']==True]['int_rate'])
sns.distplot(closed_loans[closed_loans['paid']==False]['int_rate'])
plt.savefig('../figures/int_rate_paid.jpg')
grade_paid = closed_loans.groupby(['grade', 'paid'])['id'].count()
risk_grades = dict.fromkeys(closed_loans['grade'].unique())
for g in risk_grades.keys():
risk_grades[g] = grade_paid.loc[(g, False)] / (grade_paid.loc[(g, False)] + grade_paid.loc[(g, True)])
risk_grades = pd.DataFrame(risk_grades, index=['proportion_unpaid_loans'])
sns.stripplot(data=risk_grades, color='darkgray', size=15)
plt.savefig('../figures/proportion_grades.jpg')
# does the purpose matter for the chance of charged off?
sns.countplot(closed_loans['purpose'], color='turquoise')
plt.xticks(rotation=90)
plt.show()
purpose_paid = closed_loans.groupby(['purpose', 'paid'])['id'].count()
sns.barplot(data=pd.DataFrame(purpose_paid).reset_index(), x='purpose', y='id', hue='paid')
plt.xticks(rotation=90)
plt.savefig('../figures/purposes.jpg', bbox_inches='tight')
# debt to income
sns.boxplot(data=closed_loans, x='paid', y='dti')
plt.savefig('../figures/dti.jpg')
Explanation: TODO: interest questions
End of explanation
sns.countplot(closed_loans[closed_loans['next_pymnt_d'].notnull()]['loan_status'])
plt.xticks(rotation=90)
plt.savefig('../figures/last_payment_day.jpg', bbox_inches='tight')
plt.show()
print(closed_loans['loan_status'].value_counts())
new_loans = ['Fully Paid', 'Charged Off']
sns.countplot(data=closed_loans[~closed_loans['loan_status'].isin(new_loans)], x='last_pymnt_d', hue='loan_status')
plt.xticks([])
plt.savefig('../figures/last_payment_day_old.jpg')
plt.show()
sns.countplot(data=closed_loans[closed_loans['loan_status'].isin(new_loans)], x='last_pymnt_d', hue='loan_status')
plt.xticks([])
plt.savefig('../figures/last_payment_day_new.jpg')
plt.show()
closed_loans['out_prncp'].value_counts()
Explanation: Investigate whether the two weird 'does not meet' categories should stay in there, are they really closed?
Next payment day is not NAN in the 'does not meet' categories.
Outstanding principal is all 0 (so not active anymore)
Indeed seems like older loans
--> seems they are in fact closed, so leave them in
End of explanation
closed_loans['policy_code'].value_counts()
Explanation: something weird with policy 2?
http://www.lendacademy.com/forum/index.php?topic=2427.msg20813#msg20813
Only policy 1 loans in this case, so no problem.
End of explanation |
15,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3> Exploring how a mask (a list or range of indices) can be used to index a NumPy array </h3>
Step1: <h3> Exploring 2d array </h3>
Step2: <h3> Finding the L2 or euclidean distance based on test and train data </h3> | Python Code:
import numpy as np
# `a` is assumed to be a plain Python list defined in an earlier (not shown) cell, e.g.:
a = [10, 20, 30, 40, 50, 60]
mask = range(5)
a = np.array(a)
a[mask]
Explanation: <h3> Exploring how a mask (a list or range of indices) can be used to index a NumPy array </h3>
End of explanation
b = np.array([[1,2,3,4],[5,6,7,8]])
b
b[1,2]
b[1]
b[1,:]
b[:]
b[:,1]
b = np.array([1,2,3,4])
np.dot(b,b)
c = np.array([[1,2,3,4],[5,6,7,8]])
c
a = np.array([4,5,6,7])
c - a
a * a
a.sum(axis = 0)
a
c
c.sum(axis = 0)
c.sum(axis = 1)
Explanation: <h3> Exploring 2d array </h3>
End of explanation
train = np.array([[1,2,3,4],[5,6,7,8],[3,1,9,8]])
test = np.array([[1,2,3,4],[5,6,7,8],[1,1,1,1],[3,2,1,6]])
test.shape
train.shape
# we want a vectorized operation that yields the 3 x 4 matrix of
# pairwise Euclidean distances between the train and test samples
train.reshape(3,1,4)
test.shape
a = train.reshape(3,1,4) - test
a.shape
b = a.sum(axis = 2)
b.shape
b
a
a*a
(a*a).sum(axis = 2)
b
b.sum(axis = 1)
p = np.array([1,2,1,2,3,1,4,23])
np.bincount(p)
p.argmax()
#Array split
a = np.array(range(100))
p =np.array_split(a,6)
p
p
np.hstack(p[:2]+p[4:])
np.vstack(p[:2])
a = np.array([[1,2,3,4],[4,5,6,7]])
(a**2).sum(axis = 1)
b
b.sum(axis = 0)
b[0]
b[0,:]
a
np.square(b[0] - a).sum(axis = 1)
a = np.array([[1,2,3,4,5],[6,6,7,8,8],[7,7,7,8,1]])
a
a[np.arange(3),[1,2,3]].reshape(3,1)
np.maximum(a,a)
a = np.arange(100).reshape(5,20)
a
b = np.random.choice(89)
b
a[1:3]
a
np.argmax(a,axis = 1)
p = [1,2,3,4]
q = [5,6,7,8]
zip(p,q)
a = np.array([1,2,3,4])
a.sum()
relu = lambda x : x*(x > 0)
relu(5)
relu(0)
relu(-1)
np.logspace(2,3,5)
np.linspace(1,2,10)
Explanation: <h3> Finding the L2 (Euclidean) distance between the test and train data </h3>
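A minimal, self-contained sketch of the fully vectorized computation this section works towards (note that, unlike the intermediate b = a.sum(axis = 2) step above, the differences are squared before summing over the feature axis):
import numpy as np
train = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [3, 1, 9, 8]])                 # 3 samples x 4 features
test = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [1, 1, 1, 1], [3, 2, 1, 6]])    # 4 samples x 4 features
# broadcasting (3, 1, 4) - (4, 4) gives a (3, 4, 4) array of element-wise differences
diff = train.reshape(3, 1, 4) - test
# square, sum over the feature axis, take the root -> (3, 4) matrix of pairwise L2 distances
dists = np.sqrt((diff ** 2).sum(axis=2))
dists.shape   # (3, 4)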
End of explanation |
15,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Machine Learning and Encrypted Data
How can we do machine learning with encrypted data? This notebook demonstrates a principle described in CryptoNets
Step1: Principle
Machine learning on encrypted data relies on a homomorphic encryption algorithm. The concept was invented by Craig Gentry (see Fully Homomorphic Encryption Using Ideal Lattices, Fully Homomorphic Encryption over the Integers). We write $x \rightarrow \varepsilon(x)$ for a fully homomorphic encryption function. It satisfies | Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - Machine Learning and Encrypted Data
How can we do machine learning with encrypted data? This notebook demonstrates a principle described in CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy.
End of explanation
from sklearn.datasets import load_diabetes
data = load_diabetes()
X = data.data
Y = data.target
Explanation: Principle
Machine learning on encrypted data relies on a homomorphic encryption algorithm. The concept was invented by Craig Gentry (see Fully Homomorphic Encryption Using Ideal Lattices, Fully Homomorphic Encryption over the Integers). We write $x \rightarrow \varepsilon(x)$ for a fully homomorphic encryption function. It satisfies:
$$\begin{array}{ll}\varepsilon(x+y) = \varepsilon(x) + \varepsilon(y) \\ \varepsilon(x*y) = \varepsilon(x) * \varepsilon(y)\end{array}$$
In the example that follows we only need the encryption scheme to be partially homomorphic: only addition has to be preserved once the integers are encrypted.
An example: $\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$ with $\varepsilon(x) = (x * a) \mod n$. This means we can encrypt the data, compute with the encrypted values, and decrypt a result that is almost the same as if the computations had been done on the unencrypted data.
Exercise 1: write two functions, one to encrypt and one to decrypt
$n$ and $a$ must be chosen carefully to implement the encryption function $\varepsilon:\mathbb{N} \rightarrow \mathbb{Z}/n\mathbb{Z}$, $\varepsilon(x) = (x * a) \mod n$. We then check that it preserves addition modulo $n$.
Exercise 2: Train a linear regression
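A minimal sketch for Exercise 1 above; the modulus n, the multiplier a and the function names are illustrative assumptions, not an official solution (pow(a, -1, n) requires Python 3.8+):
n = 2147483647        # a prime modulus (2**31 - 1), illustrative choice
a = 387487            # multiplier, invertible modulo n since n is prime
inv_a = pow(a, -1, n) # modular inverse of a
def crypt(x):
    # epsilon(x) = (x * a) mod n
    return (x * a) % n
def decrypt(e):
    return (e * inv_a) % n
# the scheme preserves addition modulo n
x, y = 5, 6
assert decrypt(crypt(x) + crypt(y)) == (x + y) % n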
End of explanation |
15,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: This is some text, here comes some latex
Step2: Apos?
Step3: Javascript plots
plotly
Step4: bokeh | Python Code:
a = 1
a
b = 'pew'
b
%matplotlib inline
import matplotlib.pyplot as plt
from pylab import *
x = linspace(0, 5, 10)
y = x ** 2
figure()
plot(x, y, 'r')
xlabel('x')
ylabel('y')
title('title')
show()
import numpy as np
num_points = 130
y = np.random.random(num_points)
plt.plot(y)
Explanation: Title: Notebook with MD info in the first cell
Slug: md-info-in-cell
Date: 2100-12-31
Tags: Test
Author: Daniel Rodriguez
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur purus mi, sollicitudin ac justo a, dapibus ultrices dolor. Curabitur id eros mattis, tincidunt ligula at, condimentum urna. Morbi accumsan, risus eget porta consequat, tortor nibh blandit dui, in sodales quam elit non erat. Aenean lorem dui, lacinia a metus eu, accumsan dictum urna. Sed a egestas mauris, non porta nisi. Suspendisse eu lacinia neque. Morbi gravida eros non augue pharetra, condimentum auctor purus porttitor.
Header 2
End of explanation
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
Explanation: This is some text, here comes some latex
End of explanation
import re
text = 'foo bar\t baz \tqux'
re.split('\s+', text)
Explanation: Apos?
End of explanation
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.graph_objs import Scatter, Figure, Layout
init_notebook_mode(connected=True)
iplot([{"x": [1, 2, 3], "y": [3, 1, 6]}])
Explanation: Javascript plots
plotly
End of explanation
from bokeh.plotting import figure, output_notebook, show
output_notebook()
p = figure()
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
show(p)
Explanation: bokeh
End of explanation |
15,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Second, the instantiation of the class.
Step2: The following is an example list object containing datetime objects.
Step3: The call of the method get_forward_rates() yields the above time_list object and the simulated forward rates. In this case, 10 simulations.
Step4: Accordingly, the call of the get_discount_factors() method yields simulated zero-coupon bond prices for the time grid.
Step5: Stochastic Drifts
Let us use the stochastic short rate model to simulate a geometric Brownian motion with a stochastic short rate. Define the market environment as follows
Step6: Then add the stochastic_short_rate object as discount curve.
Step7: Finally, instantiate the geometric_brownian_motion object.
Step8: We get simulated instrument values as usual via the get_instrument_values() method.
Step9: Visualization of Simulated Stochastic Short Rate | Python Code:
from dx import *
me = market_environment(name='me', pricing_date=dt.datetime(2015, 1, 1))
me.add_constant('initial_value', 0.01)
me.add_constant('volatility', 0.1)
me.add_constant('kappa', 2.0)
me.add_constant('theta', 0.05)
me.add_constant('paths', 1000)
me.add_constant('frequency', 'M')
me.add_constant('starting_date', me.pricing_date)
me.add_constant('final_date', dt.datetime(2015, 12, 31))
me.add_curve('discount_curve', 0.0) # dummy
me.add_constant('currency', 0.0) # dummy
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Stochastic Short Rates
This brief section illustrates the use of stochastic short rate models for simulation and (risk-neutral) discounting. The class used is called stochastic_short_rate.
The Modelling
First, the market environment. As a stochastic short rate model the square_root_diffusion class is (currently) available. We therefore need to define the respective parameters for this class in the market environment.
End of explanation
ssr = stochastic_short_rate('sr', me)
Explanation: Second, the instantiation of the class.
End of explanation
time_list = [dt.datetime(2015, 1, 1),
dt.datetime(2015, 4, 1),
dt.datetime(2015, 6, 15),
dt.datetime(2015, 10, 21)]
Explanation: The following is an example list object containing datetime objects.
End of explanation
ssr.get_forward_rates(time_list, 10)
Explanation: The call of the method get_forward_rates() yields the above time_list object and the simulated forward rates. In this case, 10 simulations.
End of explanation
ssr.get_discount_factors(time_list, 10)
Explanation: Accordingly, the call of the get_discount_factors() method yields simulated zero-coupon bond prices for the time grid.
End of explanation
me.add_constant('initial_value', 36.)
me.add_constant('volatility', 0.2)
# time horizon for the simulation
me.add_constant('currency', 'EUR')
me.add_constant('frequency', 'M')
# monthly frequency; parameter according to pandas convention
me.add_constant('paths', 10)
# number of paths for simulation
Explanation: Stochastic Drifts
Let us use the stochastic short rate model to simulate a geometric Brownian motion with a stochastic short rate. Define the market environment as follows:
End of explanation
me.add_curve('discount_curve', ssr)
Explanation: Then add the stochastic_short_rate object as discount curve.
End of explanation
gbm = geometric_brownian_motion('gbm', me)
Explanation: Finally, instantiate the geometric_brownian_motion object.
End of explanation
gbm.get_instrument_values()
Explanation: We get simulated instrument values as usual via the get_instrument_values() method.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
# short rate paths
plt.figure(figsize=(10, 6))
plt.plot(ssr.process.instrument_values[:, :10]);
Explanation: Visualization of Simulated Stochastic Short Rate
End of explanation |
15,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Sampling
Copyright 2016 Allen Downey
License
Step1: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters
Step2: Here's what that distribution looks like
Step3: make_sample draws a random sample from this distribution. The result is a NumPy array.
Step4: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
Step5: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean
Step6: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
Step7: The next line runs the simulation 1000 times and puts the results in
sample_means
Step8: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
Step9: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
Step10: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
Step11: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results
Step14: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
Step15: Here's a test run with n=100
Step16: Now we can use interact to run plot_sampling_distribution with different values of n. Note
Step17: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1
Step24: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
Step25: The following function instantiates a Resampler and runs it.
Step26: Here's a test run with n=100
Step27: Now we can use interact_func in an interaction
Step30: Exercise 2
Step31: Test your code using the cell below
Step32: When your StdResampler is working, you should be able to interact with it
Step33: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data)
Step34: And here's the men's distribution
Step35: I'll simulate a sample of 100 men and 100 women
Step36: The difference in means should be about 17 kg, but will vary from one random sample to the next
Step38: Here's the function that computes Cohen's effect size again
Step39: The difference in weight between men and women is about 1 standard deviation
Step40: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
Step41: Now we can instantiate a CohenResampler and plot the sampling distribution. | Python Code:
%matplotlib inline
import numpy
import scipy.stats
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(18)
Explanation: Random Sampling
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
weight = scipy.stats.lognorm(0.23, 0, 70.8)
weight.mean(), weight.std()
Explanation: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:
End of explanation
xs = numpy.linspace(20, 160, 100)
ys = weight.pdf(xs)
plt.plot(xs, ys, linewidth=4, color='C0')
plt.xlabel('weight (kg)')
plt.ylabel('PDF');
Explanation: Here's what that distribution looks like:
End of explanation
def make_sample(n=100):
sample = weight.rvs(n)
return sample
Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array.
End of explanation
sample = make_sample(n=100)
sample.mean(), sample.std()
Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
End of explanation
def sample_stat(sample):
return sample.mean()
Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean:
End of explanation
def compute_sampling_distribution(n=100, iters=1000):
stats = [sample_stat(make_sample(n)) for i in range(iters)]
return numpy.array(stats)
Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
End of explanation
sample_means = compute_sampling_distribution(n=100, iters=1000)
Explanation: The next line runs the simulation 1000 times and puts the results in
sample_means:
End of explanation
plt.hist(sample_means, color='C1', alpha=0.5)
plt.xlabel('sample mean (n=100)')
plt.ylabel('count');
Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
End of explanation
sample_means.mean()
Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
End of explanation
std_err = sample_means.std()
std_err
Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
End of explanation
conf_int = numpy.percentile(sample_means, [5, 95])
conf_int
Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results:
End of explanation
def plot_sampling_distribution(n, xlim=None):
Plot the sampling distribution.
n: sample size
xlim: [xmin, xmax] range for the x axis
sample_stats = compute_sampling_distribution(n, iters=1000)
se = numpy.std(sample_stats)
ci = numpy.percentile(sample_stats, [5, 95])
plt.hist(sample_stats, color='C1', alpha=0.5)
plt.xlabel('sample statistic')
plt.xlim(xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
plt.show()
def text(x, y, s):
Plot a string at a given location in axis coordinates.
x: coordinate
y: coordinate
s: string
ax = plt.gca()
plt.text(x, y, s,
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
End of explanation
plot_sampling_distribution(100)
Explanation: Here's a test run with n=100:
End of explanation
def sample_stat(sample):
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([55, 95]));
Explanation: Now we can use interact to run plot_sampling_distribution with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
End of explanation
def sample_stat(sample):
# TODO: replace the following line with another sample statistic
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sampling_distribution, n=slider, xlim=fixed([0, 100]));
Explanation: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1: Fill in sample_stat below with any of these statistics:
Standard deviation of the sample.
Coefficient of variation, which is the sample standard deviation divided by the sample mean.
Min or Max
Median (which is the 50th percentile)
10th or 90th percentile.
Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
NumPy array methods you might find useful include std, min, max, and percentile.
Depending on the results, you might want to adjust xlim.
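As one possible way to fill in the TODO above, here is a minimal sketch using the IQR (any of the listed statistics would slot in the same way):
def sample_stat(sample):
    # interquartile range: 75th percentile minus 25th percentile
    q75, q25 = numpy.percentile(sample, [75, 25])
    return q75 - q25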
End of explanation
class Resampler(object):
Represents a framework for computing sampling distributions.
def __init__(self, sample, xlim=None):
Stores the actual sample.
self.sample = sample
self.n = len(sample)
self.xlim = xlim
def resample(self):
Generates a new sample by choosing from the original
sample with replacement.
new_sample = numpy.random.choice(self.sample, self.n, replace=True)
return new_sample
def sample_stat(self, sample):
Computes a sample statistic using the original sample or a
simulated sample.
return sample.mean()
def compute_sampling_distribution(self, iters=1000):
Simulates many experiments and collects the resulting sample
statistics.
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sampling_distribution(self):
Plots the sampling distribution.
sample_stats = self.compute_sampling_distribution()
se = sample_stats.std()
ci = numpy.percentile(sample_stats, [5, 95])
plt.hist(sample_stats, color='C1', alpha=0.5)
plt.xlabel('sample statistic')
plt.xlim(self.xlim)
text(0.03, 0.95, 'CI [%0.2f %0.2f]' % tuple(ci))
text(0.03, 0.85, 'SE %0.2f' % se)
plt.show()
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't be doing statistical inference in the first place!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
End of explanation
def interact_func(n, xlim):
sample = weight.rvs(n)
resampler = Resampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
Explanation: The following function instantiates a Resampler and runs it.
End of explanation
interact_func(n=100, xlim=[50, 100])
Explanation: Here's a test run with n=100
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func, n=slider, xlim=fixed([50, 100]));
Explanation: Now we can use interact_func in an interaction:
End of explanation
# Solution goes here
class StdResampler(Resampler):
Computes the sampling distribution of the standard deviation.
def sample_stat(self, sample):
Computes a sample statistic using the original sample or a
simulated sample.
return sample.std()
Explanation: Exercise 2: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
End of explanation
def interact_func2(n, xlim):
sample = weight.rvs(n)
resampler = StdResampler(sample, xlim=xlim)
resampler.plot_sampling_distribution()
interact_func2(n=100, xlim=[0, 100])
Explanation: Test your code using the cell below:
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(interact_func2, n=slider, xlim=fixed([0, 100]));
Explanation: When your StdResampler is working, you should be able to interact with it:
End of explanation
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
End of explanation
male_weight = scipy.stats.lognorm(0.20, 0, 87.3)
male_weight.mean(), male_weight.std()
Explanation: And here's the men's distribution:
End of explanation
female_sample = female_weight.rvs(100)
male_sample = male_weight.rvs(100)
Explanation: I'll simulate a sample of 100 men and 100 women:
End of explanation
male_sample.mean() - female_sample.mean()
Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next:
End of explanation
def CohenEffectSize(group1, group2):
Compute Cohen's d.
group1: Series or NumPy array
group2: Series or NumPy array
returns: float
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Here's the function that computes Cohen's effect size again:
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: The difference in weight between men and women is about 1 standard deviation:
End of explanation
class CohenResampler(Resampler):
def __init__(self, group1, group2, xlim=None):
self.group1 = group1
self.group2 = group2
self.xlim = xlim
def resample(self):
n, m = len(self.group1), len(self.group2)
group1 = numpy.random.choice(self.group1, n, replace=True)
group2 = numpy.random.choice(self.group2, m, replace=True)
return group1, group2
def sample_stat(self, groups):
group1, group2 = groups
return CohenEffectSize(group1, group2)
Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
End of explanation
resampler = CohenResampler(male_sample, female_sample)
resampler.plot_sampling_distribution()
Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution.
End of explanation |
15,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :param is_training: bool or Tensor
        Indicates whether we are currently training, so batch normalization knows
        whether to use batch statistics or its population statistics.
    :returns Tensor
        A new fully connected layer
    """
    layer = tf.layers.dense(prev_layer, num_units, activation=None)
    layer = tf.layers.batch_normalization(layer, training=is_training)
    layer = tf.nn.relu(layer)
    return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
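For orientation, the "few other changes" usually boil down to the two snippets below, shown here in isolation (they are wired into the full train cell later in this notebook): a boolean placeholder that gets fed at run time, and a control dependency that forces the batch normalization update ops to run with each training step.

# A flag that is fed as True while training and False during inference
is_training = tf.placeholder(tf.bool)

# tf.layers.batch_normalization adds its moving-average updates to the
# UPDATE_OPS collection; make the optimizer depend on them so the
# population statistics actually get updated during training.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)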
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :param is_training: bool or Tensor
        Indicates whether we are currently training, so batch normalization knows
        whether to use batch statistics or its population statistics.
    :returns Tensor
        A new convolutional layer
    """
    strides = 2 if layer_depth % 3 == 0 else 1
    conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
    # Pass training as a keyword argument; the second positional argument of
    # tf.layers.batch_normalization is axis, not training.
    conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
    conv_layer = tf.nn.relu(conv_layer)
    return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder to indicate whether or not we are training the model
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
    layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
    return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
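If you want a rough picture of what the low-level approach involves before attempting it, here is one possible shape for the modified fully_connected. This is only a sketch, not necessarily the solution notebook's exact code: it drops the dense layer's bias because batch normalization's beta offset replaces it, keeps running population estimates with a simple exponential moving average (the 0.99 decay and 1e-3 epsilon are conventional choices, not values taken from this notebook), and uses tf.cond to switch between batch and population statistics.

def fully_connected(prev_layer, num_units, is_training):
    # Linear transformation only; the ReLU comes after normalization.
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)

    # Trainable scale (gamma) and offset (beta), plus non-trainable
    # population statistics used at inference time.
    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))
    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
    epsilon = 1e-3

    def batch_norm_training():
        # Mean and variance of the current mini-batch, one value per unit.
        batch_mean, batch_variance = tf.nn.moments(layer, [0])
        # Fold the batch statistics into the running population estimates.
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(layer)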
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
    strides = 2 if layer_depth % 3 == 0 else 1

    in_channels = prev_layer.get_shape().as_list()[3]
    out_channels = layer_depth*4

    weights = tf.Variable(
        tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
    bias = tf.Variable(tf.zeros(out_channels))

    conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    conv_layer = tf.nn.relu(conv_layer)
    return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
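The difference the note refers to is mainly the axes: for a convolutional layer the statistics are computed per output channel, across the batch and both spatial dimensions, and gamma, beta, and the population variables are sized by out_channels. The fragment below sketches just that portion, assuming conv_layer also gains an is_training parameter and drops the explicit bias (beta takes its place); it is illustrative, not the solution notebook's exact code.

# After tf.nn.conv2d(...), in place of the bias_add and ReLU:
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3

def batch_norm_training():
    # Per-channel statistics over the batch and both spatial dimensions.
    batch_mean, batch_variance = tf.nn.moments(conv_layer, [0, 1, 2], keep_dims=False)
    decay = 0.99
    train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
    train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
    with tf.control_dependencies([train_mean, train_variance]):
        return tf.nn.batch_normalization(conv_layer, batch_mean, batch_variance, beta, gamma, epsilon)

def batch_norm_inference():
    return tf.nn.batch_normalization(conv_layer, pop_mean, pop_variance, beta, gamma, epsilon)

conv_layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
conv_layer = tf.nn.relu(conv_layer)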
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
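For this version the changes to train are smaller than in the tf.layers example: because the layer functions would update the population statistics themselves (for instance with tf.assign inside tf.cond, as sketched earlier), there is no UPDATE_OPS control dependency to add. You mainly need a boolean placeholder, passed into the layer functions and fed on every session run, mirroring the earlier tf.layers train cell. The relevant lines would look something like this fragment:

# In the graph: the flag the layer functions use to choose batch vs. population statistics
is_training = tf.placeholder(tf.bool)

# The layer calls then take the extra argument, for example:
layer = conv_layer(layer, layer_i, is_training)
layer = fully_connected(layer, 100, is_training)

# And every session run feeds the flag, True while training a batch:
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# ...and False whenever loss or accuracy is evaluated:
acc = sess.run(accuracy, {inputs: mnist.test.images,
                          labels: mnist.test.labels,
                          is_training: False})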